Drosophila Nrf2/Keap1 Mediated Redox Signaling Supports Synaptic Function and Longevity and Impacts on Circadian Activity

Many neurodegenerative conditions and age-related neuropathologies are associated with increased levels of reactive oxygen species (ROS). The cap 'n' collar (CncC) family of transcription factors is one of the major cellular systems that fight oxidative insults, becoming activated in response to oxidative stress. This transcription factor signaling is conserved from metazoans to humans and has major developmental and disease-associated relevance. An important mammalian member of the CncC family is nuclear factor erythroid 2-related factor 2 (Nrf2), which has been studied in numerous cellular systems and represents an important target for drug discovery in different diseases. CncC is negatively regulated by Kelch-like ECH-associated protein 1 (Keap1), and this interaction provides the basis for homeostatic control of cellular antioxidant defense. We have utilized the Drosophila model system to investigate the roles of CncC signaling in longevity, neuronal function and circadian rhythm. Furthermore, we assessed the effects of CncC function on larvae and adult flies following exposure to stress. Our data reveal that constitutive overexpression of CncC modifies synaptic mechanisms that positively impact neuronal function, and suppression of the CncC inhibitor, Keap1, produces beneficial phenotypes in synaptic function and longevity. Moreover, supplementation of antioxidants mimics the effects of augmenting CncC signaling. Under stress conditions, lack of CncC signaling worsens survival rates and neuronal function, whilst silencing Keap1 protects against stress-induced neuronal decline. Interestingly, overexpression and RNAi-mediated downregulation of CncC have differential effects on sleep patterns, possibly via interactions with redox-sensitive circadian cycles. Thus, our data illustrate the important regulatory potential of CncC signaling in neuronal function and synaptic release, affecting multiple aspects of the nervous system.

INTRODUCTION

One of the major cellular defense mechanisms against oxidative stress is mediated by the mammalian nuclear factor erythroid 2-related factor 2 (Nrf2) and Kelch-like ECH-associated protein 1 (Keap1) signaling cascade. The Nrf2/Keap1 pathway regulates gene expression of many cytoprotective and detoxifying enzymes, thus playing a pivotal role in maintaining cellular redox homeostasis. Nrf2 belongs to the cap 'n' collar (CncC) subfamily of basic region leucine zipper transcription factors, and its regulation and importance in cellular defense mechanisms have been studied in numerous physiological and pathological conditions. Nrf2 plays a key role in neuronal resistance to oxidative stress mediated by reactive oxygen species (ROS) and glutamate-induced excitotoxicity (He et al., 2014). Balancing oxidative stress by up-regulation of Nrf2 antioxidant defense has been demonstrated to be effective in neurodegenerative disease treatment. Aging is one of the main risk factors for neurodegenerative conditions, but it is also closely associated with a loss of Nrf2 activity. Nrf2 and the expression of its downstream target genes are decreased in the substantia nigra of aged rats, with Nrf2 overexpression exerting a protective response to neurodegeneration (Habas et al., 2013), including in models of amyotrophic lateral sclerosis (ALS), stroke, Alzheimer's disease (AD) and Parkinson's disease (PD).
Indeed, Nrf2 activation has been shown to alleviate neurodegenerative symptoms in a Drosophila model of PD (Barone et al., 2011). Furthermore, Nrf2-mediated neuroprotection is primarily conferred by astroglia both in vitro and in vivo (Liddell, 2017), and in AD patients Nrf2 expression is decreased in both hippocampal neurons and astrocytes (Ramsey et al., 2007), indicating a strong involvement of Nrf2 signaling in neurodegeneration and neuronal function. Previous work has shown that activation of the Nrf2/Keap1 transcriptional pathway can protect hippocampal neurons from Aβ-induced neurodegeneration in an AD mouse model (Lipton et al., 2016) and rescue neuronal deficiencies in various models of PD (Johnson and Johnson, 2015), confirming a protective role in neuronal function with potential for therapeutic treatments. However, the exact targets and mechanisms of the antioxidant activities of Nrf2/Keap1 activation in the modulation of neuronal function are not fully understood. One important characteristic of neurodegenerative diseases and aging is dysregulation of sleep patterns, which has been reported across species ranging from flies to humans (De Lazzari et al., 2018; Vanderheyden et al., 2018). Cumulative evidence demonstrates a close connection between cellular circadian rhythm and redox systems. The circadian clock is involved in the regulation of ROS levels both in vivo and in vitro (Desvergne et al., 2014; Early et al., 2018). In mammals, the circadian clock orchestrates the activities of the antioxidant defense and oxidative stress response systems by mediating Nrf2 signaling. Two proteins involved in circadian rhythm, circadian locomotor output cycles kaput (Clock) and brain and muscle Arnt-like protein-1 (Bmal1), can positively regulate Nrf2 transcription, which in turn drives rhythmic oscillations of antioxidant genes (Xu et al., 2012; Pekovic-Vaughan et al., 2014). Conversely, the cellular redox state is critically important for the regulation of Bmal1 and Clock gene transcriptional activities (Ranieri et al., 2015). Because the mechanism underlying the Nrf2 protective response remains obscure, and given the limited understanding of how Nrf2/Keap1 signaling affects neuronal function, we utilized the Drosophila model in this study to investigate its effects on aging, synapse function, and circadian activity. Drosophila Keap1 acts as a negative regulator of CncC (Itoh et al., 1999; Sykiotis and Bohmann, 2008; Pitoniak and Bohmann, 2015), and its silencing by RNAi leads to endogenous activation of CncC signaling, with flies showing upregulation of the classical antioxidant response element cascade (Sykiotis and Bohmann, 2008), which increases their stress resistance. In particular, CncC activation results in enhanced transcription of the antioxidant and detoxifying enzyme glutathione S-transferase, encoded by the Drosophila gstD1 gene (Sawicki et al., 2003; Sykiotis and Bohmann, 2008), which acts in a neuroprotective manner. We manipulated neuronal antioxidant response capacity by either overexpressing CncC or reducing the expression of CncC or Keap1 protein by RNAi. We then investigated longevity, activity, and circadian behavior in adult flies, in addition to synaptic function at the larval neuromuscular junction (NMJ). The data show that constitutive overexpression of CncC has important impacts on synaptic release and survival, with silencing of the CncC inhibitor, Keap1, inducing beneficial effects on survival and synaptic function.
Importantly, application of the antioxidant compounds dithiothreitol (DTT) and glutathione (GSH) produced similar effects to those mediated by CncC overexpression or Keap1 silencing, suggesting that an antioxidant environment boosts synaptic function in a redox-specific manner.

Fly Husbandry

Flies were raised on standard maize media at 25°C on a 12-h LD cycle. The elav-GAL4[C155] driver was obtained from the Bloomington Stock Center (Indiana, USA). The UAS-RNAi lines [Keap1 (CG3962) and CncC (CG43286)] were purchased from the Vienna Drosophila Resource Centre (VDRC). The UAS-CncC line was kindly provided by Dirk Bohmann, University of Rochester, USA (Sykiotis and Bohmann, 2008; Pitoniak and Bohmann, 2015). The UAS/GAL4 bipartite expression system was utilized to drive pan-neuronal expression. The elav-GAL4 driver (female flies) and the UAS responder lines (male flies) were crossed to obtain offspring expressing the genes of interest. As a control for the RNAi strains, a line carrying an empty RNAi vector inserted in the AttP40 site was used and crossed to the elav-GAL4 driver (referred to as RNAi Ctrl). For CncC overexpressing (OE) lines, experimental lines were compared to controls obtained by crossing the GAL4 (elav Ctrl) and UAS lines to w1118. The homozygous w1118 line was used as a control in the pharmacology experiments.

Electrophysiology

Miniature excitatory junctional currents (mEJCs) were recorded in the presence of 0.5 µM tetrodotoxin (Tocris, UK). All synaptic responses were recorded from muscles with input resistances ≥4 MΩ, holding currents <4 nA at −60 mV and resting potentials more negative than −60 mV, at 25°C, as differences in recording temperature cause changes in glutamate receptor kinetics and amplitudes (Postlethwaite et al., 2007). Holding potentials were −60 mV. The extracellular HL-3 contained (in mM): 70 NaCl, 5 KCl, 20 MgCl2, 10 NaHCO3, 115 sucrose, 5 trehalose, 5 HEPES, and 1.5 CaCl2. Average single evoked EJC (eEJC) amplitudes (stimulus: 0.1 ms, 1-5 V) were based on the mean peak eEJC amplitude in response to 10 presynaptic stimuli (recorded at 0.2 Hz). Nerve stimulation was performed with an isolated stimulator (DS2A, Digitimer). All data were digitized at 10 kHz and, for miniature recordings, 200-s recordings were analyzed to obtain mean mEJC amplitudes. The quantal content (QC) was estimated for each recording by calculating the ratio of eEJC amplitude/average mEJC amplitude, followed by averaging recordings across all NMJs for a given genotype. mEJC and eEJC recordings were off-line low-pass filtered at 500 Hz and 1 kHz, respectively. Materials were purchased from Sigma-Aldrich (UK).

Cumulative Postsynaptic Current Analysis

The apparent size of the readily releasable pool (RRP) was probed by the method of cumulative eEJC amplitudes (Schneggenburger et al., 1999). Muscles were clamped at −60 mV and eEJC amplitudes during a stimulus train [50 Hz, 500 ms (of a 1-s train)] were calculated as the difference between peak and baseline before stimulus onset of a given eEJC. Receptor desensitization was not blocked, as it did not affect eEJC amplitudes: a comparison of the decays of the first and the last eEJC within a train did not reveal any significant difference in decay kinetics. The number of release-ready vesicles (N) was obtained by back-extrapolating a line fit to the linear phase of the 500-ms cumulative eEJC plot (the last 200 ms of the train) to time zero and dividing the cumulative eEJC amplitude at time zero by the mean mEJC amplitude recorded in the same cell.
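The back-extrapolation estimate described above is easy to make concrete. Below is a minimal numerical sketch (our own illustration with synthetic amplitudes, not the authors' analysis code), assuming peak-to-baseline eEJC amplitudes have already been extracted from a 50-Hz train:

```python
import numpy as np

# Synthetic 50 Hz train (25 stimuli over 500 ms): eEJC amplitudes depress from
# ~120 nA toward a steady state of ~40 nA (all values illustrative only).
n_stim = 25
eejc_amps = 40.0 + 80.0 * np.exp(-np.arange(n_stim) / 4.0)   # nA
mean_mejc = 0.6                                              # nA, illustrative

t = np.arange(n_stim) * 0.02            # stimulus times (s), 20 ms apart
cum_eejc = np.cumsum(eejc_amps)         # cumulative eEJC amplitude (nA)

# Fit a line to the linear phase (the last 200 ms of the train) and
# back-extrapolate to time zero: the intercept estimates the cumulative
# amplitude contributed by the readily releasable pool.
steady = t >= 0.3
slope, intercept = np.polyfit(t[steady], cum_eejc[steady], 1)

# Dividing by the mean mEJC amplitude recorded in the same cell yields the
# number of release-ready vesicles N (Schneggenburger et al., 1999).
N = intercept / mean_mejc
print(f"intercept ≈ {intercept:.0f} nA  ->  N ≈ {N:.0f} vesicles")
```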
To calculate the QC in the train, we used mean mEJC amplitudes measured before the train.

Heat Shock Protocol

For heat shock survival experiments, methods were adapted from Ishida et al. (2012). Briefly, male adult flies (aged 3-5 days) were transferred to vials containing a moistened filter pad to prevent dehydration. Vials were placed in a 37°C water bath and live flies were counted every 30 min. In larval experiments, heat shock was induced using previously described methods (Robinson et al., 2017). Briefly, age-matched third instar larvae were incubated at 37°C for 1 h and used for electrophysiology 24 h later. These experiments were repeated a minimum of three times for each genotype.

Circadian

Circadian activity and sleep analyses were performed as described previously (Ishida et al., 2012). Briefly, adult male flies (aged 3-5 days) were individually transferred into glass tubes containing food. Single tubes were then loaded into the Drosophila Activity Monitor system (TriKinetics). Following a 2-day period of entrainment in incubators kept on a 12:12 light/dark regime at 25°C, locomotor activity was recorded for five consecutive days. Sleep behavior was analyzed using pySolo software (Gilestro and Cirelli, 2009).

Survival

Groups of 10 newly emerged adult male flies were transferred to new vials containing food and deaths were scored daily. Flies were transferred to new food three times per week and otherwise left undisturbed. Cumulative survival curves are presented and compared using the Log-rank (Mantel-Cox) test.

Geotaxis

Rapid iterative negative geotaxis behavior was assessed using methods outlined previously (Rhodenizer et al., 2008; Nichols et al., 2012). Briefly, age-matched adult male flies were collected and groups of 10 were transferred to a clear empty vial without anesthesia at weekly intervals. Tubes were transferred to a quiet room and flies were acclimated for 15 min. Tubes were tapped three times on a bench and images were taken after 3 s using a digital camera. A minimum of five trials was conducted per session with an inter-trial interval of 1 min. Average height climbed per vial was calculated from images using ImageJ software. These experiments were repeated a minimum of three times per genotype.

Crawling Activity

Age-matched third instar male larvae (∼100-120 h) were selected, washed and placed onto a moist, food-free surface at a constant temperature of 20°C. Crawling activities were imaged over 10 min using AnyMaze software v4.98 (Stoelting Co., Wood Dale, IL, USA) and data were analyzed off-line as reported previously (Robinson et al., 2014).

Statistics

Statistical analysis was performed with Prism 7 (GraphPad Software Inc., San Diego, CA, USA). Statistical tests were carried out using one-way ANOVA, where applicable with a post hoc test (Tukey's multiple comparisons), or unpaired Student's t-test [for comparisons between elav × CncC and elav × w1118 (elav Ctrl)]. Data in figures are expressed as mean ± SEM, where n is the number of NMJs, flies or larvae as indicated, and significance is shown as *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.

RESULTS

We used this well-characterized expression system and first assessed the effects of down-regulating either CncC or Keap1 protein expression on Drosophila life span, as it has long been postulated that oxidative stress contributes to age-related neuronal dysfunction, known as the free radical theory of aging.
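The Log-rank (Mantel-Cox) comparison used for all survival data can be reproduced outside Prism. A minimal sketch using the Python lifelines package (our choice for illustration, not the authors' tool), on made-up death times for two hypothetical cohorts:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical death times (days) for two cohorts of 100 flies each; every
# death is observed (no censoring), as in a completed life-span trial.
ctrl_days = np.clip(rng.normal(55, 10, size=100), 1, None).round()
keap1_days = np.clip(rng.normal(62, 10, size=100), 1, None).round()

# Log-rank (Mantel-Cox) comparison of the two cumulative survival curves.
result = logrank_test(ctrl_days, keap1_days,
                      event_observed_A=np.ones(100),
                      event_observed_B=np.ones(100))
print(f"log-rank statistic = {result.test_statistic:.2f}, "
      f"p = {result.p_value:.3g}")
```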
FIGURE 1 | Data were compared using the Log-rank (Mantel-Cox) test. (C) Negative geotaxis performance declines with age in control flies (RNAi Ctrl at 21 days, dark gray); however, CncC RNAi expression induces a strong reduction in climbing activity at 7 days (black) with no further effects at older ages. RNAi silencing of Keap1 augments activity decline relative to CncC silencing at seven (black) and 14 (light gray) days (n, number of flies, indicated within bars). Data denote mean ± SEM for all data comparisons in (C). One-way ANOVA with post hoc Tukey-Kramer was used for comparisons, with **p < 0.01, ***p < 0.001, ****p < 0.0001.

To further characterize the effects on survival and geotaxis-driven activity and their connection to neuronal health, we assessed neuronal function in more detail in electrophysiological experiments at the larval NMJ. This well-studied synapse allows direct assessment of neuronal health and synaptic function by recording single action potential-evoked EJCs (eEJCs), spontaneous release events (mEJCs), and total vesicular pool release. Oxidative stress and aging have been related to compromised neuronal function and diminished synaptic release; however, to our knowledge, the direct effects of CncC/Keap1 signaling on synaptic release have not yet been documented (Besson et al., 2000; Fremeau et al., 2004; Escartin et al., 2011; Wu and Cooper, 2012; Cirillo et al., 2015; Ivannikov and Van Remmen, 2015).

FIGURE 3 | Overexpression of CncC and silencing of Keap1 enhances the quantal size, illustrated in amplitude frequency plots for miniature excitatory junctional current (mEJC) amplitudes (left), cumulative amplitude frequency plots (middle) and mean bar graphs (right). (C) Exposure to the antioxidants dithiothreitol (DTT) and glutathione (GSH) induced a strong increase in quantal size at NMJs of w1118 larvae, as illustrated in frequency plots for mEJC amplitudes (left), cumulative amplitude frequency plots (middle) and mean bar graphs (right). One-way ANOVA with post hoc Tukey-Kramer was used for comparisons, with **p < 0.01, ***p < 0.001, ****p < 0.0001 [n, number of NMJs (from at least three larvae), indicated within bars].

To exclude the possibility that genetic manipulation caused developmental changes which could interfere with our data interpretation, we again assessed the effects of pharmacological manipulation of redox signaling on release parameters of the synapse. Quantification of NMJ evoked responses and QC revealed similar changes using antioxidant supplementation as following genetic manipulations. DTT application for 45 min induced mild effects, and GSH application of the same duration led to strong effects, on the three parameters [eEJC amplitude: w1118: 118 ± 5 nA, DTT: 140 ± 7 nA, GSH: 113 ± 4 nA; QC: w1118: 213 ± 25, DTT: 167 ± 12, GSH: 65 ± 3; cumulative QC: w1118: 441 ± 40, DTT: 247 ± 49, GSH: 63 ± 35; ANOVA, Figures 3G-K] compared to controls (w1118), similar to the changes observed following genetically-induced increases in antioxidant potential, suggesting an acute mechanism mediated by a reduction in basal oxidative stress levels. Importantly, CncC KD was without effect, indicating that under these unchallenged conditions the deficiency in potential antioxidant capacity did not alter basal synaptic transmission. One important regulatory mechanism with the ability to modulate synaptic release is the control of synaptic release probabilities. The initial release probability can be adjusted in response to various mechanisms.
We assessed potential effects on the initial vesicular release probability (pvr) by measuring paired-pulse ratios (PPR) at a 20-ms inter-spike interval. Previously, we found that nitrergic regulation of synaptic release at the NMJ is mediated via a reduction of pvr (Robinson et al., 2014), manifested as an increased PPR. Paired-pulse experiments at larval NMJs revealed a significant increase in PPR following genetic alterations of Keap1 expression, suggesting that pvr is reduced at lower oxidative stress levels [RNAi Ctrl: 0.81 ± 0.05, CncC-RNAi: 0.92 ± 0.02, Keap1-RNAi: 0.99 ± 0.02 (ANOVA); CncC OE: 1.00 ± 0.05, elav Ctrl: 0.88 ± 0.11 (Student's t-test), Figure 3L]. However, following pharmacological modulation of redox signaling, we detected an increase in PPR only after GSH application [w1118: 0.88 ± 0.04, DTT: 0.92 ± 0.04, GSH: 1.12 ± 0.05, ANOVA, Figure 3L]. Importantly, changes in the frequency (f) of spontaneous release indicate direct effects on vesicle fusion mediated by the soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) and SNARE-binding proteins. To investigate the modulation of these mechanisms, we measured mEJC frequencies. The results did not show any differences in f between larvae [RNAi Ctrl: 2.6 ± 0.4 s−1, CncC-RNAi: 3.2 ± 0.3 s−1, Keap1-RNAi: 3.4 ± 0.5 s−1 (ANOVA); CncC OE: 1.4 ± 0.2 s−1, elav Ctrl: 3.14 ± 0.80 s−1 (Student's t-test); w1118: 2.0 ± 0.2 s−1, GSH: 1.2 ± 0.2 s−1, DTT: 3.4 ± 0.3 s−1 (ANOVA); p > 0.05], suggesting that vesicle fusion mechanisms per se are not modulated by changes in redox level following genetic or pharmacological manipulations.

FIGURE 4 | Silencing of Keap1 protects against stress-induced synaptic decline. Life spans were analyzed for the indicated lines and survivorship was plotted over time. (A) Survival curves represent an average of three life-span trials (n = 80-106 flies). Data were compared using the Log-rank (Mantel-Cox) test (p < 0.0001). Synaptic function was analyzed showing eEJC amplitudes (B), QC (C), cumulative QC (D) and mEJC amplitudes (E) under control [no heat shock (no HS), gray] and heat shock challenged (24 h HS) conditions. Note that the bars in gray are repeats from Figures 2, 3, and comparisons were made for each genotype before and after HS using the unpaired Student's t-test with *p < 0.05, **p < 0.01 [n, number of NMJs (from at least three larvae), indicated within bars].

Together, our data show that regulation of the redox environment can alter synaptic function, with spontaneous release events being positively modulated in a low oxidative stress environment. We next wondered if the observed regulation of synaptic release could translate into changes in larval activity. To test larval activity, we assessed crawling distances of the different genotypes over a period of 10 min (Robinson et al., 2014). Since motoneuronal transmission during crawling activity is predominantly related to single motoneuronal action potential-induced synaptic release, which corresponds to a single eEJC event, we would not expect major effects on larval activity. Indeed, neither activation nor suppression of CncC signaling affected larval crawling distances relative to controls [RNAi Ctrl: 0.5 ± 0.03 cm, CncC-RNAi: 0.4 ± 0.02 cm, Keap1-RNAi: 0.5 ± 0.02 cm, CncC OE: 0.4 ± 0.02 cm, elav × w1118: 0.5 ± 0.02 cm, w1118 × CncC: 0.3 ± 0.02 cm, ANOVA, Figure 3M], although the data showed subtle differences between CncC-RNAi and Keap1-RNAi expressing larvae.
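For concreteness, the paired-pulse and spontaneous-frequency measures used above reduce to simple computations on the recorded traces; a toy sketch with hypothetical values (our own helpers, not analysis code from the study):

```python
import numpy as np

def paired_pulse_ratio(a1_nA: float, a2_nA: float) -> float:
    """PPR = A2/A1 for two eEJCs 20 ms apart; a higher PPR indicates
    a lower initial vesicular release probability (pvr)."""
    return a2_nA / a1_nA

def mejc_frequency(event_times_s: np.ndarray, duration_s: float) -> float:
    """Spontaneous release frequency f (in s^-1) from detected mEJC times."""
    return len(event_times_s) / duration_s

# Illustrative numbers in the range of the reported means.
print(round(paired_pulse_ratio(120.0, 97.0), 2))        # ~0.81, control-like
print(round(paired_pulse_ratio(120.0, 119.0), 2))       # ~0.99, Keap1-RNAi-like
print(mejc_frequency(np.linspace(0, 200, 520), 200.0))  # ~2.6 s^-1
```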
Increased oxidative stress levels are caused by altered activities of thiol redox circuits, which can result in impaired cell signaling and dysfunctional redox control (Finkel, 2011). This is linked to several pathological processes, including dysfunction of proteostasis and the accumulation of misfolded proteins in the lumen of the endoplasmic reticulum (ER), resulting in ER stress (Braakman and Hebert, 2013). As heat shock is involved in triggering ER stress and ROS signaling, we next wanted to test whether altered expression of CncC and Keap1 would affect Drosophila longevity and synapse function following heat shock challenge. We found that under continual heat stress-challenged conditions, KD of CncC drastically reduced longevity [median life span (in hours): RNAi Ctrl: 4.5, CncC-RNAi: 3, p < 0.0001, Log-rank (Mantel-Cox) test], which can be explained by a lack of neuroprotective Nrf2 signaling (Ahmed et al., 2017), while, expectedly, Keap1 silencing was protective and led to increased survival compared to CncC KD [median life span (in hours): Keap1-RNAi: 4, p < 0.0001, Log-rank (Mantel-Cox) test, Figure 4A]. The neuroprotection mediated by CncC (Nrf2) activation was further assessed in heat shock-challenged larvae, in which we characterized synaptic responses at the NMJ. The most striking changes at the level of synapse physiology occurred in CncC KD larvae 24 h after a single 1-h heat shock challenge, in which eEJC amplitudes declined drastically. This physiological response was not due to changes in quantal size but rather to a reduced quantal content [eEJC amplitude: RNAi Ctrl: 112 ± 9 nA, CncC-RNAi: 74 ± 4 nA, Keap1-RNAi: 103 ± 9 nA; QC: RNAi Ctrl: 200 ± 15, CncC-RNAi: 186 ± 33, Keap1-RNAi: 142 ± 15; mEJC amplitude: RNAi Ctrl: 0.56 ± 0.04 nA, CncC-RNAi: 0.54 ± 0.09 nA, Keap1-RNAi: 0.66 ± 0.03 nA, Figures 4B,C,E]. Importantly, control larvae also showed a reduction in QC following heat shock challenge, with Keap1 KD, however, preventing further neuronal deterioration upon this challenge. These changes were consolidated by measuring vesicle pool size as cumulative release following synaptic stimulation in trains at 50 Hz [cumulative QC: RNAi Ctrl: 398 ± 64, CncC-RNAi: 423 ± 75, Keap1-RNAi: 272 ± 30, Figure 4D]. The data suggest that following heat shock stimulation, the lack of CncC produces strong phenotypes with regard to longevity and synapse function, which were partially observed in controls but abolished following Keap1 KD. Many neurological conditions, including AD and PD, exhibit perturbations of the circadian system (sometimes prior to any motor symptoms or clinical manifestation), and the underlying pathways have been studied in various animal models (Videnovic et al., 2014). Light and temperature are the two most reliable environmental timing cues, referred to as Zeitgeber (ZT), for the resetting of circadian clocks (Pittendrigh, 1960; Buhr et al., 2010; Musiek et al., 2013; Tamaru et al., 2013). Notably, mRNA expression levels of Keap1a/b and Nrf2 vary significantly within 12 h (i.e., between ZT0 and ZT12), implicating their involvement in circadian redox regulation (Zheng et al., 2017). This prompted us to evaluate how changes in ROS levels would impact the circadian behavior of flies with reduced or augmented cellular antioxidant capacity in a 24-h light-dark (LD) cycle.
To determine how the circadian system is affected by modulating ROS levels via CncC/Keap1 signaling, we measured activity and sleep patterns as an index of circadian behavior in flies with reduced or augmented cellular antioxidant capacity. Quantification of sleep episodes during day and night should give the best overall picture of sleep behavior. We quantified the relative length and number of sleep episodes and found that in the light phase CncC OE reduces the length of sleep episodes but enhances their number, an effect that was reversed by silencing of CncC (Figures 5A,C). Unexpectedly, following Keap1 KD, sleep behavior was similar to that of CncC KD flies in this phase. In the scotophase (dark), overexpression of CncC did not cause any change in sleep parameters, which were similar to controls (Figures 5B,D). Interestingly, the behavior of CncC KD flies in the dark phase differed from that observed in the photophase (light), showing a decreased length of sleep episodes and an increase in their number, suggesting that driving redox levels in one direction has opposite effects on sleep depending on the time of day. Conversely, sleep behavior of Keap1 KD flies was more similar to that observed in the photophase (light), showing only a significant increase in sleep episode length (Figures 5C,D). Finally, we measured the total activity of adult flies and found that total activity was reduced only in CncC KD flies in comparison to controls and Keap1 KD (Figures 5E,F). Figure 5G separates the total activity profiles for the studied genotypes into day and night phases, showing that CncC, but also Keap1, KD reduced activity in the dark phase only. These data imply complex interactions of ROS signaling with the circadian cycle, causing differential effects across its phases. In summary, our data provide new evidence of how regulation of redox homeostasis via modulation of CncC/Keap1 signaling can modulate aging, synapse function and sleep behavior. Specifically, suppression of Keap1 expression induced beneficial effects on survival and synapse function. Equally, overexpression of CncC and pharmacological enhancement of antioxidant signaling resulted in similar phenotypes, with increases in quantal release being a major result of lowered oxidative stress signaling. The changes in synaptic function can further impact redox-sensitive aspects of sleep behavior, which is implicated in disease-associated defects of circadian rhythm in neurodegeneration.

DISCUSSION

Drosophila has been instrumental in studying synapse function but also in modeling various neurodegenerative diseases, including polyglutamine expansion diseases, α-synuclein-linked PD, and other prionopathies and tauopathies (see review by McGurk et al., 2015). We and others have previously shown that expression of huntingtin with polyglutamine expansions, mutant α-synuclein, Aβ40/42 toxicity and prion-mediated pathology suppresses glutamatergic function at the NMJ and causes neurodegenerative phenotypes (Outeiro et al., 2007; Romero et al., 2008; Chakraborty et al., 2011; Steinert et al., 2012; Breda et al., 2015; Vicente Miranda et al., 2016; Fernandez-Funez et al., 2017; Martin-Peña et al., 2017). Furthermore, studies in flies have found that dysfunctional superoxide dismutase 1 (SOD1) activity associated with enhanced oxidative stress can impact upon synapse function. In particular, SOD1 mutant flies exhibit signs of neurodegeneration, locomotor deficits, and shortened life span (Sahin et al., 2017).
The Drosophila NMJ specifically offers a unique model synapse to study regulatory mechanisms of vesicular release. However, the direct effects of redox signaling mediated by the Nrf2/Keap1 cascade have not yet been fully assessed at the level of synapse function and whole-animal behavior. Our data present new evidence of how CncC and Keap1 signaling modulates Drosophila longevity, synapse function, and larval and adult fly activities, including effects on circadian sleep patterns. Furthermore, we determined the protective effects of CncC/Keap1 activity under stress-challenged conditions. Many neurodegenerative diseases are characterized by a slowly progressive loss of neurons. The etiology of these diseases has not yet been fully elucidated, although elevated levels of oxidative stress have been suggested as a potential common factor. One disease in particular is associated with defects in the antioxidant system: mutations in SOD1 are a strong contributor to ALS. Drosophila has been utilized to characterize the effects of ALS-relevant mutant proteins (Milton et al., 2011; Coyne et al., 2017; Kim et al., 2018), and induction of homeostatic neuronal plasticity can reverse ALS-induced degeneration at the NMJ (Kim et al., 2018). Data indicate that the abnormal excitotoxic glutamate release in the spinal cord of pre-symptomatic ALS mice is mainly based on an increased size of the readily releasable pool of vesicles and release facilitation, supported by plastic changes of specific presynaptic mechanisms (Bonifacino et al., 2016). As ALS is characterized by enhanced cytotoxic oxidative stress, one could speculate that these conditions favor non-physiological release of neurotransmitter. In light of our data, which show a strong reduction in the number of evoked quantal release events with an increased quantal size following augmented antioxidant signaling (GSH, Keap1-RNAi, CncC OE), we suggest that low oxidative stress balances release with a reduction in the number of energy-demanding vesicular release events and a simultaneously increased quantal size to sustain physiological action potential-evoked synaptic responses. If one considers that a single vesicular release event requires around 60,000 ATP molecules at a glutamatergic synapse in the mammalian central nervous system (CNS), including neurotransmitter refilling, SNARE protein assembly/disassembly and ion pump activities (Attwell and Laughlin, 2001; Harris et al., 2012), and that vesicular release represents the highest energy burden on the presynapse (Rangaraju et al., 2014), it is conceivable that low oxidative stress levels lead to advantageous low-QC/high-quantal-size release parameters. A great variety of factors controls the level of neurotransmitter within the vesicle, and changes in vesicle filling thus have great potential to influence synaptic transmission, with vesicular glutamate filling determining quantal size (Karunanithi et al., 2002; Wu et al., 2007; Huang and Trussell, 2014; Choudhury et al., 2016). Increases in Drosophila larval quantal size at the NMJ have been reported following >30 min of enhanced activity (Steinert et al., 2006). Conversely, high-frequency stimulation of the NMJ results in a strong decrease in quantal size (Doherty et al., 1984; Naves and Van der Kloot, 2001), illustrating the ability of this particular synapse, which is under strong homeostatic control (Newman et al., 2017), but also of others (see review by Edwards, 2007), to modulate quantal release.
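To put the preceding energy argument in numbers (a back-of-envelope sketch of ours, combining the ~60,000 ATP-per-vesicle estimate cited above with the quantal contents reported in the Results):

```python
ATP_PER_VESICLE = 60_000   # approx. cost of one vesicular release event
                           # (Attwell and Laughlin, 2001; Harris et al., 2012)

# Quantal contents of single evoked responses reported in the Results.
quantal_content = {"w1118 control": 213, "DTT": 167, "GSH": 65}

for condition, qc in quantal_content.items():
    print(f"{condition:14s} ~{qc * ATP_PER_VESICLE:.1e} ATP per evoked response")
# GSH (low oxidative stress) spends roughly a third of the control's ATP per
# action potential while quantal size rises, consistent with the proposed
# low-QC / high-quantal-size strategy being energetically advantageous.
```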
It has been shown, for instance, that overexpression of the vesicular glutamate transporter (vGLUT) leads to larger quantal sizes and a resulting reduced QC (Daniels et al., 2004). Further mechanisms include uptake of transmitter via the Na+-dependent excitatory amino acid transporters (EAATs; Wang and Floor, 1998; Takayasu et al., 2005; Rose et al., 2018), glutamate recycling, which includes the glutamine-glutamate cycle, and transmitter transport into the vesicle, which involves the exchange of lumenal H+ for cytoplasmic transmitter and hence depends on an H+ electrochemical gradient produced by the vesicular H+-ATPase (vATPase; Cotter et al., 2015). In fact, it has been shown that DTT or reduced GSH reverses H2O2-induced inhibition of the vATPase, suggesting that the mechanism of its inhibition by H2O2 involves oxidation of a reactive cysteine sulfhydryl group in the ATP binding site. Inhibition of vATPase activity would decrease the amount of transmitter stored in synaptic vesicles and thus reduce quantal size during episodes of oxidative stress (Wang and Floor, 1998). EAAT transporters contain cysteine-associated sulfhydryl groups sensitive to free radical species. The actions of free radicals result in the formation of cysteine bridges, thereby inhibiting glutamate transport into cells (Trotti et al., 1998), as demonstrated for superoxide anion, hydrogen peroxide, NO and peroxynitrite (Pogun et al., 1994; Volterra et al., 1994), or into vesicles by reducing vGLUT activities. Conversely, overexpression of SOD1 protected glutamate transporters from inhibition (Chen et al., 2000). Studies at NMJs of SOD1 knock-out mice found reduced quantal size (equivalent to mEJC amplitude) following enhanced oxidative stress, leading to weakening of the muscle (Ivannikov and Van Remmen, 2015). Conversely, by reducing oxidative stress in a mouse model of peripheral nerve injury, the authors found increases in expression of vGLUT (Cirillo et al., 2015), the predominant vGLUT being responsible for vesicular glutamate filling at the Drosophila NMJ (Wu and Cooper, 2012) and mammalian CNS synapses (Fremeau et al., 2004). Previous studies confirmed that activation of the Nrf2 pathway leads to upregulation of the neuronal EAAT3 in mice (Escartin et al., 2011). This transporter has a homologue in Drosophila (Besson et al., 2000), and its upregulation enhances antioxidant activity via increases in glutathione production, in addition to sustaining presynaptic glutamate levels available for release. Conceivably, modulation of redox levels might impact any of the above mechanisms and regulate transmitter release in a negative or positive direction, and our data describe how reducing redox stress, either genetically or pharmacologically, leads to enhanced quantal size. However, this increase in quantal size resulted in a reduction of QC, likely due to homeostatic feedback regulation of this highly plastic synapse (Frank, 2014; Li et al., 2018), and future studies will have to evaluate the specific mechanisms by which redox signaling can alter vesicular transmitter release. The observed changes in neuronal function will have wide implications for animal behavior, and together with the reported redox-mediated regulation of circadian function determined by Nrf2/CncC-Keap1 signaling, our data provide further evidence of how this cascade can influence circadian rhythm. Neurodegeneration causes abnormalities in sleep patterns, partially due to neuronal loss but likely also due to specific dysregulation of circadian circuits.
We demonstrated that increases in CncC expression or its silencing result in opposite alterations of sleep episode length during the day. However, both conditions resulted in similar effects on sleep length during the night. Total 24-h activity following CncC overexpression was not altered, whereas CncC KD reduced overall activity. These observations are further complicated by the expression of Nrf2 falling under the transcriptional regulation of the Clock/Bmal1 complex (Xu et al., 2012; Pekovic-Vaughan et al., 2014). Clock/Bmal1-dependent Nrf2 regulation gives rise to diurnal patterns in Nrf2 signaling, which underlie the rhythmic expression of antioxidant and metabolic enzymes reported in different cellular systems (Xu et al., 2012; Wang et al., 2018; Ishii et al., 2019). In this context, it has been reported that Nrf2 gain- and loss-of-function affect circadian gene expression and rhythmicity in mammalian cellular systems, indicating the coupling of Nrf2 and Clock and the role of Nrf2 in integrating cellular redox status into timekeeping (Wible et al., 2018). Collectively, this study and previous work illustrate a key mechanistic link between circadian oscillations in redox balance and Clock gene expression rhythms. However, key questions on the complex bidirectional regulation of ROS by circadian activity, and vice versa, remain to be answered in future studies.
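As a practical footnote to the sleep analyses above, Drosophila sleep in activity-monitor data is conventionally scored as any bout of at least 5 min without beam crossings (the criterion used by pySolo-style pipelines; an assumption on our part, illustrated here with synthetic data and our own helper):

```python
import numpy as np

def sleep_episodes(counts_per_min: np.ndarray, min_bout: int = 5):
    """Number and mean length (min) of sleep episodes, where sleep is any
    run of >= min_bout consecutive minutes with zero activity counts."""
    lengths, run = [], 0
    for c in counts_per_min:
        if c == 0:
            run += 1
        else:
            if run >= min_bout:
                lengths.append(run)
            run = 0
    if run >= min_bout:
        lengths.append(run)
    return len(lengths), float(np.mean(lengths)) if lengths else 0.0

# Synthetic 12-h (720-min) record: sparse beam crossings separate quiet runs.
rng = np.random.default_rng(1)
counts = (rng.random(720) < 0.15).astype(int)
n, mean_len = sleep_episodes(counts)
print(f"{n} sleep episodes, mean length {mean_len:.1f} min")
```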
Low-mass right-handed gauge bosons from minimal grand unified theories

Prediction of low-mass $W_R$ and $Z_R$ gauge bosons in popular grand unified theories has been the subject of considerable attention over the last three decades. In this work we show that when gravity-induced corrections due to the dim.5 operator are included, the minimal symmetry breaking chain of $SO(10)$ and $E_6$ GUTs can yield $W_R^{\pm}$ and $Z_R$ bosons with masses in the range $(3-10)$ TeV which are accessible to experimental tests at the Large Hadron Collider. The RH neutrinos turn out to be heavy pseudo-Dirac fermions. The model can fit all fermion masses and manifests a rich structure of lepton flavor violation, while the proton lifetime is predicted to be much longer than the accessible limits of the Super-Kamiokande or planned Hyper-Kamiokande collaborations.

Left-right symmetric gauge theory [1,2], originally suggested to explain parity violation as a monopoly of weak interactions, has also found wide applications in many areas of particle physics beyond the standard model, including neutrino masses and mixings, lepton number and lepton flavor violation, CP violation, $K-\bar{K}$ and $B-\bar{B}$ mixings, and baryogenesis through leptogenesis. This theory is expected to make a substantial new impact on weak interaction phenomenology if the associated $W_R^{\pm}$, $Z_R$ bosons have masses in the TeV range. Finally, the new gauge bosons can be detected at the Large Hadron Collider (LHC), where ongoing experimental searches have set the bounds $M_{W_R} \geq 2.5$ TeV and $M_{Z_R} \geq 1.162$ TeV [3]. Over the years considerable attention has been focussed on the $SO(10)$ realisation of the left-right (LR) gauge symmetry, accompanied by the left-right discrete symmetry ($g_{2L} = g_{2R}$) or without it ($g_{2L} \neq g_{2R}$); this latter symmetry is denoted as $G_{2213A}$ [4]. The purpose of this work is to show that these gauge boson masses can be realised quite effectively in the minimal symmetry breaking chain of $SO(10)$ or $E_6$ GUT. To account for tiny neutrino masses we use the inverse seesaw formula [6], for which we include three additional singlet fermions $S_i$ ($i = 1, 2, 3$), one per generation, in the case of $SO(10)$; in $E_6$ they are parts of the standard fermion representations $27_{F_i}$. We now give some details for $SO(10)$, using the Higgs representations $\Phi_{210}$ in the first step, $\chi_{16}$ in the second step, and $H_{10}$ in the third step to achieve the low-energy symmetry. In addition to the conventional renormalizable interactions, we also include the effect of the nonrenormalisable dim.5 operator [5] induced by gravity effects, for which $M_C \sim M_{Planck}$, leading to GUT-scale boundary conditions on the two-loop estimated gauge couplings, where $\alpha_G$ is the effective GUT fine structure constant and $M_U$ is the solution for the GUT scale that includes corrections due to the dim.5 operator; the above boundary condition emerges by breaking the GUT symmetry through the VEV of the field $\eta$ given in Table 1. The right-handed doublet $\chi(1, 2, -1, 1)$ breaks the symmetry $G_{2213A} \rightarrow G_{213}$ and also generates the $N-S$ mixing mass term $M$, which results in the inverse seesaw mechanism for neutrino masses. This also contributes significantly towards lepton flavor violation. The predicted proton lifetime for the decay $p \rightarrow e^+ \pi^0$ in our model turns out to be in the range $\sim 10^{37}-10^{38}$ yrs, which is beyond the accessible ranges of Super-Kamiokande ($\tau_p(p \rightarrow e^+ \pi^0) \geq 1.4 \times 10^{34}$ yrs) and proposed investigations at Hyper-Kamiokande ($\tau_p(p \rightarrow e^+ \pi^0) \geq 1.3 \times 10^{35}$ yrs).
As noted above, this model admits the inverse seesaw formula for light neutrino masses [6], $m_\nu \simeq M_D (M^T)^{-1} \mu_S M^{-1} M_D^T$ with $M \gg M_D$, where $\mu_S$ is a small $SO(10)$-singlet-fermion mass term that violates a SM global symmetry. The Dirac neutrino mass matrix $M_D$ is determined by fitting the extrapolated values of all charged fermion masses at the GUT scale and running it down to the TeV scale following the top-down approach [7,8]. We may have to use $D_h = 2$ for this purpose, but to achieve near-TeV-scale $G_{2213A}$ symmetry, $D_h = D_\chi = 1$ is sufficient. The heavy neutrinos in this model are three pairs of pseudo-Dirac fermions which mediate charged lepton flavor violating decays, with predictions for branching ratios shown in Table 2. The present experimental limits on branching ratios are $Br(\mu \rightarrow e\gamma) \leq 2.4 \times 10^{-12}$ [9], $Br(\tau \rightarrow e\gamma) \leq 1.2 \times 10^{-7}$ and $Br(\tau \rightarrow \mu\gamma) \leq 4.5 \times 10^{-8}$ [10]. For verification of the model predictions, improved measurements with accuracy up to 3-4 orders of magnitude better are needed. The inverse seesaw formula fits the neutrino oscillation data quite well for all three types of light neutrino mass hierarchy. All corresponding Higgs representations being present in the $E_6$ GUT, the same approach leads to identical results, but now the three $SO(10)$ fermion singlets are contained in the $27_{F_i}$, each of which has 10 non-standard fermions compared to $16_F + 1_F$. We have discussed a novel method of realising low-mass RH gauge bosons in minimal GUTs, accessible to the LHC, using gravitational corrections through the dim.5 operator. The model successfully accounts for the neutrino oscillation data through the inverse seesaw mechanism. The heavy fermions in the model are pseudo-Dirac particles which are also verifiable by their trilepton signatures at the LHC. We have obtained similar solutions with $D_h = D_\chi = D_T = 1$ [11].
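For orientation, the inverse seesaw structure invoked above can be written in its standard form from the literature (our rendering; the paper's exact flavor conventions may differ). In the $(\nu, N, S)$ basis the neutral fermion mass matrix and the resulting light neutrino mass matrix read

\[
\mathcal{M} =
\begin{pmatrix}
0     & M_D & 0      \\
M_D^T & 0   & M      \\
0     & M^T & \mu_S
\end{pmatrix},
\qquad
m_\nu \simeq M_D (M^T)^{-1} \mu_S \, M^{-1} M_D^T
\quad \text{for } M \gg M_D \gg \mu_S .
\]

In the limit $\mu_S \to 0$, lepton number is restored and the heavy $(N, S)$ states pair up into pseudo-Dirac fermions, which is the origin of the heavy pseudo-Dirac RH neutrinos quoted above.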
Bit-wise Cryptanalysis on AND-RX Permutation Friet-PC

This paper presents three attack vectors of bit-wise cryptanalysis, including rotational, bit-wise differential, and zero-sum distinguishing attacks, on the AND-RX permutation Friet-PC, which is implemented in the lightweight authenticated encryption scheme Friet. First, we propose a generic procedure for a rotational attack on AND-RX ciphers with round constants. By applying the proposed attack to Friet-PC, we can construct an 8-round rotational distinguisher with a time complexity of 2^102. Next, we explore single- and dual-bit differential biases, which are inspired by the existing study on Salsa and ChaCha, and observe the best bit-wise differential bias of 2^-9.552. This bias allows us to practically construct a 9-round bit-wise differential distinguisher with a time complexity of 2^20.044. Finally, we construct 13-, 15-, 17-, and 30-round zero-sum distinguishers with time complexities of 2^31, 2^63, 2^127, and 2^383, respectively. To summarize our study, we apply three attack vectors of bit-wise cryptanalysis to Friet-PC and show their superiority as effective attacks on AND-RX ciphers.

Background

Friet, which was proposed by Simon et al. at EUROCRYPT 2020 [26], is a lightweight authenticated encryption scheme with a 128-bit security level that is resistant to side channel and fault injection attacks. It adopts the authenticated encryption mode SpongeWrap based on the duplex construction [5]. The SpongeWrap mode is based on the concept of efficiently building an authenticated encryption scheme from a cryptographic permutation; thus, designers who adopt SpongeWrap as the authenticated encryption mode have the important task of designing a lightweight cryptographic permutation with a high security level. The designers of Friet proposed a new design technique for ciphers with efficient fault-detecting implementations, and then designed new cryptographic permutations called Friet-PC and Friet-P for implementation in Friet. A previous version of the Friet-PC permutation, called Frit, was proposed by the same designers in 2018 [25]. It adopts the AND-Rotation-XOR (AND-RX) construction, which is very similar to the Addition-Rotation-XOR (ARX) construction. Shortly thereafter, Dobraunig et al. performed a key recovery attack against the full-round version in the use case of Frit as an Even-Mansour block cipher [9]. In addition, Qin et al. applied a cube attack to the reduced-round version in the use case of Frit in a duplex-based authenticated encryption mode [23]. Friet-PC was designed considering these attacks. The designers evaluated the security of Friet-PC against differential and linear attacks [26]. They first investigated the propagation properties to determine the minimum weights of differential and linear trails, and then experimentally obtained a 6-round differential trail with weight 59 and an 8-round linear trail with weight 80. These trails can be extended to a 6-round differential distinguisher with a time complexity of 2^59 and an 8-round linear distinguisher with a time complexity of 2^80. As a security evaluation by a third party, Liu et al. proposed a new framework called the rotational differential-linear attack [19], which is inspired by the differential-linear attack proposed by Langford and Hellman [17].
Their proposed attack significantly improved upon the security evaluation by the designers, allowing the construction of a 13-round rotational differential-linear distinguisher with a time complexity of 2^117.81. To the best of our knowledge, no third-party security evaluation of Friet-PC other than that by Liu et al. has been reported; thus, the best attack on Friet-PC is the 13-round rotational differential-linear distinguisher.

Our Contribution

In this study, we evaluate the security of Friet-PC with three attack vectors of bit-wise cryptanalysis: rotational, bit-wise differential, and zero-sum distinguishing attacks. Although these vectors are widely used as generic attacks against ARX and AND-RX ciphers, no study appears to have applied them to evaluate the security of Friet-PC as yet. If an adversary can efficiently perform these attacks on Friet-PC, they may threaten the security of not only the permutation Friet-PC but also the authenticated encryption scheme Friet. Table 1 summarizes the results of previous security evaluations and the evaluations in this study for Friet-PC.

Table 1 | Summary of distinguishers on Friet-PC: the designers' differential and linear distinguishers [26], and rotational differential-linear distinguishers on 8, 9, and 13 rounds with time complexities of 2^17.81, 2^29.81, and 2^117.81, respectively [19], alongside the results of this study.

The proposed security evaluations substantially improve upon the existing best attack by Liu et al.; thus, we show their superiority as effective attacks on AND-RX ciphers. We remark that the proposed attacks pose no practical threat to Friet-PC; however, we recommend using these attack vectors of bit-wise cryptanalysis to evaluate the security of AND-RX ciphers when designing such ciphers in the future. By exploring suitable input patterns with the bit-based division property, we succeed in constructing 13-, 15-, 17-, and 30-round zero-sum distinguishers [3] with time complexities of 2^31, 2^63, 2^127, and 2^383, respectively. To the best of our knowledge, these are the best distinguishers for reduced-round Friet-PC, given that the attacker has full control over the internal state, which is a common assumption when analyzing the security of a public permutation. The details of the proposed security evaluations are given in the following text.

Organization of the Paper

The rest of the paper is organized as follows. In Section 2, we briefly describe the specification of the Friet-PC permutation. In Section 3, we first review the existing techniques for rotational attacks, and propose a generic attack procedure for a rotational attack on AND-RX ciphers with round constants. Based on the proposed attack procedure, we provide a rotational distinguisher for the 8-round Friet-PC. In Section 4, we first introduce the existing techniques for bit-wise differential attacks, and then provide a bit-wise differential distinguisher for the 9-round Friet-PC. In Section 5, we first describe how to search for integral distinguishers with the bit-based division property, and then provide the zero-sum distinguishers for the 13-, 15-, 17-, and 30-round Friet-PC. Finally, Section 6 concludes the paper.

Specifications of the Friet-PC Permutation

Friet-PC has three 128-bit limbs a, b, c ∈ {0, 1}^128, and its round function consists of the following six steps: a round constant addition step δi that is a limb adaptation, two non-native limb transposition steps τ1 and τ2, two mixing steps µ1 and µ2 that are limb adaptations, and a nonlinear step ξ that is also a limb adaptation.

Algorithm 1 | The Friet-PC permutation (returns the limbs (a, b, c)).
We describe the procedure of the Friet-PC permutation as shown in Algorithm 1 and Fig. 1, and use the following notation throughout the remainder of this paper: x ⊕ y is the exclusive OR (XOR) of two limbs x and y, x ∧ y is the bit-wise logical AND of two limbs x and y, x ≪ n is the left rotation of a limb x by n bits, and rc_i is the i-th round constant, as listed in Table 2.

Rotational Distinguisher

We analyze the security of Friet-PC against a rotational attack, which has been applied to ARX and AND-RX ciphers such as the block ciphers Threefish [13], Speck [1,18], Simon [20] and Simeck [20]; the stream ciphers Salsa [12] and ChaCha [4]; the hash functions Keccak [22], BLAKE2 [10,14] and Skein [14,15]; and the message authentication code algorithm Chaskey [16]. In this section, we first review the generic techniques for rotational attacks and subsequently explain a new technique for a rotational attack on AND-RX ciphers with round constants. Then, we describe the application of the proposed technique to Friet-PC and finally show a rotational distinguisher for the 8-round Friet-PC with a time complexity of 2^102.

Rotational Attacks

In 2010, Khovratovich and Nikolić [13] explored the propagation of a rotational pair (X, X ≪ r) or (X, X ≫ r) through an ARX cipher, and formalized a generic technique called the rotational attack. In the following text, we discuss only the propagation of the rotational pair (X, X ≪ r), as the propagation of the rotational pair (X, X ≫ r) can be explained similarly. A rotational attack on an ARX or AND-RX cipher allows an adversary to analyze the rotational probability of the entire cipher by multiplying the individual rotational probabilities of all operations used in the cipher. In other words, the adversary can properly perform the rotational attack on an ARX or AND-RX cipher by computing the rotational probabilities of four distinct operations, i.e., modular addition, AND, rotation, and XOR. Since bit-wise operations commute with rotation, the rotational probabilities of AND, rotation, and XOR are given by

Pr[(X ∧ Y) ≪ r = (X ≪ r) ∧ (Y ≪ r)] = 1, (1)
Pr[(X ≪ n) ≪ r = (X ≪ r) ≪ n] = 1, (2)
Pr[(X ⊕ Y) ≪ r = (X ≪ r) ⊕ (Y ≪ r)] = 1, (3)

while the rotational probability of modular addition is given by the following lemma.

Lemma 1 ([8, Corollary 4.12]). If we suppose an n-bit word X to be fixed and an n-bit word Y to be chosen uniformly at random, then the rotational probability of modular addition depends only on X_L = (x_{n-1}, ..., x_{n-r}) and X_R = (x_{n-r-1}, ..., x_0) of X. On the other hand, if we suppose two n-bit words X and Y to be chosen uniformly at random, then we obtain

Pr[(X + Y) ≪ r = (X ≪ r) + (Y ≪ r)] = (1/4)(1 + 2^{r-n} + 2^{-r} + 2^{-n}).

It should be noted here that all inputs to an ARX or AND-RX cipher must be rotational pairs for the rotational attack to perform well, as claimed by Khovratovich and Nikolić [13]. According to them, we cannot perform a proper rotational attack on an ARX or AND-RX cipher with round constants, such as Friet-PC, because it is practically difficult to obtain a rotational pair of round constants. To solve this problem, some studies explored rotational attacks against the ARX and AND-RX block ciphers Speck [1,18] and Simon [20] with constants that actually correspond to round keys. However, no study on a rotational attack against an ARX or AND-RX cipher with round constants fixed in the specification, such as Friet-PC, has been reported as yet.
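The probability-one propagation rules (1)-(3), and the way a round constant breaks them, are easy to verify empirically. A small sketch over random 128-bit limbs (our own helper functions; the constant shown is illustrative, not from Table 2):

```python
import random

W = 128                          # limb width of Friet-PC
MASK = (1 << W) - 1

def rotl(x: int, r: int) -> int:
    """Left-rotate a W-bit word by r bits."""
    return ((x << r) | (x >> (W - r))) & MASK

random.seed(0)
r = 4
for _ in range(1000):
    x, y = random.getrandbits(W), random.getrandbits(W)
    # Bit-wise operations commute with rotation, so a rotational pair
    # propagates through AND, rotation, and XOR with probability one.
    assert rotl(x & y, r) == rotl(x, r) & rotl(y, r)       # Eq. (1)
    assert rotl(rotl(x, 7), r) == rotl(rotl(x, r), 7)      # Eq. (2)
    assert rotl(x ^ y, r) == rotl(x, r) ^ rotl(y, r)       # Eq. (3)

# XORing a fixed round constant rc, however, breaks the pair whenever
# rc != rotl(rc, r) -- the problem the next subsection addresses.
rc = 0x1101                      # illustrative constant only
x = random.getrandbits(W)
print(rotl(x ^ rc, r) == rotl(x, r) ^ rc)   # False in general
```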
Rotational Attack on AND-RX Ciphers with Round Constants

To properly perform a rotational attack on an AND-RX cipher with round constants, we first demonstrate that the XOR operation in the presence of round constants can preserve the propagation of a rotational pair with probability one, by introducing a XOR masking technique into the rotational attack. Then, we establish the rotational probability of the AND operation in the presence of round constants. Finally, we propose a generic attack procedure for a rotational attack on AND-RX ciphers with round constants. In the following text, we write a rotational pair as (X, ←X) instead of (X, X ≪ r), where ←X denotes X ≪ r.

XOR Masking Technique for the XOR Operation with Constants. We first introduce a XOR masking technique so that the XOR operation in the presence of a round constant rc, expressed in the form

←(X ⊕ rc) = (←X ⊕ mask_1) ⊕ rc, (6)

satisfies the equality. The left side of (6) is not XOR masked, as it has the same form as the left side of Eq. (3). Then, it can be seen from Eq. (3) that (6) satisfies the equality with probability one when

mask_1 = rc ⊕ ←rc. (7)

In summary, the XOR operation in the presence of round constants can preserve the propagation of a rotational pair with probability one by XORing the mask value mask_1 = rc ⊕ ←rc. Note that the XOR masking technique can be applied to both the input and output values of the target cipher. For example, when the adversary applies the XOR masking technique to the input value, he/she must choose (X, ←X ⊕ mask_1) as the input rotational pair.

XOR Masking Technique for the AND Operation with Constants. We examine whether the AND operation in the presence of round constants rc_1 and rc_2, expressed in the form

←((X ⊕ rc_1) ∧ (Y ⊕ rc_2)) = (←X ⊕ rc_1) ∧ (←Y ⊕ rc_2), (8)

satisfies the equality. To reveal the differences between both sides of (8), we use Eqs. (1) and (3) to transform (8) into

(←X ⊕ ←rc_1) ∧ (←Y ⊕ ←rc_2) = (←X ⊕ rc_1) ∧ (←Y ⊕ rc_2). (9)

We then apply the XOR masking technique to the input value so that the AND operation in the presence of round constants, expressed in the form

←((X ⊕ rc_1) ∧ (Y ⊕ rc_2)) = ((←X ⊕ mask_2) ⊕ rc_1) ∧ ((←Y ⊕ mask_3) ⊕ rc_2), (10)

satisfies the equality. Here, (10) satisfies the equality with probability one when

(mask_2, mask_3) = (rc_1 ⊕ ←rc_1, rc_2 ⊕ ←rc_2). (11)

This implies that the adversary must choose [(X, ←X ⊕ mask_2), (Y, ←Y ⊕ mask_3)] as the input rotational pairs when he/she applies the XOR masking technique to the input value. Similarly, we apply the XOR masking technique to the output value corresponding to the input rotational pairs, so that the AND operation in the presence of round constants, expressed in the form

←((X ⊕ rc_1) ∧ (Y ⊕ rc_2)) ⊕ mask_4 = (←X ⊕ rc_1) ∧ (←Y ⊕ rc_2), (12)

satisfies the equality. However, it is practically difficult to determine the appropriate mask values so that (12) satisfies the equality. We will explain the reason after providing the following two examples. Let x_i, y_i, rc_{1,i}, and rc_{2,i} be the i-th bits of X, Y, rc_1, and rc_2, respectively.

Example 1. We focus on the AND operation of the i-th bit in (9). We assume that either rc_{1,i} ⊕ ←rc_{1,i} = 1 or rc_{2,i} ⊕ ←rc_{2,i} = 1 holds. In this example, we assume that rc_{1,i} ⊕ ←rc_{1,i} = 1 holds, for the sake of simplicity. Table 3 provides a truth table corresponding to (9). This table shows that the AND operation of the i-th bit holds with a probability of 2^-1.

Example 2. We also focus on the AND operation of the i-th bit in (9). In this example, we assume that both rc_{1,i} ⊕ ←rc_{1,i} = 1 and rc_{2,i} ⊕ ←rc_{2,i} = 1 hold. Table 4 provides a truth table corresponding to (9). This table shows that the AND operation of the i-th bit holds with a probability of 2^-1.
These examples show that the AND operation of the i-th bit in (9) holds with a probability of 2^-1 when at least one of rc_{1,i} ⊕ ←rc_{1,i} = 1 or rc_{2,i} ⊕ ←rc_{2,i} = 1 holds. Moreover, these examples describe bit-wise independent events, since (9) is a bit-wise operation; thus, we can compute the probability that the AND operation expressed in (9) satisfies the equality by simply counting the number of bits for which either rc_{1,i} ⊕ ←rc_{1,i} = 1 or rc_{2,i} ⊕ ←rc_{2,i} = 1 holds. These facts lead to the following theorem.

Theorem 1. Let (X, ←X) and (Y, ←Y) be two rotational pairs, where the symbol '←' represents the left rotation by r bits, and let rc_1 and rc_2 be round constants. Then, the rotational probability of the AND operation in the presence of round constants is given as follows:

Pr[←((X ⊕ rc_1) ∧ (Y ⊕ rc_2)) = (←X ⊕ rc_1) ∧ (←Y ⊕ rc_2)] = 2^{-hw[(rc_1 ⊕ ←rc_1) ∨ (rc_2 ⊕ ←rc_2)]}, (13)

where hw[·] represents the hamming weight.

Proof. As discussed earlier, the AND operation of the i-th bit in (9) holds with a probability of 2^-1 when at least one of rc_{1,i} ⊕ ←rc_{1,i} = 1 or rc_{2,i} ⊕ ←rc_{2,i} = 1 holds. Moreover, we can compute the probability that the AND operation expressed in (9) satisfies the equality by simply counting the number of bits for which either rc_{1,i} ⊕ ←rc_{1,i} = 1 or rc_{2,i} ⊕ ←rc_{2,i} = 1 holds. We can achieve this by calculating the hamming weight hw[(rc_1 ⊕ ←rc_1) ∨ (rc_2 ⊕ ←rc_2)]. In summary, the rotational probability of the AND operation in the presence of round constants is given as shown in Eq. (13).

Now, we explain why it is practically difficult to determine the appropriate mask values so that (12) satisfies the equality. This is because the mask values cannot be uniquely determined unless the adversary knows the correct values of X and Y, which are usually intermediate values of the target cipher and are not available to the adversary (with exceptions). For example, from Table 3, if the values of ←y_i ⊕ ←rc_{2,i} and ←y_i ⊕ rc_{2,i} are 1, the adversary must apply the XOR mask to satisfy the equality, but he/she cannot decide whether to apply the XOR mask without knowing the value of ←y_i. Therefore, we should evaluate the rotational probability of the AND operation in the presence of round constants according to Theorem 1, without applying the XOR masking technique to the output value corresponding to the input rotational pairs.

Attack Procedure. Based on the discussed XOR masking technique, we propose a generic attack procedure for a rotational attack on AND-RX ciphers with round constants. The proposed attack consists of offline and online phases. In the offline phase, we perform the following procedure:

Step 1. We analyze the input and output mask values for the i-th round function of the target AND-RX cipher. In this step, we apply the XOR masking technique to the input rotational pair so that the influence of the round constant does not propagate to the output rotational pair. As shown in Fig. 2 (a), the input rotational pair is masked with a specific value X to cancel the influence of the round constant; then, we do not need to apply the XOR masking technique to the output rotational pair.

Step 2. We explore the input mask value for the (i − r_1)-th round function of the target AND-RX cipher by going back r_1 rounds from the i-th round function of the cipher. This is feasible because we can easily construct the inverse function of the AND-RX cipher. As shown in Fig. 2 (b), we obtain the input mask value W for the (i − r_1)-th round function such that the output mask value of the (i − 1)-th round function becomes X.

Step 3.
Step 3. We investigate the output mask value for the (i + r2)-th round function of the target AND-RX cipher. As shown in Fig. 2(c), the input mask value of the (i + 1)-th round function is 0, as obtained in Step 1; we can therefore obtain the output mask value Y for the (i + r2)-th round function by analyzing the influence of the round constants through the r2 rounds of the target AND-RX cipher.

We finally obtain the input mask value W and the output mask value Y for the (r1 + r2 + 1)-round version of the target AND-RX cipher. Thereafter, in the online phase, by utilizing these mask values, we can construct a rotational distinguisher for the target AND-RX cipher in a manner similar to that of existing studies [1,4,10,12,13,14,15,16,18,20,22].

Application to Friet-PC

We apply the proposed attack procedure to Friet-PC. We first perform the offline phase of the proposed attack procedure on Friet-PC and obtain the input/output mask values for each round. Then, we examine techniques for mitigating the influence of the round constants. Finally, we perform the online phase of the proposed attack procedure on Friet-PC and demonstrate a rotational distinguisher for the 8-round Friet-PC with a time complexity of 2^102.

Let (mask^(r)_a, mask^(r)_b, mask^(r)_c) be the input mask variables for the r-round limbs (a, b, c), or equivalently the output mask variables for the (r − 1)-round limbs (a, b, c); let RC^(≪t)_i denote (rc_i ⊕ ←rc_i) ≪ t. Based on the offline phase of the proposed attack procedure, we obtain the input/output mask values for each round of Friet-PC as follows:

Step 1. We need to mask the i-round input rotational pair with a specific value to cancel the influence of the round constant. Algorithm 1 shows that the round constant rc_i is used in the first operation of the round function of Friet-PC, namely c ← c ⊕ rc_i; thus, we can take the i-round input mask values (mask^(i)_a, mask^(i)_b, mask^(i)_c) = (0, 0, RC^(≪0)_i), for which the propagation of the rotational pair holds with a probability of one. The influence of the round constant is cancelled completely by using these input mask values.

Step 2. We need to mask the (i − r1)-round input rotational pair with a specific value such that the output mask value of the (i − 1)-th round function becomes (mask^(i)_a, mask^(i)_b, mask^(i)_c); thus, by going back r1 rounds from the i-th round function of Friet-PC, we can obtain the (i − r1)-round input mask value. Table 5 lists the input mask values obtained by going back up to (i − 3) rounds.

Step 3. In Step 1, we obtained the i-round output mask value (mask^(i+1)_a, mask^(i+1)_b, mask^(i+1)_c) = (0, 0, 0), which is also the (i + 1)-round input mask value. Thus, by analyzing the influence of the round constants through the r2 rounds of Friet-PC, we can obtain the (i + r2)-round output mask values. Table 5 lists the output mask values obtained by going up to (i + 4) rounds.

Further Discussion for the Online Phase. According to Theorem 1, the lower the hamming weight of the terms associated with the influence of the round constants, the higher the rotational probability of the AND operation in the presence of round constants; thus, the more we mitigate the influence of the round constants, the higher the probability with which we can perform the online phase of the proposed attack procedure. To mitigate the influence of the round constants in the online phase, we deliberate over the following three questions:

Q1. Should we select the pattern (X, ←X) or (X, →X) as a rotational pair?
Q2. What value should we select as the rotational amount r?
Q3. How should we decide the target rounds?
To answer these questions, we analyze the round constants of Friet-PC using the following four examples:

Example 3. We consider the case where exactly one bit is 1 in the round constant, as for rc_9, rc_10, and rc_11. In this example, we use rc_9 for the sake of simplicity. The hamming weight of [rc_9 ⊕ ←rc_9] is then minimal regardless of the selection of the rotational pair and rotational amount, i.e., hw[rc_9 ⊕ ←rc_9] = 2.

Example 4. We consider the case where two or more bits are 1 in the round constant and the 1 bits occupy consecutive hexadecimal digits, as for rc_0, rc_1, and rc_6. In this example, we use rc_0 for the sake of simplicity. The hamming weight of [rc_0 ⊕ ←rc_0] is minimized when the rotational amount is selected as r = 4, regardless of the selection of the rotational pair, i.e., hw[rc_0 ⊕ ←rc_0] = 2. If the rotational amount is selected as r = 1, the hamming weight of [rc_0 ⊕ ←rc_0] is maximized, e.g., hw[rc_0 ⊕ ←rc_0] = 8.

Example 5. We consider the case where two bits are 1 in the round constant and the 1 bits do not occupy consecutive hexadecimal digits, as for rc_3, rc_4, and rc_8. In this example, we use rc_3 and rc_8 for the sake of simplicity. In the first case, the hamming weight of [rc_3 ⊕ ←rc_3] is minimized when the rotational amount is selected as r = 8, regardless of the selection of the rotational pair, i.e., hw[rc_3 ⊕ ←rc_3] = 2. In the second case, the hamming weight of [rc_8 ⊕ ←rc_8] is minimized when the rotational amount is selected as r = 12, regardless of the selection of the rotational pair, i.e., hw[rc_8 ⊕ ←rc_8] = 2. The distance between the two 1 bits is therefore the optimal rotational amount.

Example 6. We consider the case where three or more bits are 1 in the round constant and the 1 bits do not occupy consecutive hexadecimal digits, as for rc_2, rc_5, and rc_17. In this example, we use rc_2 for the sake of simplicity. The hamming weight of [rc_2 ⊕ ←rc_2] is then minimized when the rotational amount is selected as r = 4, regardless of the selection of the rotational pair.

These examples show that, to mitigate the influence of the round constants, we need to adapt the rotational amount to the value of each round constant, even though we can freely select the rotational pair; however, it is impossible to change the rotational amount while performing a rotational attack. Hence, we need to choose target rounds whose round constants allow their influence to be mitigated with a single, fixed rotational amount. Consequently, we choose the 9th to 16th rounds of Friet-PC as the target rounds in order to perform the online phase of the proposed attack on the 8-round Friet-PC efficiently. As discussed in Examples 3 and 4, for the round constants in the target rounds, the hamming weight is minimized by selecting the rotational amount r = 4. In addition, we select the pattern (X, ←X) as the rotational pair; a small search script illustrating this selection is sketched below.
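The rotation-amount analysis of Examples 3-6 can be reproduced mechanically. The following sketch is our own illustration with hypothetical stand-in constants that exhibit the same bit patterns as the cases above (the actual Friet-PC round constants are not reproduced here); it searches for the r that minimizes hw[rc ⊕ ←rc].

```python
W = 128                                  # Friet-PC limb width

def rotl(x, r, w=W):
    return ((x << r) | (x >> (w - r))) & ((1 << w) - 1)

def best_rotation(rc, w=W):
    """Return (r, hw) minimizing hw[rc ^ (rc <<< r)] over 1 <= r < w."""
    hw, r = min((bin(rc ^ rotl(rc, r, w)).count("1"), r) for r in range(1, w))
    return r, hw

# Hypothetical constants shaped like Examples 3, 4 and 5, respectively:
for rc in (0x2, 0x111, 0x101):
    print(hex(rc), best_rotation(rc))
# 0x2   -> hw = 2 for any r (single-bit constant, Example 3)
# 0x111 -> minimized at r = 4 (1 bits in consecutive hex digits, Example 4)
# 0x101 -> minimized at r = 8, the distance between the two 1 bits (Example 5)
```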
Complexity Estimation. As discussed in Section 3.2, to perform a rotational attack on Friet-PC properly, we need to evaluate the rotational probability of the AND operation in the presence of round constants. In the round function of Friet-PC, only the output limb a is influenced by the AND operation. Further, according to Algorithm 1, the AND operation is executed in the final step of the round function, and the output limbs (b, c) of each round become the inputs of its AND operation; thus, the output mask values (mask^(r)_b, mask^(r)_c) for the r-th round output limbs (b, c) determine the rotational probability of the AND operation in each round. Based on Theorem 1, we estimate the rotational probability of the AND operation in the round function of Friet-PC by calculating the hamming weight from (mask^(r)_b, mask^(r)_c). Table 6 lists the minimum hamming weights for the AND operation in the target rounds of Friet-PC. As discussed earlier, the minimum hamming weights for each mask value, such as hw[RC^(≪0)_i] = 2, are obtained by selecting the rotational amount r = 4. To confirm the accuracy of our estimation, we conducted an experiment computing the rotational probability of the 10th to 14th rounds of Friet-PC, and confirmed that the rotational probability of these target rounds can be approximated as 2^-38.

Here we explain why the minimum hamming weight in the 16th round of Friet-PC is 0. This is because the output limbs (b, c) of each round are not influenced by the AND operation; thus, when a complete rotational pair holds for all input limbs (a, b, c) of a round, a rotational distinguisher succeeds with a probability of one once the output limbs (b, c) are properly masked with the mask values listed in Table 5 (experimentally verified over 2^32 trials).

To summarize our results, we choose the 9th to 16th rounds of Friet-PC as the target rounds, and have demonstrated a rotational distinguisher for the 8-round Friet-PC with a time complexity of 2^102. However, we cannot demonstrate a rotational distinguisher for 9 or more rounds of Friet-PC, as the cipher provides a 128-bit security level.

Bit-wise Differential Distinguisher

In this section, we investigate the security of Friet-PC against bit-wise differential attacks, which have mainly been applied to ARX ciphers such as the stream ciphers Salsa and ChaCha [2,6,24]. Specifically, we focus on the single- and dual-bit differential attacks reported by Choudhuri and Maitra [6], and demonstrate a practical bit-wise differential distinguisher for the 9-round Friet-PC with a time complexity of 2^20.044.

Single- and Dual-bit Differential Attacks. Let x^(r)_i denote the i-th bit of limb x after r rounds, and let Δx^(r)_i be the associated bit difference; the difference Δx^(0)_i (r = 0) and the output difference Δx^(r)_i (r > 0) are referred to as the ID and OD, respectively. We note that x^(r)_0 and x^(r)_127 are the least significant bit (LSB) and most significant bit (MSB), respectively. For all possible choices of input limbs, the single- and dual-bit differential probabilities are defined by

Pr[OD = 1 | ID] = (1/2)(1 + ε_d),

where ε_d denotes the bias of the OD. To distinguish the r-round limb x^(r) computed by the reduced-round Friet-PC from true random number sequences, we use the following theorem, proved by Mantin and Shamir [21].

Theorem 2 ([21, Theorem 2]). Let X and Y be two distributions, and suppose that the event e occurs in X with probability p and in Y with probability p·(1 + q). Then, for small p and q, O(1/(p·q²)) samples suffice to distinguish X from Y with a constant probability of success.

Let X be the distribution of the OD of true random number sequences, and let Y be the distribution of the OD of the reduced-round Friet-PC. Based on the single- and dual-bit differential probabilities, the number of samples required to distinguish X and Y is O(2/ε_d²), since p and q are equal to 1/2 and ε_d, respectively.
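As a quick arithmetic check of this sample count (our own, not from the paper), the estimate 2/ε_d² can be evaluated in log2 form; with the biases reported in the next subsection, it reproduces the quoted figures of 2^20.044 and 2^38.268 samples.

```python
def log2_samples(log2_eps):
    # Theorem 2 with p = 1/2 and q = eps_d: 1/(p*q^2) = 2/eps_d^2
    return 1.0 - 2.0 * log2_eps

print(log2_samples(-9.522))    # 20.044 (9-round bias)
print(log2_samples(-18.634))   # 38.268 (refined 10-round bias)
```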
Experimental Results

To find bit-wise differential biases of the reduced-round Friet-PC, we conducted experiments with 2^28 randomly chosen samples. Our experimental environment was as follows: five Linux machines with 40-core Intel(R) Xeon(R) CPU E5-2660 v3 (2.60 GHz) processors, 128.0 GB of main memory, the gcc 7.2.0 compiler, and the C programming language. Tables 7-9 list the single- and dual-bit differential biases for the 9-, 10-, and 11-round Friet-PC, respectively.

Table 7. Single- and dual-bit differential biases (log2) for the 9-round Friet-PC.
Table 8. Single- and dual-bit differential biases (log2) for the 10-round Friet-PC.
Table 9. Single- and dual-bit differential biases (log2) for the 11-round Friet-PC.

As shown in Table 7, we obtain the best bit-wise differential bias for the 9-round Friet-PC with ID Δb^(0)_40 and OD Δa^(9)_121 ⊕ Δc^(9)_54, where ε_d is approximately 2^-9.360. To obtain a more precise differential bias for this ID-OD pair, we conducted an additional experiment with 2^36 randomly chosen samples, yielding a more precise differential bias of ε_d ≈ 2^-9.522 for the 9-round Friet-PC. According to Theorem 2, 2^20.044 samples are then sufficient for distinguishing the 9-round Friet-PC from a true random number generator with a constant probability of success. For the 9-round Friet-PC, the best dual-bit differential bias, i.e., ε_d = 2^-9.522 with ID Δb^(0)_40 and OD Δa^(9)_121 ⊕ Δc^(9)_54, thus provides a practical bit-wise differential distinguisher.

Similarly, as shown in Tables 8 and 9, we obtain the best bit-wise differential biases for the 10- and 11-round Friet-PC, with ε_d approximately 2^-11.501 and 2^-11.596, respectively. These experimental results may indicate insufficient accuracy, because the best differential biases for the 10- and 11-round Friet-PC are approximately equal, suggesting that the measured values are dominated by sampling noise. To obtain a more precise differential bias for the 10-round Friet-PC, we conducted an additional experiment with 2^38 randomly chosen samples for the best 10-round ID-OD pair, ID Δa^(0)_118 and OD Δb^(10)_122 ⊕ Δc^(10)_45. Consequently, we obtain a more precise differential bias of ε_d ≈ 2^-18.634 for the 10-round Friet-PC; thus, approximately 2^38.268 samples are required to distinguish the 10-round Friet-PC from a true random number generator with a constant probability of success. In summary, our experiments reveal that the practical bit-wise differential distinguisher for Friet-PC performs properly up to 9 rounds (out of the 24 rounds of the original version).
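Such a bias search amounts to straightforward Monte Carlo counting. The sketch below is a minimal Python illustration rather than the authors' C code: `friet_pc_9r` stands in for a reduced-round Friet-PC implementation on three 128-bit limbs (a, b, c), which is not provided here, and the convention Pr[OD = 1 | ID] = (1 + ε_d)/2 from the previous subsection is assumed.

```python
import math, random

def estimate_log2_bias(perm, id_limb, id_bit, od_bits, n_samples=1 << 20):
    """od_bits: list of (limb, bit) positions XORed together to form the OD."""
    ones = 0
    for _ in range(n_samples):
        state = [random.getrandbits(128) for _ in range(3)]   # limbs (a, b, c)
        flipped = list(state)
        flipped[id_limb] ^= 1 << id_bit                       # inject single-bit ID
        out0, out1 = perm(state), perm(flipped)
        od = 0
        for limb, bit in od_bits:
            od ^= ((out0[limb] ^ out1[limb]) >> bit) & 1
        ones += od
    eps = 2.0 * ones / n_samples - 1.0                        # Pr[OD=1] = (1+eps)/2
    return math.log2(abs(eps)) if eps else float("-inf")

# e.g. the best 9-round ID-OD pair reported above:
# estimate_log2_bias(friet_pc_9r, id_limb=1, id_bit=40,       # ID: b(0)_40
#                    od_bits=[(0, 121), (2, 54)])             # OD: a(9)_121 ^ c(9)_54
```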
Zero-sum Distinguisher and Division Property

The zero-sum distinguisher [3] is a widely utilized tool for evaluating the security of a public permutation, although, as far as we know, it has never affected the security of the corresponding hash or encryption schemes. A critical reason lies in the attacker's assumed capacity to control the whole internal state, which is impossible in schemes adopting the sponge structure. However, it is still interesting whether one can identify a non-trivial zero-sum distinguisher with a better time complexity than those obtained from trivial algebraic degree evaluations. The bit-based division property [27] is a powerful technique to compute the increase of the algebraic degree of a bit-oriented public permutation, especially when combined with the automatic search method [28]. However, the usage of the division property was not discussed in the proposal of Friet [26], and we believe such an analysis is essential if a non-trivial increase of the algebraic degree can be identified. Consequently, in the following part, we briefly introduce the bit-based division property [27] and then report our findings.

First, for u, x ∈ F_2^n, define the bit product function π_u(x) = ∏_{i: u_i = 1} x_i, which evaluates the monomial of x indicated by u. The bit-based division property [27] can then be defined as follows:

Definition 2 (Bit-Based Division Property). Let X be a multiset whose elements take values in F_2^n. The multiset X has the division property D^(1^n)_K, where K denotes a set of n-dimensional vectors whose elements take the value 0 or 1, if it fulfills the following condition: the parity ⊕_{x∈X} π_u(x) is unknown if there is a vector k ∈ K such that u ⪰ k (i.e., u_i ≥ k_i for all i), and is 0 otherwise; wt(u) denotes the hamming weight of u. If there are k ∈ K and k′ ∈ K satisfying k ⪰ k′ in the division property D^(1^n)_K, then k can be removed from K because it is redundant.

When we utilize the MILP method to evaluate the propagation of the division property, we need to focus on the elements of K. Xiang et al. proposed a new notation [28], called the division trail, to illustrate the propagation of the division property; it can be defined as follows:

Definition 3 (Division Trail). Let f_r denote the round function of an iterated block cipher. Assume that the input multiset to the block cipher has the initial division property D^(1^n)_{K_0}, and denote the division property after i rounds of propagation through f_r by D^(1^n)_{K_i}. Thus, we have the following chain of division property propagations: K_0 → K_1 → · · · → K_r. Moreover, for any vector k*_i in K_i (i ≥ 1), there must exist a vector k*_{i−1} in K_{i−1} such that k*_{i−1} can propagate to k*_i by the division property propagation rules. Furthermore, for (k_0, k_1, · · ·, k_r) ∈ K_0 × K_1 × · · · × K_r, if k_{i−1} can propagate to k_i for all i ∈ {1, 2, · · ·, r}, we call (k_0, k_1, · · ·, k_r) an r-round division trail.

Proposition 1. Denote the division property of the input multiset to an iterated block cipher by D^(1^n)_k, and let f_r be the round function. Then the set of the last vectors of all r-round division trails that start with k is equal to K_r.

In general, we need to show that no vector of K_r derived from the division property D^(1^n)_{K_0} of the input multiset has a hamming weight less than or equal to 1; that is, we need to prove that no division trail ends in a unit vector, for which the parity would be unknown.
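Before setting up the MILP models, the following toy sketch (entirely our own illustration; the 8-bit AND-RX-style map is an arbitrary stand-in, not Friet-PC) shows what the BALANCED property means operationally: with s = 5 active input bits and two rounds of a degree-2 update, the algebraic degree is at most 4 < s, so every output bit must XOR-sum to zero over the input structure.

```python
from itertools import product

W, MASK = 8, 0xFF

def rotl(x, r):
    return ((x << r) | (x >> (W - r))) & MASK

def toy_round(x, rc):
    x ^= rotl(x, 1) & rotl(x, 2)   # chi-like non-linear layer (degree 2)
    return rotl(x, 3) ^ rc         # linear diffusion plus round constant

def toy_perm(x):
    for rc in (0x1D, 0xA6):        # two rounds: algebraic degree <= 4
        x = toy_round(x, rc)
    return x

active = (0, 1, 2, 3, 4)           # s = 5 bits set to ALL, the rest CONSTANT
xor_sum = 0
for bits in product((0, 1), repeat=5):
    x = sum(b << pos for pos, b in zip(active, bits))
    xor_sum ^= toy_perm(x)

print([i for i in range(W) if not (xor_sum >> i) & 1])  # all 8 bits balanced
```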
MILP Modeling

In this subsection, we describe the MILP-based method used to search for integral distinguishers [7] and explain how to express the division property propagation through the basic operations of Friet-PC, based on the method proposed by Xiang et al. [28]. When evaluating the propagation of the division property, it is necessary to consider the basic operations of a block cipher, namely COPY, XOR, and AND. In the following, we introduce the bit-based division property propagation through these operations and show how to express each propagation as linear inequalities.

Modeling COPY. COPY is a basic operation used, for example, in Feistel ciphers: a portion of the input is copied into two equal parts, one of which is fed to the round function. Let F be the function taking x ∈ F_2 as input and producing (y_0, y_1) = (x, x) as output. If the input multiset X has division property D_k, the output multiset Y has division property D_{K′} with K′ = {(k − i, i) | 0 ≤ i ≤ k}. Since we consider the bit-based division property, we only need to consider the propagation for k ≤ 1. Thus, the division trails are (0) --COPY--> (0, 0), (1) --COPY--> (0, 1), and (1) --COPY--> (1, 0). Let (a) --COPY--> (b_0, b_1) be a division trail through the COPY operation; the following constraint is sufficient to describe the propagation [28]:

a − b_0 − b_1 = 0, with a, b_0, b_1 binary.

Modeling XOR. Let F be the function taking (x_0, x_1) ∈ F_2 × F_2 as input and producing y = x_0 ⊕ x_1 as output. If the input multiset X has division property D_K, the output multiset Y has division property D_{k′} with k′ = min_{(k_0, k_1) ∈ K} (k_0 + k_1). Let (a_0, a_1) --XOR--> (b) be a division trail through the XOR operation; the following constraint is sufficient to describe the propagation [28]:

a_0 + a_1 − b = 0, with a_0, a_1, b binary.

Modeling AND. Let F be the function taking (x_0, x_1) ∈ F_2 × F_2 as input and producing y = x_0 ∧ x_1 as output. If the input multiset X has division property D_K, the output multiset Y has division property D_{k′}, with the bit-level division trails (0, 0) --AND--> (0), (0, 1) --AND--> (1), (1, 0) --AND--> (1), and (1, 1) --AND--> (1). Let (a_0, a_1) --AND--> (b) be a division trail through the AND operation; the following inequalities are sufficient to describe the propagation [28]:

b − a_0 ≥ 0, b − a_1 ≥ 0, b − a_0 − a_1 ≤ 0, with a_0, a_1, b binary.

The Initial Division Property. Since we search for integral distinguishers based on the bit-based division property, it is necessary to set the input division property to ALL (A) or CONSTANT (C) for each bit independently. Assuming we have 2^s chosen plaintexts, we can set s bits of the initial division property to ALL (A).

Stopping Rule. Let (a^0_{n−1}, · · ·, a^0_0) → · · · → (a^r_{n−1}, · · ·, a^r_0) be an r-round division trail. If, for a given initial division property, no trail exists whose output division property has only the i-th bit (0 ≤ i < n) equal to 1 and all other bits equal to 0, then the i-th bit holds the BALANCED (B) property; otherwise it is UNKNOWN (U). Whether such a trail exists can easily be evaluated with MILP [28]: if the model is infeasible under the given constraints, no such trail exists, and vice versa.
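These propagation rules translate directly into solver code. The following gurobipy sketch is our own illustration (assuming the Gurobi Python interface of [11] is available); it shows only the three local models and the infeasibility test behind the stopping rule, while the wiring of these models into the full 384-bit Friet-PC round structure is omitted.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("division-trail")
m.Params.OutputFlag = 0

def new_bit():
    return m.addVar(vtype=GRB.BINARY)

def copy_(a):                 # a --COPY--> (b0, b1):  a - b0 - b1 = 0
    b0, b1 = new_bit(), new_bit()
    m.addConstr(a == b0 + b1)
    return b0, b1

def xor_(a0, a1):             # (a0, a1) --XOR--> b:  a0 + a1 - b = 0
    b = new_bit()
    m.addConstr(b == a0 + a1)
    return b

def and_(a0, a1):             # (a0, a1) --AND--> b:  b >= a0, b >= a1, b <= a0 + a1
    b = new_bit()
    m.addConstr(b >= a0)
    m.addConstr(b >= a1)
    m.addConstr(b <= a0 + a1)
    return b

# Usage: fix the input variables to 1 (ALL) or 0 (CONSTANT), chain the
# round operations with copy_/xor_/and_, constrain the output variables to
# the unit vector e_i, and call m.optimize(). If m.Status == GRB.INFEASIBLE,
# no trail reaches e_i and the i-th output bit is BALANCED.
```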
Our Search

We modeled the operations of the Friet-PC round as MILP constraints and optimized the models using an MILP solver. All models were solved with the Gurobi solver [11]. All searches were performed on a machine equipped with an Intel(R) Core(TM) i9-9900K CPU @ 3.60 GHz with HyperThreading enabled. From Fig. 1, it is clear that the input of limb b does not pass through the AND operation, which is the only non-linear transformation in the round function. Therefore, for a zero-sum distinguisher with a low time complexity, it is always better to choose as many active bits from limb b as possible. The obtained integral distinguishers are shown in Table 10.

Zero-sum Distinguishers

The above integral distinguishers can be converted into zero-sum distinguishers with a start-from-the-middle method, as in [3]. Specifically, we view an internal state in a middle round as the input and search for integral distinguishers in both the backward and forward directions. As a result, the following four zero-sum distinguishers can be constructed:

- a 13-round zero-sum distinguisher with 2^31 time and data complexity;
- a 15-round zero-sum distinguisher with 2^63 time and data complexity;
- a 17-round zero-sum distinguisher with 2^127 time and data complexity;
- a 30-round zero-sum distinguisher with 2^383 time and data complexity.

In summary, a practical 13-round zero-sum distinguisher and a theoretical 17-round zero-sum distinguisher with time complexity below 2^128 are obtained. However, the full-round zero-sum distinguisher requires half of the total input space, i.e., 2^383 time and data.

Remark. It is in general difficult to compare distinguishers on a public permutation if the attacker has control over the full internal state, as such control is always impossible in schemes constructed from a public permutation and the sponge structure. Notice that the distinguishing attacks reported in [19] also require the capability to control the whole internal state of Friet-PC.

Conclusion

In this study, we evaluated the security of the Friet-PC permutation against bit-wise cryptanalysis, including rotational, bit-wise differential, and integral attacks. First, we provided a generic procedure for a rotational attack on AND-RX ciphers with round constants and applied it to the Friet-PC permutation, demonstrating an 8-round rotational distinguisher with a time complexity of 2^102. Second, we explored single- and dual-bit differential biases of the reduced-round Friet-PC and extended one of them to a 9-round bit-wise differential distinguisher with a time complexity of 2^20.044. Finally, we found 7-, 8-, 9-, and 15-round integral characteristics and extended these characteristics to 13-, 15-, 17-, and 30-round zero-sum distinguishers with time complexities of 2^31, 2^63, 2^127, and 2^383, respectively. We thus improved on the best existing attack against the reduced-round Friet-PC, reported by Liu et al. [19]. We remark that the proposed attacks pose no practical threat to Friet-PC; nevertheless, we recommend using these bit-wise cryptanalysis attack vectors when evaluating the security of AND-RX ciphers designed in the future.
Dawn-dusk asymmetries in the coupled solar wind-magnetosphere-ionosphere system: a review

Dawn-dusk asymmetries are ubiquitous features of the coupled solar-wind-magnetosphere-ionosphere system. During the last decades, the increasing availability of satellite and ground-based measurements has made it possible to study these phenomena in more detail. Numerous publications have documented the existence of persistent asymmetries in the processes, properties and topology of plasma structures in various regions of geospace. In this paper, we present a review of our present knowledge of some of the most pronounced dawn-dusk asymmetries. We focus on four key aspects: (1) the role of external influences such as the solar wind and its interaction with the Earth's magnetosphere; (2) properties of the magnetosphere itself; (3) the role of the ionosphere; and (4) feedback and coupling between regions. We have also identified potential inconsistencies and gaps in our understanding of dawn-dusk asymmetries in the Earth's magnetosphere and ionosphere.

Introduction

In recent years, the increasing availability of remotely sensed and in situ measurements of the ionosphere, magnetosphere and magnetosheath has allowed ever-larger statistical studies to be carried out. Equally, advances in technology and methodology have allowed increasingly detailed and realistic simulations. These studies and simulations have revealed significant, persistent dawn-dusk asymmetries throughout the solar-wind-magnetosphere-ionosphere system. Dawn-dusk asymmetries have been observed in the Earth's magnetotail current systems and particle fluxes; in the ring current; and in polar cap patches and the global convection pattern in the ionosphere. Various authors have related these asymmetries to differences in solar illumination, ionospheric conductivity and processes internal to the magnetosphere. Significant dawn-dusk asymmetries have also been observed in the terrestrial magnetosheath, and there is evidence that plasma entry mechanisms to the magnetotail, for example, operate differently in the pre- and post-midnight sectors. The purpose of this review is to identify and collect current knowledge about dawn-dusk asymmetries, examining the solar-wind-magnetosphere-ionosphere system as a whole. We consider the roles that coupling between the solar wind and magnetosphere, between the magnetosphere and ionosphere, and between different plasma regimes within the magnetosphere itself play in creating and supporting these asymmetries. We provide a schematic summary of current understanding of dawn-dusk asymmetries (Fig. 18), and also highlight inconsistencies and gaps in this knowledge, identifying possible directions for future work in this area.

Observed asymmetries

In this section we review the various dawn-dusk asymmetries that have been observed in the solar-wind-magnetosphere-ionosphere system.

Solar wind and interplanetary magnetic field

The outer layers of geospace, from the foreshock inward through the magnetosheath to the magnetopause, are formed from the incident solar wind perturbed by the terrestrial magnetic field. A number of dawn-dusk asymmetries arise in these regions. The first asymmetry comes from the orbital motion of the Earth around the Sun. This motion causes the direction of the solar wind flow in a geocentric reference frame to be aberrated from the Earth-Sun line by roughly four degrees for a typical solar wind velocity. This provides a natural axis of symmetry for studies of dawn-dusk asymmetries in the magnetospheric system; coordinates aligned with this axis are often called "aberrated" coordinates.
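The size of this aberration is simple to verify: it is the angle of the vector sum of the Earth's orbital velocity (about 29.8 km s⁻¹) and a typical radial solar wind flow (about 400 km s⁻¹); a one-line check in Python:

```python
import math
print(math.degrees(math.atan2(29.8, 400.0)))   # ~4.3 degrees
```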
The second upstream asymmetry comes from the average orientation of the interplanetary magnetic field (IMF) permeating the solar wind. The IMF vector is variable, but its average orientation follows the Parker spiral. Since this direction is typically not aligned with the solar wind flow, an asymmetry is introduced into the magnetospheric system by the different orientations of the IMF with respect to the bow shock normal in the dawn and dusk sectors. Figure 1 shows the average properties of the IMF; the two maxima in the B_X and B_Y histograms correspond to the inward and outward Parker spiral orientations.

Figure 1. Histograms of the IMF components in the GSE coordinate system. The two maxima in the B_X and B_Y plots correspond to the inward and outward Parker spiral directions, the most probable IMF orientations.

Foreshock

The foreshock is the region of the solar wind magnetically connected to the bow shock. Its geometry, properties and location are mediated by the IMF. Under the typical Parker spiral IMF, the foreshock is formed on the dawn side, where the angle between the IMF and the shock normal (θ_Bn) is small and particles can more easily cross the shock front. Since the IMF and the bow shock normal vector are close to parallel, this region is called the quasi-parallel shock, as opposed to the quasi-perpendicular shock, where the IMF is nearly tangent to the shock surface and no foreshock is formed. The generation of the foreshock therefore provides an upstream "boundary condition" for magnetosheath processes that varies between the dawn and dusk sides. The foreshock differs from the pristine, unperturbed solar wind by the presence of particles (electrons and ions) back-streaming away from the shock. These particles are responsible for the generation of various waves in the foreshock plasma. Both the particles and the plasma oscillations can be convected back to the shock and drive shock or magnetosheath oscillations. A detailed review of foreshock properties can be found in Eastwood et al. (2005b); here we review only the aspects relevant to asymmetries induced farther downstream. The foreshock region is conventionally divided into two parts: electron and ion. The electron foreshock, the upstream-most part adjacent to the IMF line tangent to the shock, is populated only by back-streaming electrons and the associated electron plasma waves (Filbert and Kellogg, 1979). The processes in the electron foreshock have very little influence on the shock and the magnetosheath. On the other hand, the processes in the ion foreshock, where reflected and back-streaming ions are also present (Meziane et al., 2004), influence the bow shock and the magnetosheath significantly. Figure 2 shows the geometry and magnetic field configuration of the ion foreshock, bow shock and magnetosheath.

Figure 2. Schematic view of the foreshock, bow shock and magnetosheath of the Earth. The ripples in the magnetic field represent foreshock ULF waves and turbulence downstream of the quasi-parallel shock. The distribution function plots show the field-aligned ion beams (close to the ion foreshock boundary) and the diffuse ions (close to the quasi-parallel shock). Adapted from Balogh and Treumann (2013).

The distribution function plots show the diffuse hot ions leaking from the quasi-parallel shock back into the solar wind (Gosling et al., 1989). The ultra-low frequency (ULF) waves in the ion foreshock have been identified as fast-mode magnetosonic waves generated by the ion beams (Archer et al., 2005; Eastwood et al., 2005a). Note that the region populated by waves is a sub-section of the ion foreshock, separated by a clear boundary called the foreshock compressional boundary (e.g. Omidi et al., 2009).
The foreshock ULF waves typically propagate upstream in the plasma rest frame, but are convected downstream by the solar wind and enter the quasi-parallel shock region, modulating the shock (Sibeck and Gosling, 1996) and possibly being transmitted into the magnetosheath (Engebretson et al., 1991), as discussed in Sect. 2.1.2. Since the foreshock occupies only the area upstream of the quasi-parallel shock, this transmission of foreshock oscillations into the magnetosheath occurs only on the quasi-parallel side of the magnetosheath (the dawn side for a Parker spiral IMF orientation), introducing a dawn-dusk asymmetry into the magnetosheath.

Magnetosheath asymmetries

Standing fast-mode waves known as bow shocks decelerate and deflect the supersonic and super-Alfvénic solar wind, enabling it to pass around planetary and cometary obstacles throughout the heliosphere. The transition region between a bow shock and its obstacle is called the magnetosheath. Early theoretical considerations proposed dawn-dusk asymmetries of density, temperature, pressure and bulk flow within the magnetosheath (Walters, 1964). These predictions were based on differing Rankine-Hugoniot shock jump conditions for a magnetic field parallel or perpendicular to the bow shock. A Parker spiral magnetic configuration incident upon the bow shock would introduce the necessary geometry for dawn-dusk asymmetries. Since these early theoretical predictions, a number of statistical studies have been conducted with a variety of spacecraft and have found a range of asymmetries in the magnetosheath (see summary in Table 1). One parameter that has been studied by a number of authors is the ion plasma density. Although a higher ion density was observed in the dawn magnetosheath in a number of studies, the magnitude of this asymmetry varied from 1 to 33 %. Several studies proposed an IMF source of the asymmetry, but were unable to confirm this by binning the measurements by upstream IMF (Paularena et al., 2001; Longmore et al., 2005). One possible reason for this result is the limited statistics available for ortho-Parker spiral IMF, i.e. an IMF for which the quasi-parallel bow shock is on the duskside. Walsh et al. (2012) proposed that the density asymmetry results from an asymmetric bow shock shape in response to the direction of the IMF. The bow shock is a fast-mode wave, which travels faster perpendicular to a magnetic field than parallel to it (Wu, 1992; Chapman et al., 2004). This results in a bow shock that is radially farther from the Earth on the duskside than on the dawn side when the IMF is in a Parker spiral orientation. Figure 3 shows the impact of the IMF angle on the bow shock position and Alfvénic Mach number through magnetohydrodynamics (MHD). An additional feature shown in the figure is that the asymmetry is a function of the Alfvénic Mach number.

Figure 3. Bow shock position and plasma density from MHD simulations with varying Alfvénic Mach number and magnetic field orientation. From left to right the Alfvénic Mach number decreases; from top to bottom the orientation of the magnetic field changes from close to parallel to the flow direction to 90° from it. Adapted from Chapman et al. (2004).
Since the average Alfvénic Mach number in the solar wind varies with the phase of the solar cycle (Luhmann et al., 1993), the magnitude of the density asymmetry in the average magnetosheath should also vary with the phase of the solar cycle, with a larger asymmetry during solar minimum. Walsh et al. (2012) looked at the average Alfvénic Mach number during each of the past studies and found good agreement with the expected trend in the density asymmetry. An asymmetric bow shock position resulting from the Parker spiral IMF also explains the asymmetries observed in ion temperature and magnetic field (see Table 1).

Waves and kinetic effects in the magnetosheath

In addition to the asymmetries in plasma moments and magnetic field magnitude in the magnetosheath, there are also observed asymmetries in waves and kinetic effects. Since the first spacecraft observations, it has been known that the magnetosheath is populated by turbulent field and plasma oscillations covering the frequency range from the timescale of minutes to well above the ion plasma frequency. Early works suggested that magnetic field fluctuations can originate both from the upstream solar wind and foreshock, as well as from the magnetopause, while some are generated by plasma instabilities within the magnetosheath itself (for a review, see Fairfield, 1976). Fairfield and Ness (1970) noted a dawn-dusk asymmetry in the amplitude of magnetic field oscillations. Later systematic studies with the aid of an upstream solar wind monitor established that the IMF B_Y component, and consequently the θ_Bn parameter of the upstream shock, are important factors in determining the properties of magnetosheath fluctuations. Luhmann et al. (1986) demonstrated an increased level of magnetosheath field fluctuations (using 4 s resolution data) behind the quasi-parallel shock. Two decades later, Shevyrev et al. (2007) showed that the direction of the field varies much more in the quasi-parallel magnetosheath than in the quasi-perpendicular one. This effect is visualised in Fig. 4, adapted from Petrinec (2013), who presented a global view of magnetosheath field fluctuations using median magnetic field measurements from Geotail observations, restricted to the Parker spiral IMF direction. The above studies confirmed that the quasi-parallel shock is a more efficient source of magnetosheath oscillations at longer timescales (wave periods > 1 min) and that the oscillations resemble solar wind turbulence. Controversy remains concerning the precise generating mechanism of the turbulence at the quasi-parallel shock. Both locally generated turbulence at the shock (Greenstadt et al., 2001; Luhmann et al., 1986) and transmission of upstream foreshock fluctuations (Engebretson et al., 1991; Sibeck and Gosling, 1996; Němeček et al., 2002) have been proposed. Gutynska et al. (2012) investigated multi-spacecraft correlations between the magnetosheath and solar wind and concluded that fluctuations with wave periods longer than 100 s can often be traced back to solar wind fluctuations, while smaller-scale fluctuations are not correlated with upstream waves.
Consistent with this result, field and plasma oscillations in the quasi-perpendicular magnetosheath are typically smaller in amplitude and more compressive in nature (e.g. Shevyrev et al., 2007). This can be explained by the dominance of locally generated kinetic waves and, most importantly, mirror modes. Magnetosheath ions are characterised by a relatively high plasma beta (> 1) and a significant temperature anisotropy T⊥/T∥ > 1, giving rise to two kinetic instabilities: the ion cyclotron instability and the mirror instability. In the magnetosheath plasma, these two instabilities often compete, and both modes are frequently observed (for reviews, see Schwartz et al., 1996; Lucek et al., 2005). These waves typically appear at shorter timescales, below one minute, and can grow to significant amplitudes. Anderson and Fuselier (1993) compared the occurrence rates of mirror and EMIC waves for quasi-perpendicular and quasi-parallel shock conditions. The wave character was identified by spectral analysis, and the nature of the shock was identified from the content of energetic He++ ions. Their results clearly indicate an increased wave (and in particular mirror mode) occurrence under quasi-perpendicular conditions. Génot et al. (2009) performed a statistical study of the occurrence of mirror structures over 5 years of Cluster observations using the GIPM (geocentric interplanetary medium) reference frame (Verigin et al., 2006), in which fluctuations in the IMF direction are normalised away. Again, the results show a greater occurrence of mirror structures in the quasi-perpendicular hemisphere. In summary, low-frequency field and plasma oscillations are ubiquitous in the magnetosheath and are organised according to upstream shock conditions. The quasi-parallel magnetosheath (found on the dawn side for the predominant Parker spiral IMF) is typically more turbulent, with large-amplitude, long wave period oscillations. On the other hand, quasi-perpendicular (predominantly dusk) magnetosheath oscillations are dominated by EMIC and mirror waves with smaller amplitudes and shorter wave periods. While this distinction is clearly observed in statistical studies and often in case studies, a large percentage of magnetosheath observations include a superposition of both effects (Fuselier et al., 1994). The identified asymmetries in observed field and plasma oscillations are summarised in Table 1.

Magnetopause asymmetries

The magnetopause is a thin current sheet separating the shocked magnetosheath plasma and its embedded interplanetary magnetic field on one side from the geomagnetic field on the other. The current in the magnetopause is primarily caused by the differential motion of ions and electrons as they encounter the sharp magnetic gradient of the geomagnetic field. For a comprehensive overview of the magnetopause and its properties we refer to, for example, Hasegawa (2012); below we focus only on dawn-dusk asymmetries in the magnetopause. Simultaneous measurements from both flanks of the magnetopause are rare. Also, the large variability in the thickness, orientation and motion of the magnetopause makes any direct comparison between the dawn and dusk flank magnetopause of little use. To our knowledge, the only study focusing explicitly on dawn-dusk asymmetries in macroscopic features of the magnetopause is the paper by Haaland and Gjerloev (2013).
They used measurements from more than 5000 magnetopause traversals near the ecliptic plane by the Cluster constellation of satellites and reported significant and persistent dawn-dusk asymmetries in current density and magnetopause thickness. Figure 5 shows the distribution of observed current densities for the dawn (red bars) and dusk (blue bars) magnetopause crossings during disturbed geomagnetic conditions. Most of the dawn magnetopause crossings have a current density of around 10-15 nA m⁻², whereas the typical current density at dusk is around 25-30 nA m⁻². The mean current densities are 18 and 27 nA m⁻² for dawn and dusk, respectively. Haaland and Gjerloev (2013) noted that the dawn magnetopause was thicker, suggesting that the total current intensities on the two flanks are roughly equal.

Figure 5. Distribution of observed current densities for dawn (red bars) and dusk (blue bars) magnetopause crossings during disturbed geomagnetic conditions. The mean, median and mode current densities at dusk are significantly higher than their dawn counterparts. After Haaland and Gjerloev (2013).

Two possible explanations for these dawn-dusk asymmetries are conceivable, both related to the boundary conditions. First, asymmetries in the magnetosheath, as reported in Sect. 2.1.2, will influence the geometry and properties of the magnetopause. A higher duskside magnetosheath magnetic field will cause a higher magnetic shear across the magnetopause, and thus a higher current density. Asymmetries in plasma parameters, in particular dynamic pressure, may also contribute, though simulations suggest that pressure enhancements are more likely to displace the magnetopause than to compress it (Sonnerup et al., 2008). A second source of dawn-dusk asymmetry in magnetopause parameters is asymmetry in the ring current. In particular during disturbed conditions, the dusk sector of the ring current shows faster energisation and a higher current density than its dawn counterpart (Newell and Gjerloev, 2012). As a consequence, there will be a stronger magnetic perturbation at dusk and thus a higher magnetic shear across the magnetopause.

Several potential mechanisms by which plasma can enter the magnetosphere through the flank magnetopause have been suggested. These are thought to be most important when the magnetosphere is exposed to northward IMF, when the Dungey cycle (Dungey, 1961) does not dominate. These processes include transport via kinetic Alfvén waves (e.g. Johnson and Cheng, 1997), gradient drift entry (Olson and Pfitzer, 1985) and entry through rolled-up Kelvin-Helmholtz vortices (e.g. Terasawa, 1994, 1995). Entry through double cusp (also known as dual lobe) reconnection (Song and Russell, 1992) is also a possible mechanism during northward IMF. Asymmetries in reconnection at the dayside magnetopause under southward IMF, and the associated plasma entry, are discussed in Sect. 3.1. Each of the mechanisms discussed above does not necessarily operate symmetrically with respect to the noon-midnight meridian, either because of its intrinsic properties or because of the dawn-dusk asymmetries in the magnetosheath discussed in Sect. 2.1.2. This asymmetric plasma entry will also have consequences for the plasma sheet (see Sect. 2.3.2). ULF waves in the magnetosheath can generate kinetic Alfvén waves (KAWs) when they interact with the magnetopause boundary (Johnson and Cheng, 1997) and in so doing stimulate the diffusive transport of ions into the magnetosphere. A recent survey by Yao et al. (2011) has shown that the wave power associated with KAWs is enhanced at the dawn magnetopause, which suggests enhanced transport on that flank.
KAWs can heat ions both parallel (Hasegawa and Chen, 1975; Hasegawa and Mima, 1978) and, when they have a sufficiently large amplitude, perpendicular to the magnetic field, suggesting that if KAW-driven transport does preferentially occur on the dawn flank magnetopause, it would also be associated with heating of the transported magnetosheath plasma. The growth of the Kelvin-Helmholtz instability may also have a dawn-dusk asymmetry. If finite Larmor radius effects are taken into account, growth is favoured on the duskside (Huba, 1996), while conditions in the magnetosheath under Parker spiral IMF might favour growth on the dawn side (e.g. Engebretson et al., 1991). A statistical study of the occurrence of Kelvin-Helmholtz vortices on the flank magnetopause from Geotail data (Hasegawa et al., 2006) suggests no particular dawn-dusk asymmetry, although the majority of the detections were made antisunward of the terminator. An extension of this study by Taylor et al. (2012), including Double Star TC-1 data, did find an asymmetry, with the occurrence of Kelvin-Helmholtz vortices favoured on the dusk flank magnetopause; however, this asymmetry was only present on the dayside. Simultaneous observations of Kelvin-Helmholtz vortices on both flanks are rare, and as such it is difficult to address any dawn-dusk asymmetry in their properties. However, Nishino et al. (2011) reported one observation of vortices occurring simultaneously on both flanks and showed that while their macroscopic properties were similar, differences were observed on a microscopic level, with more plasma mixing between magnetosheath and magnetospheric populations in the dawnside vortex than in the duskside vortex. Gradient drift entry naturally provides a dawn-dusk asymmetry: ions drift into the magnetosphere through the magnetopause on the dawn side, while electrons enter on the duskside (Olson and Pfitzer, 1985). However, the efficiency of gradient drift entry, and hence its potential to contribute to observed asymmetries in magnetospheric plasma, is not well constrained. Treumann and Baumjohann (1988) calculated that only 5 % of magnetosheath particles that come into contact with the magnetopause become trapped, while through test particle simulations Richard et al. (1994) showed that double cusp reconnection provides a much more efficient entry process. Indeed, double cusp reconnection operating under northward IMF is thought to be one of the dominant formation mechanisms for the cold dense plasma sheet. MHD simulations suggest that any dawn-dusk asymmetry in solar wind entry by double cusp reconnection is related to ionospheric conductance (Li et al., 2008a).

Magnetotail asymmetries

Throughout this review we will, in general, consider asymmetries about the noon-midnight meridian. Whilst at the boundaries of the magnetosphere such asymmetries are readily identifiable, as most of the boundaries are located well away from the meridian, within the magnetosphere asymmetries may depend on the coordinate system used. For example, the solar wind flow is not necessarily radial in the frame of the Earth; any non-radial flow will deflect the central axis of the magnetosphere away from the X_GSM axis (GSM = Geocentric Solar Magnetic; see e.g. Hapgood, 1997, for some commonly used coordinate systems and their definitions).
The aberrated GSM (AGSM) coordinate system attempts to correct for this and has, for example, been shown to reduce the apparent asymmetry in convective flows in the magnetotail (Juusola et al., 2011).

Geometry and current systems

The magnetotail current sheet is often considered to be a static, Harris-type (Harris, 1962) current sheet separating the oppositely directed magnetic fields in the lobes. There is now ample evidence, particularly from the Cluster spacecraft, that the current sheet is in motion (e.g. Ness et al., 1967; Zhang et al., 2005; Sergeev et al., 2006; Forsyth et al., 2009), is bifurcated, or shows embedded current sheet signatures, and is not, in fact, Harris-like in a statistical sense (e.g. Rong et al., 2011). Statistical studies have also shown that the current sheet tends to be thinner, with a greater current density, on the duskward side of the magnetotail. A number of multi-spacecraft analysis techniques have been developed to determine the current density within the current sheet and the sheet thickness (Dunlop et al., 1988; Shen et al., 2007; Artemyev et al., 2011). While the specifics of these techniques vary, they share the commonality that they all derive the currents from magnetic field measurements by Cluster. Statistically, the magnetotail current density measured by Cluster was consistently observed to be higher on the duskside than on the dawn side of the magnetotail (e.g. Artemyev et al., 2011; Davey et al., 2012b). However, the values observed, and the extent of the asymmetry between them, differed between studies. On the duskside, the current densities ranged from 6 to 25 nA m⁻², and on the dawn side from 4 to 10 nA m⁻². In contrast, the current sheet thickness was shown to be greater on the dawn side than on the duskside, both in absolute terms and with respect to the local ion gyroradius (Rong et al., 2011). Rong et al. (2011) also showed that the probability of observing a thin current sheet was greater towards dusk. We note that the differences in current density and thickness tended to be comparable (~1.5-2.5 times), such that the total current flowing through the current sheet appears to remain roughly constant. It should be noted that the above studies used different selection criteria to identify Cluster crossings of the tail current sheet. Rong et al. (2011) took any reversal of the B_X component of the field to be a crossing, so that multiple small-scale fluctuations were identified as individual crossings, whereas the other studies required a change in B_X of between ±5 and ±15 nT, respectively, in some cases with the further criterion that the duration of the field reversal was between 30 and 300 s. As such, Rong et al. (2011) identified 5992 crossings, Davey et al. (2012b) identified 279, and the most restrictive selection identified 78 events (although using only 1 year of Cluster data). Given the differences in the current sheet identifications and in the number of events used in these studies, it is reassuring that the overall picture in their results is similar, even if the exact values differ. The differences may be a result of the different separations between the Cluster spacecraft throughout their lifetime (e.g. Forsyth et al., 2011).
Studies of the current sheet thickness and current density by Cluster rely on the phenomenon of "magnetotail flapping" (Speiser and Ness, 1967), whereby large-scale waves cause the current sheet to move locally in the Z_GSM direction and to be tilted in the YZ_GSM plane. The occurrence frequency of flapping increases towards dusk, but the tilt of the current sheet is greater towards dawn (Davey et al., 2012b). Furthermore, flapping has been shown to increase with substorm activity, but to decrease with enhancements of the ring current (Davey et al., 2012a). Given that the thinning of current sheets during substorms is a well documented phenomenon (e.g. McPherron et al., 1973; Pulkkinen et al., 1994; Shen et al., 2008), one might expect thinner current sheets on average in the region in which most substorms occur (e.g. Frey and Mende, 2007). However, it is unclear from these results whether substorms are the cause or the consequence of thin current sheets in this sector.

Nightside plasma sheet properties

Multiple ion populations exist in the magnetotail, including components with characteristic energies of tens of eV (intense cold component), ~300-600 eV (cold component), ~3-10 keV (hot component), and ~10-100 keV (suprathermal component). The higher ion density in the dawn flank magnetosheath leads to a higher density of cold component ions towards dawn in the magnetotail under northward IMF, as observed by C.-P. Wang and co-workers. These ions have also been found to have higher temperatures at dawn than at dusk during northward IMF; in particular, they are heated perpendicular to the magnetic field (Wing et al., 2005) and during intervals of high solar wind velocity (Wang et al., 2007). Nishino et al. (2007a) found the cold component ions to have a parallel anisotropy (T∥ > T⊥) at dusk, and conjectured that this is due to adiabatic heating during sunward convection. Wing et al. (2005) used Defense Meteorological Satellite Program (DMSP) satellites to infer plasma sheet temperatures and densities during periods of northward IMF. Their cold component density and temperature profiles are displayed in Fig. 6. The cold component density profile has peaks at the dawn and dusk flanks, while the cold component temperatures are higher on the dawnside than on the duskside, consistent with Hasegawa et al. (2003). This observation suggests that the magnetosheath ions are heated in the entry process on the dawnside. The dawnside cold ion temperature is about 30-40 % higher than that on the duskside (see Fig. 6). Such asymmetric heating is consistent with the observed asymmetry in KAW transport described in Sect. 2.2. In contrast, the hot component ions have higher temperatures toward dusk, especially within ~20 R_E of the Earth, due to the energy-dependent gradient-curvature drift. Spence and Kivelson (1993) developed a finite-width magnetotail model of the plasma sheet. In addition to a deep-tail source of particles, they found that including a particle source from the low-latitude boundary layer (LLBL) on the dawn side yields agreement with measurements of pressure and density. The model predicts a significant dawn-dusk asymmetry, with higher ion pressure and temperature toward dusk for intervals of weak convection. Keesee et al. (2011) confirmed this model with average plasma sheet ion temperatures during quiet magnetospheric conditions, calculated using energetic neutral atom (ENA) data from the TWINS mission, as seen in Fig. 7.

Figure 7. Ion temperatures calculated from TWINS ENA data, mapped onto the XY_GSM plane with the Sun to the right. A black disc of radius 3 R_E, centred at the Earth, indicates the region where the analysis is not applicable. Contours of constant ion temperature as predicted by the finite-tail-width model of Spence and Kivelson (1993) are overlaid on the image. The measurements and model indicate higher plasma sheet hot component ion temperatures toward dusk during quiet magnetospheric conditions, due to the gradient-curvature drift. (Adapted from Fig. 4 in Keesee et al., 2011.)
This dawn-dusk asymmetry in ion temperatures has also been observed with in situ measurements by Geotail (Guild et al., 2008; C.-P. Wang et al., 2006). Using data from Geotail, Tsyganenko and Mukai (2003) derived a set of analytical models for the central plasma sheet density, temperature and pressure for ions with energies of 7-42 keV in the XY_GSM plane. Dawn-dusk asymmetries were found only within 10 R_E, near the boundary of their measurements, and so were not included in their models, which cover 10-50 R_E. The contrasting ion temperature asymmetries of the hot and cold ion components during northward IMF yield measurements of two peaks in the ion distribution (the hot and cold components) on the dusk flank, and one broad peak on the dawn flank (Hasegawa et al., 2003; Wing et al., 2005). C.-P. Wang and co-workers measured the total ion density to be higher toward dawn for northward IMF, primarily due to the cold component ions, yielding equal pressures at dawn and dusk. They showed that the density asymmetry weakens during southward IMF, but the temperature asymmetry remains, yielding higher pressures at dusk. The magnetospheric B_Z has been observed to be greater at dawn than at dusk (Fairfield, 1986; Guild et al., 2008; C.-P. Wang et al., 2006); this asymmetry serves to provide pressure balance against the higher pressures at dusk. Both the dawn and dusk flanks have a high flux of ions with energies < 3 keV, with the high flux extending toward the midnight meridian only from the dawn flank for intervals of northward IMF longer than an hour. This asymmetry is reduced during southward IMF, as the high flux in the dawn sector decreases. For ions with energies > 6 keV, the flux is higher at the dusk flank than at the dawn flank, with the asymmetry being stronger for higher energies and southward IMF. Both the hot and cold components of the ions flow toward the midnight meridian under strong northward IMF conditions, due to (a) viscous interaction between the plasma sheet and the lobe and (b) vortical structures due to the Kelvin-Helmholtz instability (Nishino et al., 2007b). The average quiet time flow pattern in the plasma sheet displays a dawn-dusk asymmetry, with slower, sunward-directed flows post-midnight and faster, duskward-directed flows pre-midnight (Angelopoulos et al., 1993). The asymmetry in flow direction is also observed when averaging over all flow speeds (Hori et al., 2000), though the picture becomes somewhat more complicated when fast flows alone are examined (Sect. 2.4.2). The asymmetry in perpendicular flows is most significant within 10 R_E of the midnight meridian (C.-P. Wang et al.). The larger duskward component in the slow flow results from the diamagnetic drift of ions due to the inward pressure gradient, which has a magnitude on the order of 25 km s⁻¹ (Angelopoulos et al., 1993).
Less is known about the intense cold component, because ions in this energy range can only be detected when spacecraft are negatively charged as they pass through the Earth's shadow. Seki et al. (2003) hypothesise that the intense cold component ions originate in the ionosphere, because they have not undergone the heating that would occur in the plasma sheet boundary layers. Similarly, measurements of the suprathermal component tend to be combined with the thermal component (Borovsky and Denton, 2010) or with all components (Nagata et al., 2007), such that the specific dawn-dusk characteristics of this population have not been explored. The electrons in the plasma sheet also exhibit a dawn-dusk asymmetry. Like the ions, there are two components (Wang et al., 2007; A. P. Walsh et al., 2013). Unlike the ions, however, both electron populations have been observed under northward and southward IMF, although a two-component electron plasma sheet is more likely to be observed under southward IMF (A. P. Walsh et al., 2013). Under southward IMF, the two-component electron plasma sheet is more likely to be observed in the pre-midnight sector than in the post-midnight sector. Under northward IMF, the occurrence follows the pattern of the large-scale Birkeland currents coupling the ionosphere and magnetosphere: a two-component electron plasma sheet is more likely to be observed mapping to lower latitudes in the pre-midnight sector and to higher latitudes in the post-midnight sector. This suggests that the cold electrons have their source in the ionosphere, rather than in the solar wind, and are transported to the plasma sheet via downward field-aligned currents (Iijima and Potemra, 1978; A. P. Walsh et al., 2013).

Substorms and other modes

Southward-pointing IMF results in a circulation of magnetic flux in the magnetosphere, with dayside reconnection opening flux, transport of open flux into the lobes, nightside reconnection closing flux to form the plasma sheet, and return of flux to the dayside (Dungey, 1961). The magnetosphere is driven into many modes of response by magnetic reconnection with the solar wind IMF. These include substorms, magnetic storms, steady magnetospheric convection and sawtooth events, as well as smaller responses such as pseudo-breakups and poleward boundary intensifications (for a full review of these modes, see e.g. McPherron et al., 2008). Such events with enhanced sunward convection in the plasma sheet will dominate over certain asymmetries discussed above, such as the quiet-time dawn-dusk thermal pressure asymmetry (Spence and Kivelson, 1990). The most common and best-studied mode of response is the substorm. Numerous researchers have found asymmetries in the average substorm onset location, with the most likely onset shifted duskward to 23:00 MLT (Frey and Mende, 2007, and references therein). The onset MLT of substorms is strongly influenced by the IMF clock angle, which shifts the dayside reconnection geometry in such a way as to create a "tilted" configuration away from direct noon-midnight reconnection (Østgaard et al., 2011). Internal factors, such as solar illumination and its effects on ionospheric conductivity, can also influence the average onset location in latitude and local time (Wang et al., 2005; see also Sect. 3.2). Sawtooth events also display a dawn-dusk asymmetry, with intense tail reconnection signatures occurring pre-midnight (Brambles et al., 2011). The sawtooth asymmetry is attributed to an ion outflow asymmetry, which is in turn a result of the ionospheric conductance asymmetry.
Many dynamic signatures of enhanced convection, especially during substorms, also display a pre-midnight occurrence peak. These include magnetic reconnection, bursty bulk flows, transient dipolarisations and energetic particle bursts and injections, described in more detail below. Recently Nagai et al. (2013) surveyed a large data set including Geotail observations from 1996 to 2012 in the area of −32 < X_AGSM < −18 R_E and |Y_AGSM| < 20 R_E. Active reconnection events were selected using the following criteria: (1) |B_X| < 10 nT to select plasma sheet samples, (2) V_iX < −500 km s⁻¹ and B_Z < 0 to select tailward fast flows, (3) earthward flow at V_iX > 300 km s⁻¹ and B_Z > 0 observed within 10 min after the tailward flow to select the flow reversals, and (4) V_eY < −1000 km s⁻¹ during at least one sample within a 48 s long interval around the flow reversal instant to select active reconnection in which electrons undergo substantial acceleration; 30 active reconnection events were selected. The analysis of the occurrence rate distribution showed that events may be found in the sector −6 < Y_AGSM < 8 R_E. The occurrence rate is considerably higher in the pre-midnight sector 0 < Y_AGSM < 8 R_E. Slavin et al. (2005) used Cluster observations to study travelling compression regions (TCRs), which are commonly accepted to be remote signatures of a reconnection outflow in the magnetotail lobes at distances −19 < X < −11 R_E, and noticed a dawn-dusk asymmetry in the event distribution in the XY_AGSM plane, with a considerably larger number of events observed in the pre-midnight sector. Similarly, Imber et al. (2011) inferred the dawn-dusk location of the reconnection site from statistical studies of THEMIS observations of flux ropes and TCRs during the time period December 2008 to April 2009. Magnetic signatures, including a bipolar variation in B_Z passing through B_Z = 0 and an enhancement in B_Y at B_Z = 0, were used to identify a flux rope. A bipolar ΔB_Z signature relative to the background field and a total field variation with ΔB/B > 1 % were used to identify TCRs; 87 events (both flux ropes and TCRs) were identified. Plotting the spacecraft location for all the events in the XY_AGSM plane, Imber et al. (2011) showed an obvious dawn-dusk asymmetry, with 81 % of events observed in the dusk sector. The event probability (number of events per unit time) also showed a strong duskward asymmetry: the peak of the Gaussian fit to the data is at Y_AGSM = 7.0 R_E and the full width at half maximum is 15.5 R_E. In their survey of magnetotail current sheet crossings, Rong et al. (2011) found that 329 out of 5992 current sheet crossings by the Cluster spacecraft in 2001, 2003 and 2004 had a negative B_Z component. These negative B_Z current sheet crossings were predominantly found to occur at azimuths of 110° to 210° and had field curvature directions pointing away from the Earth. Given that B_Z is expected to be positive on closed magnetic field lines in the magnetotail plasma sheet, Rong et al. (2011) interpreted these observations as showing that reconnection was "more inclined to be triggered in current sheet regions with MLT being ~21:00-01:00", thus showing a clear dawn-dusk asymmetry in the distance downtail at which reconnection occurs.
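To make the multi-step Nagai et al. (2013) selection concrete, the sketch below applies the four criteria quoted above to uniformly sampled time series. It is a minimal illustration under stated assumptions (array inputs in nT, km s⁻¹ and seconds); the windowing details of the original survey may well differ.

```python
import numpy as np

def select_reconnection_candidates(t, Bx, Bz, Vix, Vey):
    """Sketch of a Nagai et al. (2013)-style selection of active reconnection
    events. Inputs: time t [s], magnetic field components [nT], ion and
    electron velocities [km/s]. Thresholds follow the criteria quoted in
    the text; the windowing here is simplified."""
    in_plasma_sheet = np.abs(Bx) < 10.0                 # criterion (1)
    tailward = (Vix < -500.0) & (Bz < 0.0)              # criterion (2)
    earthward = (Vix > 300.0) & (Bz > 0.0)              # part of criterion (3)
    fast_electron_jet = Vey < -1000.0                   # criterion (4)

    events = []
    for i in np.flatnonzero(in_plasma_sheet & tailward):
        # criterion (3): earthward flow within 10 min after the tailward flow
        after = (t > t[i]) & (t <= t[i] + 600.0)
        reversal = np.flatnonzero(after & earthward)
        if reversal.size == 0:
            continue
        j = reversal[0]
        # criterion (4): strong electron flow within +/- 48 s of the reversal
        near = np.abs(t - t[j]) <= 48.0
        if np.any(near & fast_electron_jet):
            events.append((t[i], t[j]))
    return events
```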
Reconnection signatures observed in the distant tail and at lunar orbit also exhibit a dawn-dusk asymmetry. Slavin et al. (1985) studied average and substorm conditions in the distant magnetotail using ISEE-3 data. It was found that negative B_Z and fast tailward flow were predominantly observed in the pre-midnight sector (0 < Y_GSM < 10 R_E at −100 > X > −180 R_E). Further tailward, at X < −180 R_E, the region of predominant B_Z < 0 and fast tailward flow expands azimuthally to a broad region between Y_GSM = 0 and ~20 R_E. It should be noted, though, that at those geocentric distances the GSM coordinate system may not be appropriate, and the broad distribution of the B_Z and V_X maxima may be an apparent effect of averaging over different solar wind/IMF conditions. Recently, reconnection outflows and plasmoid observations by the two ARTEMIS spacecraft in lunar orbit have been statistically studied (Li et al., 2014). That study revealed a dawn-dusk asymmetry, with the occurrence rate of plasmoid observations higher within 2 < Y_AGSM < 12 R_E. The occurrence distribution has a similar but broader pattern compared with previous studies of plasmoids or reconnection flow reversals in the near-Earth region (Imber et al., 2011; Nagai et al., 2013).

Fast flows in the plasma sheet

Fast plasma flows in the magnetotail above a "background" convection velocity are often associated with substorm activity, both as a key means by which closed magnetic flux can be transported towards the inner magnetosphere and as a possible mechanism for the triggering of instabilities in the inner magnetosphere that lead to substorm onset (Baumjohann et al., 1990). Short (sub-minute) bursts of enhanced plasma flow (termed flow bursts) are most likely generated by impulsive magnetotail reconnection (see Sect. 2.4.1). The flow bursts are grouped into ~10 min events known as bursty bulk flows (BBFs) (Angelopoulos et al., 1992), although these terms are sometimes used interchangeably throughout the literature. Numerous statistical studies of BBFs, conducted during the last two decades, have reached rather contradictory conclusions on asymmetries in the azimuthal (MLT) dependence of the BBF distribution. Comparison between them is complicated by the use of different selection criteria to identify individual events. A set of studies applying selection criteria based upon either the magnetic field ((B_X² + B_Y²)^(1/2) < 15 nT) or plasma β > 0.5 to select plasma sheet samples, and the flow velocity magnitude (|V_X| > 400 km s⁻¹) to select flow bursts (FBs) and BBF events, did not reveal a pronounced dawn-dusk anisotropy in the event distribution (Baumjohann et al., 1990; Angelopoulos et al., 1994). Some asymmetry in velocity magnitudes, with faster flows observed in the pre-midnight sector, was considered apparent and attributed to orbital biases (Nakamura et al., 1991). On the other hand, studies of Geotail, WIND and THEMIS data with selection criteria differentiating convective flows (i.e. perpendicular to the instantaneous magnetic field) from field-aligned beams resulted in a pronounced asymmetry in the convective flow distributions and symmetric field-aligned beam distributions (Nagai et al., 1998; Raj et al., 2002; McPherron et al., 2011). Statistical analysis of the plasma bulk velocity observed by Cluster during neutral sheet (|B_X| < 5 nT) crossings at radial distances R ≈ 18 R_E revealed dawn-dusk asymmetries in the horizontal velocity magnitude (V_eq = (V_X² + V_Y²)^(1/2)), with larger values (V_eq > 400 km s⁻¹) in the pre-midnight sector of the magnetotail within 0 < Y_AGSM < 10 R_E. The average equatorial velocity in the post-midnight sector did not exceed 200 km s⁻¹.
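The β- and velocity-based thresholds used by the classical BBF surveys translate directly into code. A minimal sketch, assuming densities in cm⁻³, temperatures in keV, fields in nT and velocities in km s⁻¹:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [H/m]

def plasma_beta(n_cc, T_keV, B_nT):
    """Plasma beta = thermal pressure / magnetic pressure."""
    p_th = (n_cc * 1e6) * (T_keV * 1.602e-16)    # thermal pressure [Pa]
    p_B = (B_nT * 1e-9) ** 2 / (2.0 * MU0)       # magnetic pressure [Pa]
    return p_th / p_B

def flag_flow_bursts(beta, Vx_kms):
    """Thresholds quoted in the text: beta > 0.5 selects plasma sheet
    samples, |Vx| > 400 km/s selects flow-burst candidates."""
    return (beta > 0.5) & (np.abs(Vx_kms) > 400.0)

# Example: a typical central plasma sheet sample is comfortably beta >> 0.5.
print(plasma_beta(0.3, 4.0, 5.0))   # ~19 for 0.3 cm^-3, 4 keV, 5 nT
```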
Conversely, a study of a comprehensive data set that includes 15 years of Geotail, Cluster and THEMIS observations in the magnetotail, applying the criterion β > 0.5 to select plasma sheet samples, revealed no asymmetry tailward of X = −15 R_E in the aberrated (AGSM) coordinate system. Closer to Earth, the average convection at velocities smaller than 200 km s⁻¹ shows some duskward asymmetry. This asymmetry was attributed to the ion gradient drift close to the inner edge of the plasma sheet (see also Hori et al., 2000). The distribution of higher velocities remains fairly symmetric with respect to midnight in AGSM coordinates. The dawn-dusk asymmetry in the magnetotail plasma flows also depends on the level and character of geomagnetic activity. Recent studies of Geotail and THEMIS observations over a span of 14 years, comparing the convection patterns observed during periods of steady magnetospheric convection (SMC) and substorm phases, have revealed that the probability of earthward fast flows (V_XY > 200 km s⁻¹) is fairly symmetric with respect to midnight for SMC but slightly asymmetric, with a peak at ~23:00 MLT, during substorm growth phases. This duskward asymmetry vanishes during the expansion and recovery substorm phases (Kissinger et al., 2012). To summarise, the statistical studies of BBFs and plasma convection in the magnetotail conducted so far do not provide a definitive answer to the question of dawn-dusk asymmetry in the flow pattern. The results strongly depend on the selection criteria. More specifically, studies with criteria based upon the perpendicular velocity tend to show a duskward asymmetry. Conversely, studies based upon |B_XY|- and β-related criteria typically result in a fairly symmetric flow pattern. Another important issue is the selection of fast flow events and their differentiation from the background convection. It was noticed in observations that BBFs (flow bursts) are typically associated with (1) an increased northward magnetic field component (B_Z) and (2) a decrease in the plasma density (Angelopoulos et al., 1992, 1994; Ohtani et al., 2004). These characteristics, attributed to so-called "plasma bubbles" (e.g. Chen and Wolf, 1993; Wolf et al., 2009), may be used to differentiate transient BBFs from the steady convection. The rapid increase in B_Z and simultaneous decrease in the plasma density were recently found to be characteristics of dipolarisation fronts (Runov et al., 2011; Liu et al., 2013), which are discussed in the next section.

Transient dipolarisations and dipolarisation fronts

Russell and McPherron (1973) first reported observations of front-like, spatially and temporally localised, sharp increases in the northward magnetic field component B_Z. Timing of the two-point observations by the OGO-5 (at X = −8.2 R_E) and ATS-1 (at X = −5.6 R_E) spacecraft indicated earthward propagation of this magnetic structure. Later it was found that the B_Z enhancement is accompanied by BBFs (Angelopoulos et al., 1992; Ohtani et al., 2004). The enhanced V × B electric field (magnetic flux transfer rate) appeared in the form of ~100 s long pulses, referred to as rapid flux transfer events (Schödel et al., 2001). For such structures, the B_Z enhancements are spatial structures travelling with the flow.
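The rapid-flux-transfer measure is essentially the dawn-dusk component of −V × B. A minimal sketch follows; the 2 mV m⁻¹ threshold is an assumed, illustrative value in the spirit of Schödel et al. (2001), not necessarily their exact criterion.

```python
import numpy as np

def flux_transfer_rate_mVm(Vx_kms, Bz_nT):
    """Dominant term of the dawn-dusk electric field, Ey ~ Vx * Bz.
    (km/s) x (nT) = 1e-3 mV/m, hence the conversion factor."""
    return Vx_kms * Bz_nT * 1e-3

def rapid_flux_transfer_samples(Ey_mVm, threshold=2.0):
    """Flag samples whose earthward magnetic flux transport exceeds an
    assumed threshold [mV/m]."""
    return np.flatnonzero(np.asarray(Ey_mVm) > threshold)

print(flux_transfer_rate_mVm(400.0, 10.0))   # 4 mV/m: a strong RFT-like pulse
```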
At other times, particularly in the inner magnetosphere, plasma flows are not observed during the B_Z enhancements; in these cases the B_Z enhancements do not contribute to local flux transport and are the result of non-local currents from a substorm current wedge (e.g. McPherron et al., 1973), most often tailward of the observation point (a remote-sensing effect; see, e.g., Nagai, 1982). Both types of events have been intensely studied in the past under various names, such as nightside flux transfer events (e.g. Sergeev et al., 1992), flux pileup (Hesse and Birn, 1991; Shiokawa et al., 1997; Baumjohann et al., 1999) and current disruption (e.g. Lui, 1996). Treated as flowing spatial structures, the sharp B_Z enhancements have been referred to as "dipolarisation fronts" (e.g. Nakamura et al., 2002; Runov et al., 2009). It has been shown that the earthward-propagating dipolarisation fronts are associated with a rapid decrease in the plasma density and are embedded in the earthward plasma flow (Runov et al., 2009, 2011). The fronts are thin boundaries (with a thickness of the order of an ion thermal gyroradius) separating underpopulated dipolarised flux tubes, often referred to as "plasma bubbles" (e.g. Wolf et al., 2009), from the ambient plasma sheet population. Most likely, the dipolarisation fronts are generated in the course of impulsive magnetic reconnection in the mid- or near-magnetotail (see e.g. Runov et al., 2012, and references therein). Alternatively, the fronts may appear as a result of a kinetic interchange instability in the near-Earth plasma sheet (Pritchett and Coroniti, 2010). Recently, Liu et al. (2013) statistically studied several hundred dipolarisation fronts observed by the THEMIS probes in the plasma sheet at −25 < X < −7 R_E and at a variety of azimuthal (Y) positions. The events were selected using a set of criteria based mainly upon the magnetic field and the rate of magnetic field changes. The selected events may, therefore, include those of all categories discussed above. The analysis showed, however, that the increase in B_Z was associated with a rapid decrease in plasma density and was embedded in earthward plasma flow; thus, the majority of selected events were dipolarisation fronts. Figure 8 shows (a) the distribution of selected events and (b) the occurrence rate of the dipolarisation fronts in the XY_GSM plane. The event distribution shows a pronounced dawn-dusk asymmetry, with more events observed in the pre-midnight sector within 0 < Y < 8 R_E. The occurrence rate exhibits a maximum in the 2 < Y < 6 R_E bins over the range −20 < X < −7 R_E. Dipolarisation fronts are typically embedded in fast earthward flows (BBFs). However, as was shown in the previous section, contrary to that of the dipolarisation fronts, the azimuthal distribution of the BBF occurrence rate does not display any pronounced dawn-dusk asymmetry. Nonetheless, because of the large B_Z, the magnetic flux is transported mainly by the dipolarisation fronts (Liu et al., 2013). Thus, the magnetic flux transport is strongly asymmetric with respect to the midnight meridian, with the maximum of the occurrence rate distribution between 0 < Y < 8 R_E. This sector of the magnetotail is also the area of maximum probability of magnetotail reconnection (see Sect. 2.4.1).
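Since the front thickness quoted above is of the order of the ion thermal gyroradius, a quick estimate is useful. The numbers below (5 keV protons in a 10 nT field) are illustrative assumptions:

```python
import numpy as np

def ion_gyroradius_km(T_keV, B_nT, mass_amu=1.0):
    """Thermal gyroradius r = m v_th / (q B), with v_th = sqrt(2 kT / m)."""
    m = mass_amu * 1.6726e-27                      # ion mass [kg]
    v_th = np.sqrt(2.0 * T_keV * 1.602e-16 / m)    # thermal speed [m/s]
    return m * v_th / (1.602e-19 * B_nT * 1e-9) / 1e3

print(f"{ion_gyroradius_km(5.0, 10.0):.0f} km")    # ~1000 km for 5 keV, 10 nT
```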
Energetic particle injections

Observations of energetic particles at geosynchronous orbit (GEO) revealed sudden increases in the particle fluxes that are typically observed during enhanced geomagnetic activity (substorms and storms) and referred to as "energetic particle injections" (e.g. McIlwain, 1974; Mauk and Meng, 1987; Birn et al., 1997a, 1998). The injections observed at GEO fall into two distinct categories: dispersionless and dispersed. In the former case, the enhancement in particle fluxes at different energies occurs roughly simultaneously, whereas in the latter case a pronounced delay between the flux enhancements at different energies is observed (see e.g. Birn et al., 1997a). A commonly accepted explanation for these two types of injections is that dispersionless injections are observed by a satellite situated in or near the source of accelerated particles, whereas dispersed injections are observed by a satellite that is azimuthally distant from the injection source region, so that gradient and curvature drifts are responsible for the delay in the arrival times of particles of different energies (e.g. Anderson and Takahashi, 2000; Zaharia et al., 2000). A pronounced dawn-dusk asymmetry has been found in the spatial distributions of ion and electron injections observed at GEO. It has been found that the local time (LT) distribution of the occurrence frequency of high-energy (> 2 MeV) electron flux increase events is asymmetric with respect to midnight, with a larger rate in the dusk sector (Nagai, 1982). The dawn-dusk asymmetry in the MeV electron fluxes was explained by an increase in ion pressure in the duskside inner magnetosphere during enhanced convection, which leads to a magnetic field decrease due to the diamagnetic effect and, therefore, to an adiabatic decrease in electron flux. Lopez et al. (1990) studied dispersionless ion injections observed by AMPTE as a function of local time and radial distance. They found an occurrence peak near midnight, with an asymmetry towards pre-midnight local times. A similar study, but using electron injection measurements from the CRRES satellite, was conducted by Friedel et al. (1996). Their analysis showed that the region of dispersionless injections is sharply bounded in magnetic local time and can have a radial extent of several R_E. Birn et al. (1997a) studied properties of the dispersionless injections observed at GEO by the Los Alamos 1989-046 satellite, situated near the magnetic equator in the midnight sector of the magnetotail. Their analysis revealed a significant asymmetry in the injection properties with respect to magnetic local time (MLT): proton-only injections are predominantly observed in the evening and pre-midnight sectors (18:00-00:00 MLT), whereas electron-only injections are observed in the post-midnight sector (00:00-05:00 MLT). Near midnight, the probability of observing both ion and electron injections maximises. Another finding is that the probability of observing first proton and then electron injections maximises between 21:00 and 23:00 MLT, whereas the probability of observing first electron and then proton injections is larger at midnight and in the post-midnight sector (23:00-03:00 MLT). The azimuthal offset of ion and electron dispersionless injections was confirmed by simultaneous observations by two closely spaced geosynchronous satellites. Similar results were also obtained by Sergeev et al. (2013), who compared MLT distributions of proton and electron dispersionless injections and auroral streamers.
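The dispersed-injection picture introduced at the start of this subsection can be illustrated with the textbook dipole-field estimate of the equatorial gradient-curvature drift period, T_d ≈ 44/(L · E[MeV]) min (nonrelativistic, 90° pitch angle). This is a standard approximation, not the field model used in the cited studies; the L value, energies and 6 h MLT separation below are assumed values.

```python
def drift_delay_minutes(E_keV, L=6.6, dMLT_hours=6.0):
    """Arrival delay of particles that drift dMLT_hours of azimuth from the
    injection site, using T_d ~ 44/(L*E[MeV]) minutes for a full drift orbit
    (dipole approximation, equatorially mirroring, nonrelativistic)."""
    T_drift = 44.0 / (L * (E_keV / 1e3))    # full drift period [min]
    return T_drift * (dMLT_hours / 24.0)

for E_keV in (50.0, 100.0, 300.0):
    print(f"{E_keV:5.0f} keV arrives after {drift_delay_minutes(E_keV):5.1f} min")
```

Lower-energy particles arrive tens of minutes after the higher-energy ones, which is the energy dispersion a satellite azimuthally distant from the source observes.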
It was shown that proton (electron) injections are seen exclusively at negative (positive) ΔMLT, where ΔMLT is the difference between the MLTs of the injection and streamer observations (MLT_sc − MLT_str). Test particle tracing in magnetic and electric fields resulting from MHD simulations of magnetotail reconnection also showed that ion and electron dispersionless injection boundaries spread azimuthally duskward and dawnward, respectively (Birn et al., 1997b, 1998). It is important to emphasise that dispersionless injections were studied in the works discussed above. Thus, the spatial dawn-dusk asymmetry in ion and electron injections cannot be attributed to the gradient and curvature drifts in the background quasi-dipole field that would lead to energy dispersion. Recent studies, both observation- and test-particle-simulation-based, have revealed that the dawn-dusk asymmetry appears within the fast-flow channel, where B_Z is larger than in the surrounding plasma sheet and where, therefore, in the steady-state reference frame, the electric field (mainly V × B) is enhanced (Birn et al., 2012; Gabrielse et al., 2012; Runov et al., 2013). Although this asymmetry is due to the duskward (dawnward) drift of ions (electrons) within the channel, because of the finite cross-tail size of the channel (1-3 R_E, Nakamura et al., 2004) it does not lead to significant energy dispersion. Injections have also been observed in the outer magnetotail. Bursts of high-energy protons and electrons with durations varying from 100 s to hundreds of minutes were observed by IMP-7 at geocentric distances of ~35 R_E (e.g. Sarris et al., 1976). Proton bursts were observed equally frequently on the dawn- and dusksides of the magnetotail. However, a strong dawn-dusk asymmetry was revealed in the distribution of the intense proton bursts (> 500 (cm² s sr MeV)⁻¹), with the majority of these occurring in the dusk magnetotail. To our knowledge, no dawn-dusk asymmetry in high-energy electron bursts has been found in the outer magnetotail. THEMIS observations of ion and electron dispersionless injections at geocentric distances from 6 to ~20 R_E were recently statistically studied by Gabrielse et al. (2014). That study demonstrated (see Fig. 9) that the injections observed far beyond geosynchronous orbit exhibit a pronounced dawn-dusk asymmetry. Specifically, (1) at all distances both ion and electron injections are more frequently observed in the pre-midnight sector, with a peak in probability at ~23:00 MLT; (2) at radial distances larger than 12 R_E (the outer region) the probabilities of detecting ion and electron injections are quite similar, with the electron injection probability offset slightly dawnward of the 23:00 MLT peak; and (3) within 12 R_E (the inner region) the probability distributions for both ion and electron injections are broader than in the outer region, with the electron injection probability shifted notably towards dawn from the 23:00 MLT peak.

Magnetotail asymmetries - summary

Numerous observations suggest that dynamic processes in the magnetotail occur predominantly on the duskside and are typically localised within several R_E in the pre-midnight sector (Table 4). The localisation of convective fast flows, dipolarisation fronts and dispersionless particle injections, plasmoids and TCRs can be understood by considering these events as direct or indirect consequences of magnetic field energy release via magnetotail reconnection.
Reconnection, in turn, is more probable within the pre-midnight sector because the cross-tail current density is higher and the current sheet is thinner there. What determines the reduced current sheet thickness in the pre-midnight sector remains an open question.

Inner magnetosphere asymmetries

The inner magnetosphere is the region of the magnetosphere closest to the Earth, reaching out from the ionosphere to the magnetopause on the dayside and to ~8-10 R_E on the nightside (exclusive of the polar regions). The structure and dynamics of the inner magnetosphere are driven by input from the ionosphere and magnetotail and by the interaction of this material with the dipole magnetic field lines. Energetic particles are trapped in this region and undergo a variety of drift motions due to the gradient and curvature of the magnetic field (e.g. Schulz and Lanzerotti, 1974), with electrons drifting eastward/dawnward and ions westward/duskward. We detail asymmetries that occur in the radiation belts, ring current and plasmasphere regions. Many are likely the result of a zoo of wave-particle interactions, which are discussed separately.

Ring current asymmetries

Dusk-dawn asymmetries in the ring current have been known since 1918, when Chapman (1918) observed a more pronounced disturbance in the north-south (H) component of Earth's magnetic field at dusk. The stronger storm-time disturbance at dusk is generally attributed to the partial ring current (Harel et al., 1981). Love and Gannon (2009) found the difference between the dusk and dawn disturbances to be linearly proportional to the Dst index. Modelling of the storm-time disturbance of Earth's magnetic field using satellite-based magnetometer data, for events with a Dst minimum of at least −65 nT, also found a stronger disturbance at dusk. Newell and Gjerloev (2012) derived partial ring current indices from ground magnetometer stations centred at four local times: SMR-00, SMR-06, SMR-12 and SMR-18. In a superposed epoch analysis of 125 storms, they found a consistently stronger perturbation at dusk, as seen in Fig. 10. Using an enhanced TS04 model, Shi et al. (2008) modelled the perturbation in the H component of the low- to mid-latitude geomagnetic field to determine the contributions of various currents, including the region 1 and region 2 field-aligned currents, which close through the Chapman-Ferraro current at the magnetopause and through the partial ring current, respectively. For a weak partial ring current, they found a day-night asymmetry, with a negative H perturbation around noon and a positive H perturbation around midnight, primarily caused by the region 1 field-aligned currents. During the storm main phase, the partial ring current tended to be stronger, pushing the negative H perturbations toward dusk and yielding a dawn-dusk asymmetry. Solar wind dynamic pressure enhancements tend to increase the partial ring current and field-aligned currents, resulting in nearly instantaneous measurements of the dawn-dusk asymmetry in H perturbations. The strength of the partial ring current during a storm depends on preconditioning based on northward or southward IMF B_Z. Using simulations, Ebihara and Ejiri (2003) explained that the asymmetry in the magnetic field causes protons with small pitch angles to drift toward earlier local times than protons with larger pitch angles. Ring current ions move along equipotential surfaces while the first and second adiabatic invariants are conserved, leading to adiabatic heating toward dusk and cooling toward dawn (Milillo et al., 1996).
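The opposite azimuthal drifts invoked throughout this section follow from the standard guiding-centre expression (e.g. Schulz and Lanzerotti, 1974); writing it out makes the charge dependence explicit:

\[
  \mathbf{v}_{\nabla+\mathrm{curv}}
  \;=\;
  \frac{m\left(v_{\parallel}^{2} + \tfrac{1}{2} v_{\perp}^{2}\right)}{q\,B^{3}}
  \,\bigl(\mathbf{B}\times\nabla B\bigr).
\]

With B northward and ∇B pointing earthward in the quasi-dipolar inner magnetosphere, B × ∇B points westward, so positive ions (q > 0) drift westward/duskward while electrons (q < 0) drift eastward/dawnward.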
Skewed equatorial electric fields produced by the closure of the partial ring current during active periods cause the peak in the proton distribution function to occur between midnight and dawn, as observed in ENA images such as Fig. 11.

Figure 11. Images from two energy channels, 27-39 keV (top row) and 50-60 keV (bottom row), from the High Energy Neutral Atom (HENA) instrument on the IMAGE mission at two times during the 12 August 2000 geomagnetic storm: 08:00 UT (just before minimum Dst, left column) and 11:00 UT (just after minimum Dst, right column). The limb of the Earth and dipole field lines (L = 4 and L = 8) at 00:00, 06:00, 12:00 and 18:00 MLT are shown in white. The proton distribution peak occurs in the midnight-dawn sector due to skewed equatorial electric fields produced by the closure of the partial ring current during active periods. (Adapted from Fig. 7 in Fok et al., 2003.)

Radiation belt asymmetries

Dawn-dusk asymmetries in radiation belt particle fluxes are not well studied; instead, much research has focused on the source and loss processes that preferentially act at certain local times (see recent reviews by Millan and Thorne, 2007, and Thorne, 2010, for example). Many of these source and loss processes are related to wave-particle interactions and hence occur in the regions to be described in Sect. 2.5.4. Changes in radiation belt particle fluxes can also be observed, not as a result of particle acceleration or loss to the atmosphere, but instead through the displacement of the drift shells on which the particles travel. This displacement depends on the geometry of the magnetic field in the inner magnetosphere and hence on the strength of the ring current: the so-called Dst effect (McIlwain, 1966; Williams et al., 1968). Thus, any asymmetries in ring current strength can alter the drift paths of radiation belt electrons, which manifests as an asymmetry in electron flux. There is also evidence for a dawn-dusk asymmetry in radiation belt electron flux caused by substorm-related changes in the inner magnetospheric magnetic field: a more tail-like magnetic field in the dusk sector shifts the drift path of energetic electrons, effectively moving the radiation belt to lower latitudes (Lazutin, 2012).

Plasmasphere asymmetries

The upward extension of the cold, dense plasma from the Earth's ionosphere forms the plasmasphere. Motion of the plasmaspheric population is governed by an electric field made up of two potential components: corotation and convection. The first potential dominates close to the Earth and is an effect of Earth's own rotation. The second comes from the coupling of the solar wind and the magnetosphere and is a result of the sunward return of plasma sheet flow. Figure 12 shows how cold particles drift under such potentials. During geomagnetically quiet times, the plasmaspheric particles travel on closed E × B drift shells around the Earth (within the separatrix), maintaining a fairly steady population. During disturbed times, when dayside reconnection increases, the convection potential is enhanced. An increase in the convection potential will cause an inward motion of the edge of the plasmasphere, or plasmapause, and erosion of the outer material (Grebowsky, 1970; Chen and Wolf, 1972; Carpenter et al., 1993). Erosion of the outer plasma forms a sunward convecting drainage plume, or plasmaspheric plume.
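The competition between the two potentials can be made quantitative with a Volland-Stern-like sketch: a corotation potential plus a uniform dawn-dusk convection field E0. The dusk-side stagnation point of the combined potential marks roughly where the plasmapause sits, and it moves earthward as E0 grows. This is a minimal illustration, not the model used in the cited papers, and the E0 values are assumed.

```python
import numpy as np

OMEGA = 7.292e-5   # Earth's rotation rate [rad/s]
B0 = 3.11e-5       # equatorial surface magnetic field [T]
RE = 6.371e6       # Earth radius [m]

def plasmapause_radius_RE(E0_mVm):
    """Dusk stagnation distance r_s = sqrt(omega*B0*RE^3 / E0) of the
    combined potential  Phi = -omega*B0*RE^3/r - E0*r*sin(phi)."""
    E0 = E0_mVm * 1e-3                           # convection field [V/m]
    return np.sqrt(OMEGA * B0 * RE**3 / E0) / RE

for E0 in (0.1, 0.3, 1.0):                       # quiet -> disturbed [mV/m]
    print(f"E0 = {E0:.1f} mV/m -> r_s ~ {plasmapause_radius_RE(E0):.1f} R_E")
```

Stronger convection (larger E0) shrinks the region of closed drift paths, which is the inward plasmapause motion and erosion described above.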
Recent spacecraft measurements with Cluster and THEMIS, as well as imaging from IMAGE, have provided insight into the morphology of plumes. During storm onset the dayside plasmasphere surges sunward over a wide extent in local time. As time progresses during the disturbance, the extension narrows on the dawn side while staying relatively stationary in its dusk extension (e.g. Goldstein et al., 2005). When dayside reconnection decreases, the narrow plume typically rotates eastward and wraps itself around the plasmasphere (Goldstein et al., 2004; Spasojević et al., 2004). The extension of cold dense plasma from the plume transports a large amount of mass to the outer magnetosphere. Borovsky and Denton (2008) estimate that 2 × 10³¹ ions (34 tonnes of protons) are transported via plumes over the life of a storm. Spatially, the plume extends sunward in the dusk sector of the dayside magnetosphere (Chen and Moore, 2006; Borovsky and Denton, 2008; Darrouzet et al., 2008), introducing a dawn-dusk asymmetry in the mass loading of the dayside outer magnetosphere. The effect of this asymmetry on solar wind-magnetosphere coupling is discussed in Sect. 3.1.

Inner magnetosphere wave populations

Inner magnetospheric wave populations also exhibit dawn-dusk asymmetries. The spatial distribution of some inner magnetosphere wave populations is illustrated in Fig. 13, reproduced from Thorne (2010). Whistler mode chorus waves (Tsurutani and Smith, 1974) are typically found on the dawn side of the magnetosphere, just outside the plasmapause, and are linked to cyclotron resonant excitation of injected plasma sheet electrons (Li et al., 2008b). Thus the dawn-dusk asymmetry can be explained by considering the drift paths of the injected electrons (see Sects. 2.4.4 and 2.5.2). Electrostatic electron cyclotron harmonic waves are also linked to the injection of plasma sheet electrons into the inner magnetosphere (Horne and Thorne, 2000) and have a similar spatial distribution (Meredith et al., 2009). Plasmaspheric hiss is another whistler-mode emission that is mostly observed within the plasmasphere. Hiss also exhibits a dawn-dusk asymmetry: while average amplitudes of hiss are strongest on the dayside, the emission extends into the pre-midnight sector at higher amplitudes than those observed in the post-midnight sector (Meredith et al., 2004). The generation of plasmaspheric hiss has recently been linked to the presence of chorus waves (Chum and Santolík, 2005; Bortnik et al., 2008, 2009), so one might expect the two to have the same asymmetry. However, ray-tracing simulations have suggested that chorus-mode waves that are generated on the dayside can propagate eastwards and generate hiss in the dusk sector (Chen et al., 2009). Electromagnetic ion cyclotron (EMIC) waves are excited as a result of temperature anisotropy in ring current ions and also exhibit a dawn-dusk asymmetry. They typically occur in two frequency bands, just below the hydrogen and helium gyrofrequencies, respectively. The helium band waves dominate at dusk and are found between 8 and 12 R_E, whereas at dawn the hydrogen band waves dominate and are observed between 10 and 12 R_E (Anderson et al., 1992; Min et al., 2012). EMIC wave power is typically larger at dusk than at dawn (Min et al., 2012). EMIC waves have also been observed in the plasmaspheric plumes in the afternoon sector (Morley et al., 2009).
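The two EMIC bands sit just below the local ion gyrofrequencies, f_c = qB/(2πm), which are trivial to compute; the ~150 nT field below (roughly the dipole value near L ≈ 6) is an illustrative assumption.

```python
import numpy as np

def gyrofrequency_Hz(B_nT, mass_amu, charge=1):
    """Ion gyrofrequency f_c = q*B / (2*pi*m); B given in nT."""
    q = charge * 1.602e-19
    m = mass_amu * 1.6726e-27
    return q * (B_nT * 1e-9) / (2.0 * np.pi * m)

B = 150.0   # assumed field strength [nT], roughly dipolar near L ~ 6
print(f"H+  gyrofrequency: {gyrofrequency_Hz(B, 1.0):.2f} Hz")   # ~2.3 Hz
print(f"He+ gyrofrequency: {gyrofrequency_Hz(B, 4.0):.2f} Hz")   # ~0.6 Hz
```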
Plumes can extend over a wide range of L-shells, and wave-particle interactions within them have been suggested as a source of asymmetric precipitation of ring current and radiation belt particles (Borovsky and Denton, 2009). While EMIC waves may scatter energetic particles during individual storms (e.g. Yuan et al., 2012), statistically EMIC waves are present only 10 % of the time in plasmaspheric plumes (Usanova et al., 2013). Equatorial magnetosonic waves are another class of whistler-mode emission that is strongly confined to the equatorial plane. They have frequencies partway between the proton gyrofrequency and the lower hybrid frequency (e.g. Santolík et al., 2004). Equatorial magnetosonic waves have been observed both within and outside the plasmapause. Inside the plasmapause they are most intense at dusk. Outside the plasmapause they are strongest in the dawn sector (Ma et al., 2013). The spatial distribution of the whistler-mode chorus waves shown in Fig. 13 can be compared with the DMSP observations of diffuse aurora electron precipitation in Fig. 14 (top) (after Wing et al., 2013). The diffuse electron aurora has a strong dawn-dusk asymmetry and can be observed mainly between 22:00 and 10:00 MLT. As the plasma sheet electrons E × B convect earthward, they also curvature and gradient drift eastward toward dawn. The field-aligned component of these electrons is quickly lost through the loss cone, but they are replenished by pitch-angle scattering. A leading mechanism for pitch-angle scattering is the interaction of very low frequency (VLF) whistler-mode chorus waves with electrons (e.g. Thorne, 2010; Reeves et al., 2009; Summers et al., 1998). Studies have shown that whistler-mode chorus waves are excited in the region spanning pre-midnight to noon. At around 10:00 MLT the diffuse electron flux decreases, which may suggest that the whistler-mode chorus waves start weakening. In the magnetosphere, the electrons continue to drift eastward, circling the Earth, but they are only observed in the ionosphere when and where there are whistler-mode chorus waves to pitch-angle scatter them. Contrast this with the asymmetry in monoenergetic auroral precipitation (Fig. 14, bottom), which peaks in the pre-midnight sector. This distribution will be discussed in more detail in Sect. 3.2.

Asymmetries in the thermosphere and ionosphere

The ionosphere has often been regarded as a projection of magnetospheric processes that are, in turn, driven by the solar wind, with the aurora as the most prominent manifestation. However, the ionosphere, and its dawn-dusk asymmetries in particular, can also have an impact on the magnetosphere. It is also important to bear in mind that in the thermosphere, up to approximately 1000 km altitude, the neutral density is still significantly higher than the ion density. Collisions between ions and neutrals cause an exchange of momentum between the two species, so the motion and dynamics of ions and neutrals influence each other. Below, we show examples of dawn-dusk asymmetries in both the neutrals and ions of the thermosphere and its embedded ionosphere.

The neutral atmosphere

In the thermosphere, i.e. the altitude range from approximately 85 up to 600 km, the dynamics are mainly dominated by dayside solar heating, which drives a diurnal circulation of neutrals from the dayside to the nightside (e.g. Rees, 1979; Manson et al., 2002).
Due to a combination of the Earth's rotation (which introduces an opposite effect of the Coriolis force at dawn and dusk) and the fairly slow transport, the induced noon-midnight asymmetry in neutral density and temperature becomes shifted towards a dawn-dusk asymmetry. Figure 15, reproduced from Kervalishvili and Lühr (2013), shows maps of the relative thermospheric mass density enhancements (ρ_rel = ρ/ρ_model) for three local seasons: winter, combined equinoxes and summer (measurements from the Northern and Southern Hemispheres are combined). The dawn-dusk density asymmetry is most pronounced during local winter, when the solar illumination is at a minimum and the transport slower. Asymmetries in the neutral population also affect the ionosphere: due to collisions between neutrals and ions, a higher neutral density causes enhanced drag and thus reduced plasma convection (e.g. Förster et al., 2008). Also, higher neutral densities, as shown in Fig. 15, shift the production levels of O+ to higher altitudes, where reactions with other constituents such as O₂ and N₂ are less frequent, thus increasing the escape probability. A comprehensive discussion of the interaction between the neutral atmosphere and the ionosphere is given in Bösinger et al. (2013).

Figure 14. The spatial distribution of electron precipitation responsible for the diffuse aurora (top) and monoenergetic aurora (bottom). Note the different sense in the asymmetry of auroral emission (after Wing et al., 2013).

Ionospheric convection

Embedded in the thermosphere is the ionosphere, with the highest ion concentrations around 200-400 km (the ionospheric F layer), where ionisation of atomic and molecular oxygen by solar ultraviolet radiation (10-100 nm wavelength) is the dominant formation process. The ionosphere is magnetically coupled to the magnetosphere, and the interaction of the solar wind with the dayside magnetopause will therefore also directly affect ionospheric convection. In particular, during southward-oriented IMF, a large-scale fast circulation of plasma in the magnetosphere is set up (Dungey, 1961). In the polar ionosphere, this circulation is manifested as two large-scale convection vortices. A cross-polar electric field is set up between the foci of the two vortices. Since this electric field is essentially the projection of the solar wind electric field across the reconnection line on the dayside, the cross-polar potential is often used as a proxy for the solar wind energy input to the magnetosphere. Figure 16 shows maps of ionospheric convection in the Northern Hemisphere, in the form of potential plots. These synoptic maps were constructed from electric field measurements from the Cluster Electron Drift Instrument (EDI; see Paschmann et al., 2001) mapped down to 400 km altitude in the ionosphere, and converted to electric potentials by using the relation E = −∇Φ. Ground-based studies based on the Super Dual Auroral Radar Network (SuperDARN; see e.g. Greenwald et al., 1995) give similar results. Southern Hemisphere patterns are similar, but are essentially mirrored with respect to dawn and dusk. For purely southward IMF conditions (middle panel), the two large-scale convection cells are clearly apparent. The flow is mainly antisunward across the central polar cap, but skewed towards the pre-midnight sector behind the terminator. The dawn-dusk asymmetry is perhaps best seen in the resulting convection patterns of Fig. 16.
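The E = −∇Φ step used to build such potential maps can be illustrated in one dimension: integrating the mapped dawn-dusk component of E along a dawn-dusk path across the polar cap recovers the cross-polar-cap potential. The uniform 25 mV m⁻¹ field and 3000 km path below are synthetic, illustrative values.

```python
import numpy as np

# Path across the polar cap at 400 km altitude, dawn to dusk [m].
y = np.linspace(-1.5e6, 1.5e6, 301)
Ey = np.full_like(y, 25e-3)            # mapped electric field [V/m], synthetic

# Phi(y) = -integral of E dy (up to an arbitrary constant).
phi = -np.cumsum(Ey) * (y[1] - y[0])
cpcp = phi.max() - phi.min()           # cross-polar-cap potential [V]
print(f"cross-polar-cap potential ~ {cpcp / 1e3:.0f} kV")   # ~75 kV
```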
It is hard to envisage magnetospheric processes as the only source of these asymmetries. Atkinson and Hutchison (1978) attributed the lack of mirror symmetry to nonuniformities in ionospheric conductivity. They noted that a steep conductivity gradient across the day-night terminator tended to give a stronger squeezing of the plasma flow toward the dawnside of the polar cap. Tanaka (2001) used simulations with a realistic conductivity distribution to reproduce the observed asymmetries, and also noted that a uniform conductivity yielded symmetric convection cells. The fact that the dawn-dusk mirror symmetry breaking can be explained by nonuniformities in ionospheric conductivity implies that magnetospheric convection is not simply the result of processes at the magnetospheric boundaries or in the magnetotail, but that it is modified by ionospheric effects.

Ionospheric outflow

Yau et al. (1984) found that the upflow of both O+ and H+ with energies of 0.01 to 1 keV and pitch angles of 100°-160° was larger at dusk. They also found a minimum in outflow in the post-midnight sector. They further noted that the asymmetry was altitude related, which they attributed to ion conic or beam acceleration. In a study by Pollock et al. (1990), however, the density of upwelling ions with low energies (0-50 eV/q) was found to have only a weak relation with magnetic local time, whereas the upwelling velocities differed for different ion species. Even with no asymmetry in the ionospheric source, transport of ionospheric plasma can cause asymmetric deposition in the magnetosphere. For example, Howarth and Yau (2008) used Akebono measurements to study trajectories of polar wind ions. They found a strong IMF B_Y dependence, with deposition primarily in the dusk sector of the plasma sheet when IMF B_Y was positive, and a more even distribution when IMF B_Y was negative. Their study also suggested that ions emanating from the noon-dusk sector of the ionosphere could travel further down the tail, since the magnetic field lines are more curved. Likewise, Liao et al. (2010) examined the transport of O+ (mainly from the cusp region) to the tail lobes. For positive IMF B_Y, O+ from the Northern Hemisphere cusp was found to be more likely to be transported to the dawn lobe, whereas O+ from the Southern Hemisphere cusp/cleft region was transported to dusk. The IMF B_Y-induced asymmetry, with opposite effects in the Northern and Southern Hemispheres, can probably be explained by corresponding asymmetries in the dayside reconnection. This, again, leads to an asymmetric convection between the hemispheres (e.g. Haaland et al., 2007) and consequently in the transport of cold plasma from the ionosphere via the tail lobes to the plasma sheet. In addition to the IMF B_Y-induced asymmetries, observations also indicate the presence of a persistent dawn-dusk asymmetry in plasma transport. Both Noda et al. (2003) and Haaland et al. (2008) noted a persistent duskward convection, unrelated to the IMF direction. In Haaland et al. (2008) this asymmetry was related to the above-mentioned day-night conductivity gradient in the ionosphere (see Sect. 2.6.2). Furthermore, Yau et al. (2012) extended the single-particle simulations of O+ outflow to storm cases and found a clear dawn-dusk asymmetry. During the five geomagnetic storms investigated, they found that the deposition of O+ was on average ~3 times higher in the dusk than in the dawn plasma sheet. A similar result, but for cold ion outflow (mainly protons with thermal and kinetic energies lower than 70 eV), was reported by Li et al. (2013).
Figure 17, from this study, illustrates the persistent asymmetry. There is a larger deposition of cold ions of ionospheric origin in the dusk sector. In addition, there is also a strong IMF B_Y modulation (not shown). Using the same data set, Li et al. (2012) also determined the source area of the cold ions, and found the polar cap regions to be the dominant contributors of cold plasma. Interestingly, no significant dawn-dusk asymmetry was found in the source.

Figure 17. Maps of the deposition of cold ion flux from the ionosphere to the plasma sheet during periods with southward IMF conditions. The top panel shows the deposition of cold ions traced from Cluster observations in the Northern Hemisphere polar cap and lobes; the lower panel shows the corresponding maps of ions traced from the Southern Hemisphere. There is a clear dawn-dusk asymmetry, with higher fluxes, and thus larger deposition, in the dusk sector. (Adapted from Li et al., 2013.)

Table 6. Ionospheric and thermospheric dawn-dusk asymmetries.
Process/property: Large-scale convection. Asymmetry: clockwise rotation of the convection cells. Explanation: ionospheric conductivity. References: Atkinson and Hutchison (1978); Tanaka (2001); Ridley et al. (2004).
Process/property: Thermospheric density anomaly. Asymmetry: higher densities on dusk. Explanation: solar illumination, local heating and transport; Coriolis force opposing the ion drift on dawn and enhancing it on dusk. Reference: Kervalishvili and Lühr (2013).

Solar wind-magnetosphere coupling

The impact of the solar wind on the Earth's magnetosphere drives activity in the magnetospheric system. The most significant coupling of the solar wind to the magnetosphere is via reconnection. While reconnection itself is most efficient under southward IMF B_Z, the orientation of IMF B_Y strongly influences asymmetries in the reconnection process. For a given event, a non-zero IMF B_Y will result in many asymmetric signatures in the magnetosphere and ionosphere, by imposing a torque on the magnetic flux tubes and their transport from dayside to nightside (Cowley, 1981). Such a torque leads to a tail flux asymmetry and shifted nightside reconnection, and therefore to asymmetries in particle populations and plasma convection in the plasma sheet. The lobes of the magnetosphere also experience density asymmetries under non-zero IMF B_Y, with the northern lobe having a higher dawnside density under positive IMF B_Y. The IMF B_Y field penetrates to geosynchronous orbit, creating an asymmetry in geosynchronous B_Y of 30 % (Cowley et al., 1983). The twisted open flux tubes also result in skewed ionospheric convection patterns (Ruohoniemi and Greenwald, 2005; Haaland et al., 2007, see also Fig. 16). Even when large statistical studies are used with average IMF B_Y = 0, many dawn-dusk asymmetries remain. IMF data are usually presented in the geocentric solar ecliptic (GSE) or the geocentric solar magnetospheric (GSM) systems, where the x axis is defined as pointing from the Earth toward the Sun. The large majority of magnetospheric studies are presented in such coordinate systems. They are useful for displaying satellite trajectories, solar wind velocity and magnetic field measurements, magnetopause and bow shock positions, magnetosheath and magnetotail magnetic fields and plasma flows, etc. A solar wind velocity flowing straight from the Sun to the Earth would only have a V_X component in such a system, with V_Y = V_Z = 0. However, this does not take into account the aberration, or rotation, of the solar wind flow direction due to the Earth's motion through space as it orbits the Sun. Since the Earth is moving in the −Y_GSE direction, a small rotation of the coordinate system is required to identify the true flow direction impacting on the Earth's magnetopause. The aberrated GSE coordinate system (AGSE) removes this small bias with the rotation angle θ_aberr = tan⁻¹(V_E/V_sw), where V_E is the orbital velocity of the Earth around the Sun (≈30 km s⁻¹). Many studies that present dawn-dusk asymmetries do not utilise the AGSE or AGSM coordinate systems.
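A minimal sketch of the aberration correction just described, rotating GSE x-y components into the aberrated (AGSE) frame; the sign convention of the rotation is chosen for illustration and should be checked against a given study's definition.

```python
import numpy as np

V_EARTH = 30.0   # Earth's orbital speed [km/s]

def aberration_angle_deg(v_sw_kms):
    """theta_aberr = arctan(V_E / V_sw)."""
    return np.degrees(np.arctan2(V_EARTH, v_sw_kms))

def gse_to_agse(x, y, v_sw_kms):
    """Rotate GSE x-y components so that the -X axis lies along the apparent
    solar wind flow direction (illustrative sign convention)."""
    t = np.radians(aberration_angle_deg(v_sw_kms))
    return x * np.cos(t) - y * np.sin(t), x * np.sin(t) + y * np.cos(t)

print(f"{aberration_angle_deg(400.0):.1f} deg")   # ~4.3 deg for 400 km/s wind
```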
Magnetosheath asymmetries are a direct result of solar wind driving. The motion of dayside reconnected flux tubes is asymmetric based on the IMF direction (Cooling et al., 2001), such that the IMF clock angle controls the location of flux transfer event (FTE) signatures (Fear et al., 2012). In general, more FTEs are observed in the dusk sector of the magnetopause. Initially, this was attributed to a stronger duskside magnetic field in the magnetosheath due to Parker spiral IMF draping (Kawano and Russell, 1996). However, recent results found that the differences in FTE occurrence by IMF spiral angle sector are not consistent with the Parker spiral IMF orientation (Y. L. Wang et al.). The magnetopause boundary becomes more asymmetric under strongly driven southward IMF B_Z, such that geosynchronous spacecraft are more likely to encounter the magnetopause on the dawn side rather than the duskside. Dmitriev et al. (2004) suggested that this could be due to either more intensive magnetopause erosion in the pre-noon/dawn sector, or the asymmetric ring current effect "pushing" the duskside magnetopause farther out. While the asymmetric ring current during storms is a result of ion drift toward dusk, solar wind pressure enhancements can increase the asymmetry of an already asymmetric ring current by inducing an azimuthal electric field that locally energises particles. The coupling does not only operate in one direction; magnetospheric conditions can also change the solar wind-magnetosphere coupling. Borovsky and Denton (2006) have proposed that the plasmaspheric plume will decrease solar wind-magnetosphere coupling, or the geoeffectiveness of solar wind structures. When a plume extends to the magnetopause (Elphic et al., 1996; McFadden et al., 2008; B. M. Walsh et al., 2013) it will mass load a spatial region at the magnetopause, typically on the duskside. As the density increases, the localised reconnection rate will decrease, causing a decrease in coupling. It is uncertain whether this localised decrease can be significant enough to impact the magnetospheric convection system.

Magnetosphere-ionosphere coupling

The ionosphere plays an active role in determining the state of magnetospheric convection, providing closure for the magnetospheric currents. The amount of current that can be carried through the ionosphere is determined by the ionospheric conductivity. It has been noticed that the day-night gradient of the ionospheric conductivity produces the dawn-dusk asymmetry in the polar cap convection (Atkinson and Hutchison, 1978). Observations and modelling suggest that the two-cell ionospheric convection pattern is rotated clockwise with respect to the noon-midnight meridian even for IMF B_Y ≈ 0 conditions (e.g. Ridley et al., 2004; Ruohoniemi and Greenwald, 2005; Haaland et al., 2007; Cousins and Shepherd, 2010, see also Sect. 2.6.2 and Fig. 16).
The dawn-dusk asymmetry in ionospheric convection resulting from the conductance gradient (e.g. Atkinson and Hutchison, 1978; Tanaka, 2001; Ridley et al., 2004) may affect the geometry of the magnetotail lobes and, therefore, the geometry of the plasma and current sheet. Zhang et al. (2012) used the three-dimensional global MHD Lyon-Fedder-Mobarry (LFM) model to simulate the magnetospheric response to solar wind/IMF driving. The realistic model of the ionospheric conductance included the effects of electron precipitation and solar UV ionisation. The numerical experiment was controlled to eliminate all asymmetries and variability in the solar wind in order to isolate the effect of the ionospheric state on magnetotail activity. These controlled simulations by Zhang et al. (2012) suggest that the ionospheric conductance can regulate the distribution of fast flows in the magnetotail so that the flows are more intense in the pre-midnight plasma sheet. The simulations by Zhang et al. (2012) revealed that gradients in the Hall ionospheric conductance are necessary to create the dawn-dusk asymmetry (note that neither IMF B_Y nor solar wind V_Y was included). These simulations are confirmed by observations; the observed distributions of Hall conductance lead to a rotation of the polar cap convection in order to preserve current continuity. The rotation results in the displacement of the symmetry axis of the two-cell convection from the noon-midnight meridian to 11:00-23:00 LT, as shown in Fig. 16. The clockwise rotation of the convection pattern causes more open flux to be diverted towards the duskside of the magnetotail. This results in a dawn-dusk asymmetry in the loading and, consequently, the reconnection of magnetic flux in the plasma sheet (Smith, 2012). Numerical tests including clockwise as well as (unrealistic) anticlockwise rotations of the polar cap convection pattern have shown a linear correlation between the degree of convection pattern rotation and the degree of reconnection asymmetry. Ionospheric outflow may also influence processes in the magnetotail plasma sheet. It has been argued by Baker et al. (1982) that asymmetries in the distribution of enhanced O+ density may define regions in the plasma sheet where tearing mode growth rates are increased and the instability threshold is lowered. They pointed out that statistical studies of the O+ concentration in the plasma sheet revealed a significant dawn-dusk asymmetry, with a larger occurrence rate in the pre-midnight sector. Adopting the criterion for the onset of the linear ion tearing instability (Schindler, 1974), Baker et al. (1982) studied the possible role of ionospheric O+ ions in the development of plasma sheet tearing. Their analysis yielded a maximum tearing growth rate in the range of −15 < X_GSM < −10 R_E and Y_GSM ~ 5 R_E. Recent statistical studies of Geotail/EPIC data have confirmed that the average energy of the O+ ions increases toward dusk (Ohtani et al., 2011). The observed asymmetry in monoenergetic auroral electron precipitation (Fig. 14, bottom) is also thought, in part, to be a result of magnetosphere-ionosphere coupling. The precipitating energy flux can be associated with the upward region 1 field-aligned currents, which are mostly located in the pre-midnight sector (e.g. Wing et al., 2013, and references therein).

Plasma sheet and the inner magnetosphere

As geomagnetic activity increases, the boundary between open and closed drift paths moves closer to Earth.
Thus, protons and electrons from the plasma sheet are able to access geosynchronous orbit during storms. Using LANL-MPA (Los Alamos National Laboratory Magnetospheric Plasma Analyzer) measurements, Korth et al. (1999) found higher densities toward dawn for both electrons and ions (with energies 1 eV-40 keV) at geosynchronous orbit during periods of higher geomagnetic activity. For low geomagnetic activity, the electron and ion densities peak at midnight, but the reasons for the lower densities at dawn and dusk differ. For electrons, the duskside region is dominated by closed drift paths for electron plasma sheet energies, while plasma sheet electrons are lost to precipitation on the dawn side. For protons, the ions take longer to drift toward the duskside, allowing more losses to precipitation. Temperatures also exhibit an asymmetry, with hotter ion temperatures toward dusk. In addition to the gradient-curvature drift yielding higher ion temperatures toward dusk in the magnetotail, higher energy ions that drift toward dawn are preferentially lost to particle precipitation. During a geomagnetic storm, ion temperatures toward dusk increase while those toward dawn decrease, yielding a more pronounced asymmetry around minimum Dst. Such cold temperatures in the dawn-noon sector have been observed during geomagnetic storms with in situ measurements at geosynchronous orbit and with remote TWINS ENA measurements (Keesee et al., 2012). During enhanced geomagnetic activity, plasma sheet ions penetrate deep into the inner magnetosphere (e.g. Ganushkina et al., 2000; Runov et al., 2008). The low-energy (< 10 keV) part of this population is subject to the corotation drift and drifts dawnward, whereas the high-energy (> 10 keV) part drifts duskward following gradient- and curvature-drift paths (see Fig. 12). A population with energy ~10 keV often becomes "stagnant", forming the so-called "ion nose structures", named after the characteristic shape they produce in the energy spectrogram (e.g. Ganushkina et al., 2000). Statistical studies of ion nose structures observed by Polar/CAMMICE revealed a dawn-dusk asymmetry in the event distribution, with a larger occurrence rate in the dusk sector. In general, enhanced plasma sheet convection and energetic plasma sheet particle injections build up an asymmetric pressure in the inner magnetosphere, with a stronger enhancement on the duskside that results from the asymmetric drifts of energetic ions and electrons. Duskward gradient and curvature drifts of energetic ions lead to localised pressure increases.

Open issues and inconsistencies

Many of the dawn-dusk asymmetries discussed in the previous sections can be explained by asymmetries in the input. In particular, the IMF interaction with the magnetosphere is known to impose significant asymmetries in plasma entry and flux transport. On the other hand, the difference in the behaviour and motion of ions and electrons in nonuniform fields is another source of asymmetries. However, the relative importance of these two mechanisms is largely unknown. Below, we try to identify some still-open issues in our understanding of the dawn-dusk asymmetries observed in the Earth's magnetosphere and ionosphere.

External versus internal influence

As seen in Sects. 2.1 and 2.1.2, pronounced dawn-dusk asymmetries exist in the magnetosheath. A still open question is the degree to which this asymmetry translates into a corresponding asymmetry inside the magnetopause, and whether it can explain, for example,
the observed asymmetries in the properties and processes of the nightside plasma sheet. The relative importance of the ionosphere for magnetospheric dawn-dusk asymmetries is also largely unknown. Conductivity effects, as discussed in Sects. 2.6.2 and 3.1, are believed to cause a local ionospheric asymmetry in ionospheric plasma transport, but their effect on magnetotail flows is still disputed. Likewise, neutral density and winds can influence both ion outflow and ionospheric drag, but the role of the thermosphere for large-scale magnetospheric dawn-dusk asymmetries is still largely unknown.

Ring current closure

One of the first scientific observations of a dawn-dusk asymmetry in geospace was reported by Chapman (1918). He noted that ground magnetic perturbations associated with geomagnetic storms were larger at dusk. The first direct observations of an asymmetric ring current were made in the early 1970s (e.g. Frank, 1970) as spacecraft observations became available. An asymmetry in the ring current naturally raises the question of current closure. Initially, the observed dawn-dusk asymmetry, or partial ring current, was mainly attributed to divergence either through field-aligned currents into the ionosphere, through the cross-tail current, or as local current loops within the magnetosphere (e.g. Liemohn et al., 2013). The recent results of Haaland and Gjerloev (2013) indicate a mutual influence between the ring current and the magnetopause current, although a clear current loop connecting the ring current with the magnetopause current has not been firmly established.

The impact of the plume on magnetospheric driving

As discussed in Sect. 2.5, the plasmaspheric plume is capable of transporting large amounts of plasma from the dense plasmasphere to the outer magnetosphere, primarily in the dusk sector. Mass loading of the dayside magnetopause in this region has been shown to impact reconnection (B. M. Walsh et al., 2013) and could impact the efficiency of solar wind-magnetosphere coupling. Borovsky et al. (2013) predict that the plume can reduce reconnection by up to 55 % during coronal mass ejections (CMEs) or high-speed streams. On a larger scale, Borovsky and Denton (2006) looked at geomagnetic activity with and without a plume present at geosynchronous orbit and concluded that the impact of the plume is significant enough to reduce geomagnetic activity. By contrast, Lopez et al. (2010) argue that although the plume may reduce the reconnection rate locally where high-density material contacts the magnetopause, the total reconnection rate integrated across the full X-line should not change significantly. In the Lopez et al.
The dawn-dusk asymmetries highlighted in the schematic of Fig. 18 can be summarised as follows: (1) the foreshock shows a greater occurrence of ULF waves in the quasi-perpendicular shock region towards dawn; (2) the magnetosheath is thinner, more turbulent and denser at dawn, but the magnetic field strength is greater at dusk; (3) the magnetopause is thicker at dawn, but the magnetopause current density is greater at dusk; (4) the plasmasphere extends out to the magnetopause in plumes, typically seen on the duskside; (5) the ring current is asymmetric and stronger on the duskside; (6) high-energy particle injections at geosynchronous orbit are more common on the duskside; (7) magnetotail ions are made up of hot and cold populations, and the hot population is colder and the cold population is hotter towards dawn (distributions shown in differential energy flux); (8) the occurrence of convective fast flows in the tail shows no dawn-dusk asymmetry, but flows towards dusk are faster; (9) the magnetotail current sheet is thicker towards dawn and the current density is greater towards dusk; (10) signatures of reconnection are more commonly seen towards dusk. Summary and conclusions Asymmetries are ubiquitous features of the Earth's magnetosphere and plasma environment. Noon-midnight asymmetries are mainly governed by solar illumination, resulting in strongly asymmetric ionisation on the nightside and dayside. Magnetic gradients due to the compressed sunward-facing magnetosphere at noon and the correspondingly stretched magnetotail on the nightside also introduce a significant noon-midnight asymmetry. Similarly, north-south asymmetries can often be explained by seasonal differences in the illumination of the two hemispheres, and consequently differences in ionospheric conductivity. Differences in the geomagnetic field between the two hemispheres will also create north-south asymmetries in ionospheric plasma motion. Persistent dawn-dusk asymmetries, on the other hand, have received less attention and are not always easy to explain. In this paper, we have tried to give an overview of prominent dawn-dusk observational features and their possible explanations. Figure 18 gives a schematic overview of some of the dawn-dusk asymmetries discussed in this paper. We have focused on four key aspects: (1) the role of external influences such as the solar wind and its interaction with the Earth's magnetosphere; (2) properties of the magnetosphere itself; (3) the role of the ionosphere for magnetospheric dynamics; and (4) the coupling between the solar wind, magnetosphere and ionosphere. As reviewed in Sect. 2.1, external factors such as bow shock geometry and the direction of the interplanetary magnetic field, labelled (1) and (2) in Fig. 18, are important for dawn-dusk asymmetries. The shock geometry creates an asymmetry in plasma properties at the dawn and dusk flanks of the magnetosheath. In addition, the IMF orientation exerts significant control over both magnetospheric and ionospheric processes. A key element here is the dayside interaction between the IMF and the geomagnetic field, and IMF B_Y is perhaps the strongest driver of dawn-dusk asymmetry in the magnetosphere. This interaction is also manifested in the ionosphere, where the large-scale plasma convection pattern shows a systematic response to IMF orientation. Asymmetries in the magnetosheath are also reflected inside the magnetosphere. In Sect. 2.3 we pointed out the role of plasma entry from the magnetosheath along the magnetopause flanks.
Differences in dawn and dusk magnetosheath plasma properties will consequently influence the geometry (9), plasma properties (7) and processes in the magnetotail (8), (10). External drivers are not able to explain all dawn-dusk asymmetries, though. As discussed in Sect. 2.5, a noticeable dawn-dusk asymmetry arises as a consequence of the gradient and curvature drift of particles; electrons and ions are deflected in opposite directions. This is most pronounced in the inner magnetosphere, where the magnetic gradients are stronger. A prominent example is the asymmetric ring current (5), with a stronger net current on the duskside. In Sect. 2.6 we discussed dawn-dusk asymmetries in the thermosphere and its embedded ionosphere. In addition to asymmetries imposed by the magnetosphere, these regions also possess locally induced dawn-dusk asymmetries. Differences in thermospheric heating and conductivity gradients in the ionosphere are two prominent examples. In order to fully understand the dynamic behaviour of geospace, including the mechanisms responsible for dawn-dusk asymmetry, we must treat the solar wind, magnetosphere and ionosphere as a fully coupled system. As seen in Sect. 3, key aspects in regulating the response of this coupled system are the degree of feedback provided by the magnetosphere to the solar wind input, and the feedback from the ionosphere to the magnetosphere. The feedback from the ionosphere, both in the form of ion outflow (discussed in Sect. 2.6.3) and through the role of ionospheric conductivity (discussed in Sect. 3.2), has been studied extensively and is believed to influence the magnetosphere. Magnetospheric feedback to the magnetopause and bow shock regions, for example the effect of the plume (labelled (4) in Fig. 18) on dayside reconnection (discussed in Sect. 4.3), is still largely unexplored, however. It is therefore fair to say that there are still major gaps in our understanding of the phenomena that introduce asymmetries in geospace.
21,898.8
2014-07-01T00:00:00.000
[ "Physics", "Environmental Science" ]
Understanding the development of teaching and learning resources: a review This paper is a literature review of research concerned with the production of learning resources in higher education (HE). It forms part of a larger research project in progress. DOI: 10.1080/0968776020100202 Introduction In the next five to ten years there will be a sector-wide need in HE to produce a wider range of teaching and learning materials, for example, more resource-based learning (RBL) materials for self-directed study. The context for this is rapid technological advancement, increasing numbers and diversity of students and a decline in the unit of resource (NCIHE, 1997). In particular, networked learning resources are increasingly presented as a cost-efficient means of maintaining teaching quality (NCIHE, 1997). However, the relationship between cost-effectiveness and quality is contested. This future scenario has significant implications for university staff, whose time represents the largest cost element in the production of learning resources (Chiddick, Laurillard, Quigley and Wolf, 1997). Some commentators (such as Noble, 1998) present a pessimistic view of the future status of university staff given the increasing automation of teaching and the intrusion of commercial interests. The increase in student-centred learning and use of technology may also lead to a significant change in the role of the teacher, from the 'sage on the stage' to the 'guide on the side' (Jones, 1999; Salmon, 2000). An extensive survey of the literature has identified few studies which specifically investigate the production of learning resources in campus-based higher education institutions (HEIs). We suggest that the production of teaching and learning resources has not been considered a legitimate research topic, since it is an implicit and routine aspect of academic staff duties. Therefore, there have been few in-depth studies which record everyday accounts of learning resources production and the factors affecting it. Firstly, we introduce the background to the research project, of which this paper is one part. We then critically review six models of the learning resources production process and its organization. Finally, we suggest that a phenomenographic account of the everyday practices of learning resources production is needed. Background to the research project The review presented here is part of a research project funded by the Higher Education Funding Council for England (HEFCE)'s Teaching Quality Enhancement Fund (TQEF). The project aims to support the implementation of the institution's Learning and Teaching Strategy, which aims to promote effective student learning. For a national overview of Learning and Teaching Strategies see Gibbs, Habeshaw and Yorke (2000) and HEFCE (2001a; 2001b). The research project will develop an understanding of the needs of academic staff for support in the production of a variety of teaching and learning resources, including networked learning resources, for use in undergraduate teaching across all subject disciplines within the university (Plewes and Issroff, 2002). The project will make recommendations about possible future structures for learning resources production at the university, including changes to the support and training infrastructure. Literature review In this section we critically review six prominent models which document and cost the learning resources production process.
We focus specifically on two aspects: firstly, the level at which the analysis operates (for example, institution, course, faculty or department, member of staff, or student); secondly, the levels at which the costs of production of course materials are experienced. We also consider the purpose, structure and content of the models, critically analyse the advantages and disadvantages of their applications and the issues arising from this. We discuss: • Open University (OU) Course Materials Production Models (Rumble, 1976; Bates, 1994); • Costs of Networked Learning Course Lifecycle Model (Bacsich, Ash, Boniwell, Kaplan, Mardell and Caven-Atack, 1999); • The Pedagogic Toolkit Model (Oliver and Conole, 1999); • Cost Structures of Teaching Methods Model (NCIHE / Chiddick et al., 1997); • Student Preferences / Consumption of Learning Resources Model (Hobbs and Boucher, 1997; Boucher, 1998); and • The Course Resource Appraisal Model (Laurillard, 1999). As described previously, there is a lack of published literature describing the course materials production process in campus-based HEIs. However, there is an established literature on the course material production process for distance learning, in particular at the UK Open University (OU) (Rumble, 1976; 1992; Bates, 1994; 1995). Recent developments in online distance education and virtual or 'mega-universities' (Daniel, 1996) also build upon this OU model. We therefore use this model as a starting point for our review. However, since the OU is not representative of most HEIs, the conclusions may not be readily transferable to other contexts. OU course materials are generally developed by a course team of thirty people, who meet regularly. The team includes subject experts, educational technologists, a BBC producer, designers, editors and a course assistant. Since there can be several drafts, the development and approval of course materials is time-consuming (for example, 18-20 months; Rumble, 1976), and therefore also expensive (£450-500 for one hour of printed study materials in 1988; Bates, 1994). Such a course production system can only be justified by producing materials for large numbers of students (some OU science foundation courses have student numbers of 30,000; Bates, 2000), and/or by using the same materials for many years. Typical course life is 4-8 years (Rumble, 1976). Course materials are maintained by one to two members of the course team over the course life, at the end of which they are remade or replaced with new materials (Rumble, 1997). This type of mass, industrialized production with a highly specialized division of labour (Lewis, 1971a, 1971b; Rumble, 1992; Farnes, 1993) is specific to the OU and is not common in campus-based HEIs, where learning resources production is more of an ad hoc 'cottage industry' (Farnes, 1993; Bates, 1997; Peters, 1998). Oliver, Bradley and Boyle (2001) have recently described the organizational and pedagogical difficulties in a collaborative, iterative approach to the distributed authoring of course materials for a virtual university. This builds upon the OU course materials production process described above, and again includes a highly specialized division of labour (Peters, 1998). Rumble (1997) has developed a weighting technique which expresses the staff time required to develop OU course materials. The unit of analysis is the faculty, but there is some inter-faculty variation which reflects disciplinary differences in the types of learning resources developed (e.g.
Becher, 1989; Smeby, 1996; Neumann, 2001). Sufficient course materials to keep the average learner studying for 10-12 hours per week are termed one unit. The ratio of course development time to course maintenance time is 10:1. That is, staff could maintain ten units of existing materials in the time taken to develop one new unit from scratch. Where staff act as consultants and contribute to the development of units by other people, this is weighted at 0.5 of a new unit. Rumble (1976) and Wagner (1977) have also conducted macro-analyses of the staff time taken to develop course materials at the level of the whole institution. Rumble and Wagner have produced formulae to calculate the total cost of course material production to the institution. This is a function of the number of courses in development and presentation, student numbers, the average cost of a course in development and presentation, the average delivery cost per student, institutional overhead costs (fixed) and average course life (Wagner, 1977). The purpose of this analysis was to inform an ongoing debate about the comparative cost of educating one student at the OU rather than at a traditional HEI. Course Lifecycle Model The JISC-funded 'Costs of Networked Learning' project (Bacsich, Ash and Heginbotham, 2001; Bacsich et al., 1999) developed a three-stage cyclic course lifecycle model and advocated the use of activity-based costing (ABC) methods to investigate the costs of networked learning. The purpose of this model is to make hidden costs explicit and to promote the use of a standard costing methodology for the HE sector. The three stages of the course model are: planning and development; production and delivery; maintenance and evaluation. The model is based on course planning frameworks from the distance education sector (see the previous section), designed to be equally applicable to both traditional and electronic learning resources. The model considers the course in the full context of pre-course (R & D) and post-course (evaluation) activities. ABC is an accounting method where the cost of a product or service is determined by the activities involved in its production, not by volume-related allocation of overheads such as staff hours. ABC methods may be applied at a number of different levels, such as department, faculty or institution (Bacsich et al., 2001). Application of the model involves the use of a nested set of spreadsheets to examine costs at a variety of levels and to identify which of three stakeholders (the institution, staff or students) meets these costs. The student perspective is rarely considered elsewhere, although the Hobbs and Boucher (1997) model considers student preferences (see the later section). However, this model does not investigate the non-economic issues surrounding learning resources production. Also, specialist training and software are required before the ABC methods promoted may be used. Therefore, this approach may not be accessible to those without training in ABC methods. The results, however, should be comprehensible to all university staff, and the standard costing methodology should enable comparisons to be made.
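As a toy illustration of the activity-based costing idea, the Python sketch below attributes each activity's cost to one of the three stakeholders; the activities, hours, rates and fixed charges are invented for the example and are not drawn from Bacsich et al.

# Activity-based costing: each activity carries its own cost and is
# attributed to a stakeholder, rather than being buried in a volume-based
# overhead rate. All figures below are invented placeholders.
activities = [
    # (activity,                stage,         stakeholder,  hours, rate, fixed)
    ("course planning",         "planning",    "staff",        40, 35.0,   0.0),
    ("authoring web materials", "production",  "staff",       120, 35.0,   0.0),
    ("server hosting",          "delivery",    "institution",   0,  0.0, 800.0),
    ("printing course packs",   "delivery",    "student",       0,  0.0,  15.0),
    ("annual revision",         "maintenance", "staff",        12, 35.0,   0.0),
]

totals = {}
for name, stage, who, hours, rate, fixed in activities:
    totals[who] = totals.get(who, 0.0) + hours * rate + fixed

for who, cost in sorted(totals.items()):
    print(f"{who:12s} GBP {cost:8.2f}")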
The Pedagogic Toolkit Model This project produced an online course design toolkit (Oliver and Conole, 1999), designed to operate flexibly at a variety of levels: session, week, term and course. The toolkit enables teachers to input teaching activities and educational processes and to plan and cost the course development process. Although targeted at the implementation of learning technology into courses, the toolkit can also be used for traditional teaching methods and resources. The toolkit has three elements. Firstly, the 'media rater' looks at the teaching activities to be used. Secondly, the 'course modeller' allocates student time between these teaching activities and, finally, the 'media selector' considers the costs of staff time and resources required to support the course. Estimates of the time taken to develop various types of materials from scratch are presented, although there is no indication of how these data were derived. The toolkit is based on Laurillard's (1993) 'conversational framework', which also informs two other models discussed here (Chiddick et al., 1997; Laurillard, 1999). The toolkit is easy to use and encourages staff to reflect on the mix of media and teaching activities in their courses and to consider alternatives in terms of the potential implications for their workload. Cost Structures of Teaching Methods Model NCIHE (1997) noted the recent shift in UK HEIs from a mixture of small-group teaching and lectures towards increased use of RBL materials for self-directed study. The implications of this scenario for staff workloads and costs to institutions were examined in Appendix 2 of NCIHE (1997) by Chiddick et al. (1997). Chiddick et al. (1997) analyse how a student's time might be distributed across three combinations of teaching methods, each based on different ratios of small-group teaching, lectures and RBL. Assumptions are made about the staff time for preparation and presentation required to produce one hour of student learning for each method (Table 2). However, there is no indication of the source of these time estimates or of how it has been established that these would produce 'one hour of student learning'. Table 2 lists, for each teaching method, the assumed preparation, presentation and total staff time in hours (Chiddick et al., 1997). The effect of increasing student numbers on staff time spent on learning resources production is analysed graphically as cost curves for each of the three scenarios. The report recommends the use of fixed-cost (for example, RBL) rather than variable-cost (for example, small-group) teaching methods. Fixed-cost teaching methods are insensitive to increased student numbers and their high production costs can be amortized over large numbers of students. Externally developed RBL material which can be adapted quickly is a more cost-effective option than developing RBL materials in-house, and a shift towards this is promoted (Table 3). Chiddick et al. (1997) developed their model in the context of continuing rises in student numbers. However, recent data (HEFCE, 2001c) suggest that the large increases in student numbers between 1988 and 1994 (a 67 per cent increase) were followed by a levelling off of demand for full-time undergraduate HE places between 1995 and 2001 (a 6 per cent increase). Also, in practice, the shift away from small-group teaching has not occurred in all HEIs. Some have continued to offer small-group teaching while reducing costs by employing lower unit cost staff (such as hourly-paid postgraduate students).
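The qualitative argument can be reproduced in a few lines of Python; because the source table is garbled (only the fragments 1.5, 2 and 20 survive), the preparation and presentation hours below are placeholders, and only the shape of the cost curves should be read from the output.

import math

def staff_hours(n_students, prep, present, max_group):
    # staff hours to deliver one hour of student learning to a cohort:
    # preparation is a fixed cost, presentation repeats once per group
    groups = math.ceil(n_students / max_group) if max_group else 0
    return prep + present * groups

for n in (50, 200, 800):
    sg  = staff_hours(n, prep=1.5,  present=1.0, max_group=15)   # small-group
    lec = staff_hours(n, prep=2.0,  present=1.0, max_group=100)  # lecture
    rbl = staff_hours(n, prep=20.0, present=0.0, max_group=0)    # RBL
    print(f"N={n:3d}: per-student staff hours  small-group {sg/n:.3f}  "
          f"lecture {lec/n:.3f}  RBL {rbl/n:.3f}")

The RBL cost is flat in absolute terms, so its per-student cost falls fastest as the cohort grows, which is exactly the fixed-cost versus variable-cost distinction on which the report's recommendation rests.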
Hobbs and Boucher (1997) suggest that student attitudes towards teaching methods should be considered, especially given the increasingly 'consumer-led' nature of HE. Therefore, they develop an economic analysis similar to that of Chiddick et al., using cost curves but incorporating 'end user preferences', that is, the consumption of learning resources by students. They suggest that, according to NCIHE (1997), students prefer traditional, non-technology-based teaching methods such as lectures and small-group teaching, and therefore the lowest cost option (RBL) may be unacceptable to students. However, relatively little is known about student preferences and this assumption may be anecdotal. The literature on learning resources in general concentrates on the production of resources by staff, rather than consumption by students, although student feedback on teaching may form part of revised quality assurance procedures. Further research into and explanation of student preferences would therefore be useful. Student Preferences / Consumption of Learning Resources Model The model examines the amount of teaching that can be delivered for a fixed cost output, rather than considering input (for example, the staff time to deliver a fixed amount of teaching). The analysis focuses on capital rather than labour as the key factor in learning resources production. Increasing use of networked learning resources relies upon the substitution of capital for labour, but it is debatable whether it reduces total costs. Boucher (1998) argues that Chiddick et al. do not adequately define what type of costs, total or average, are considered, and neglect other fundamental economic concepts such as economies of scale, marginal costs and diminishing returns. This model makes no distinction between internally and externally developed RBL materials, which both Chiddick et al. (1997) and Laurillard (1999) consider to be fundamental. Hobbs and Boucher suggest that the optimal solution of student preferences coincident with lowest cost requires improvements in the quality of RBL materials. The Course Resource Appraisal Model Laurillard's (1999) Course Resource Appraisal Model is a simple framework which tabulates the allocation of student study hours between various learning activities. These relate to different media forms, which in turn map on to particular technologies, as outlined in Table 4. Table 4 maps each media form to particular technologies: print, TV, video, DVD; library, DVD, web; seminar, online group; laboratory, simulation; and essay, product, model. The model looks at how student workload is distributed across various learning activities and media and assesses the implications for the workload of three groups of OU staff, termed academic, production and presentation. In this model, the radical changes in the role, workload and relations between technical and academic staff when producing electronic learning resources are significant, as discussed below. The production of electronic learning resources is especially labour-intensive in terms of the technical staff time used in development. This is the main factor in their expense. For instance, at the OU, changing 20 per cent of course material to electronic learning resources increases academic staff time by 40 per cent but doubles technical staff time compared with the production of traditional learning resources (Laurillard, 2001), assuming that all electronic learning resources are created from scratch. This is a specific example from Laurillard (2001); in general, the degree of change in workload depends on the nature of the material developed.
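Using only the OU figures quoted above (a 20 per cent from-scratch conversion giving +40 per cent academic time and a doubling of technical time), the implied workload multipliers can be sketched in Python as follows; the linear scaling away from the quoted operating point is our assumption for illustration, not Laurillard's.

def workload_multipliers(frac_electronic):
    # Anchored to the quoted operating point (Laurillard, 2001): at a 20%
    # from-scratch conversion, academic time is x1.4 and technical time x2.0.
    # Linear scaling in the converted fraction is an assumption.
    academic = 1.0 + 2.0 * frac_electronic     # +40% at frac = 0.2
    technical = 1.0 + 5.0 * frac_electronic    # x2.0 at frac = 0.2
    return academic, technical

for frac in (0.0, 0.1, 0.2, 0.4):
    a, t = workload_multipliers(frac)
    print(f"{frac:.0%} converted: academic x{a:.1f}, technical x{t:.1f}")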
An alternative option to creating electronic learning resources from scratch is to use generic, customizable resources. If 60 per cent generic, customizable resources are used, academic staff time increases by only 10 per cent and technical staff time increases by 20 per cent. Therefore it is not a cost-effective use of staff time for an individual to produce their own materials, as they do for traditional learning resources (Laurillard, 1999). In traditional campus HEIs, where staff are unlikely to work with dedicated teams of production staff, the entire burden of resource development and production would probably fall on academic staff. This is confirmed by a national survey of the production of electronic learning resources (HEFCE, 1999), which found that staff were mostly developing their own materials for their own needs. In addition, there are complex cultural reasons why generic, customizable learning resources are rarely used by academics in traditional, campus-based HEIs. While the review so far has mainly considered individual models, we now discuss some common features of the models. All of the models, by definition, are highly abstract and diagrammatic. Several have a strong accounting or economic focus and are concerned with costs. Few of the models explicitly consider staff except as passive variables, and even fewer consider students, with the exception of Bacsich et al. (1999). Most of the models are based on OU course production team methods which, although similar to the team production of online distance teaching materials, do not correspond well to learning resources production practices in traditional campus-based teaching. Laurillard's (1993) 'conversational framework' provides the theoretical basis underlying several of the models (Chiddick et al., 1997; Oliver and Conole, 1999; Laurillard, 1999). Discussion: cultural factors influencing development and reuse of learning resources There are two ways in which staff may acquire learning resources without producing them themselves. These are (i) inter-institutional collaboration in the production of resources and (ii) the use or adaptation of generic, customizable resources (Laurillard, 1999; JISC, 2002). Neither of these options is perceived as attractive by academic staff (HEFCE, 1999), indicating a relatively widespread and entrenched autonomous and decentralized model of resource production. Bates (2000) coined the term the 'Lone Ranger and Tonto' approach to describe actual practice in the development of learning resources in North American HE. Here the academic ('Lone Ranger') works with or employs an IT-literate graduate student ('Tonto') to develop electronic learning resources (Bates, 1997). However, this practice does not often produce a usable end product, as constant revisions to keep pace with technological change are necessary (Bates, 1997). In contrast to the six models reviewed, this approach describes actual practice and should, we argue, be developed further. If, as it seems from the examples given above, academic staff are mostly developing learning resources themselves, why is this the case? HEFCE (1999) identifies a series of well-known 'constraining and enabling factors' in the development of electronic learning resources. These include lack of IT skills, lack of time for IT training, conflicting priorities, management pressure, incentives and rewards, and lack of suitable examples (Tait and Mills, 1999). The opportunity cost of the use of academic staff time is a key issue.
HEFCE (1999) suggests that it is not cost-effective for institutions for staff to develop electronic materials. However, inter-institutional collaborative production can reduce development costs to a single institution and spread the risk (Laurillard, 1999). Especially at first-year undergraduate level, where the core curriculum is common to many institutions, there is clear potential for collaboration (Laurillard, 1999). However, at higher levels, where courses are research-led, difficulties arise which explain the lack of interest in this activity. The use of commercial resources is also possible. Purchasing generic commercial material is cheaper than in-house development or commissioning (Hunt and Clarke, 1997; Laurillard, 1999), but has tended to be a victim of the 'not-invented-here syndrome', where staff are reluctant to use materials which they feel do not bear their own personal stamp (Laurillard, 1999). However, Hammond et al. (1992: 160) suggest that the 'not-invented-here' label is flawed, as it underestimates and undervalues the conventional process by which lecturers prepare and update their material. They refer to textbooks, monographs, the research literature, and their own research and experience as sources of factual information, ideas, representations and organisation during the preparation of lecture notes, seminars and tutorials. While the use of commercial materials saves development time, additional time is needed to evaluate and adapt resources (Laurillard, 1999). Also, there are few commercial resources developed for or targeted specifically at HE and, conversely, there is little interest by commercial publishers in resources developed by HEIs (HEFCE, 1999). These issues relate to teaching staff reluctance to cede autonomy over teaching content and delivery (Ryan, Scott, Freeman and Patel, 2000). Electronic learning resources in particular raise these issues, given their potential to change radically the nature of academic work and university organization (see Noble, 1998). For example, the trend towards the automation and commodification of HE (Noble, 1998) and publicly available online materials may result in increased scrutiny of these materials, and potentially the substitution of materials for teachers (Jones, 1999). Conclusions This paper has identified and discussed six models which may be used to understand the development and re-use of learning resources. We have focused specifically on aspects which these models do not address, such as actual staff practices and the cultural context of development. There are few in-depth empirical studies of the everyday practices of academic staff in terms of learning resources production. Since these are rarely articulated, recorded or documented, there has been little systematic investigation of the ways in which learning resources production is conceptualized. Unless we explore and document these conventional forms of work in the production of traditional learning resources, it is difficult to understand the value attached to them and how the increased use of electronic learning resources represents a threat to these traditional resource production practices. As a result of these concerns, there is a need to develop a phenomenographic account of staff everyday practices in the production of a variety of teaching and learning resources.
Phenomenography is an emerging qualitative research method which is increasingly applied in educational research (such as Marton, 1981; Marton, Hounsell and Entwistle, 1984; Entwistle, 1997; Brew, 2001a; 2001b) to study, empirically, descriptive conceptions of the world around us. This includes the different ways in which people experience, perceive, understand and conceptualize the same phenomena.
5,110
2002-06-01T00:00:00.000
[ "Education", "Computer Science" ]
Optimization of orthogonal adaptive waveform design in the presence of compound Gaussian clutter for MIMO radar In this paper, an adaptive algorithm is proposed to develop orthogonally optimized waveforms with good correlation properties that are suitable for the detection of targets in the presence of strong clutter. The joint optimization at both the transmitter and the receiver is adapted, based on the secondary data and clutter, to maximize the signal-to-interference-plus-noise ratio (SINR) with knowledge of the target and clutter. The result shows good correlation properties and better SINR and signal-to-clutter ratio (SCR) compared to the existing iterative algorithm. The proposed algorithm also shows improved detection even for lower SCR when implemented with the GLRT. A multi-objective optimization (MOO) algorithm (Sen et al. 2013) was proposed to maximize the SINR using the orthogonal frequency division multiplexing (OFDM) radar signal with prior knowledge of the target and noise covariance. The two-stage waveform optimization algorithm of Nijsure et al. (2013) maximizes the signal-to-clutter-plus-noise ratio (SCNR) for adaptive distributed MIMO radar. An optimal transmit waveform was derived by maximizing the signal-to-noise ratio (SNR) (Friedlander et al. 2006) of the transmitted signal by controlling its space-time distribution to obtain a significant improvement in detection performance. The joint optimization of waveforms and receiving filters by an iterative algorithm (Chen et al. 2009) maximizes the SINR. An adaptive OFDM radar signal (Sen and Glover 2012) was designed to detect a target, employing spectral weights for the next transmitted waveform to maximize the SNR. An adaptive MIMO radar waveform algorithm (Zhang et al. 2009) was designed to improve target detection by maximizing the mutual information (MI) between the target impulse response and the received echoes, and also by minimizing the MMSE in estimating the target impulse response. From the literature it is understood that these algorithms consider either orthogonality or optimality in the design of waveforms, but not both. In this paper, ortho-optimal waveforms with good correlation properties, suitable for the detection of various targets in the presence of clutter with prior knowledge of the target and clutter, are developed using an adaptive algorithm. The algorithm is based on continuous training of the receiver and the transmit waveform as the environment changes, to best suit the dynamic radar scene. The performance measures used in this paper are SINR, SNR and the signal-to-clutter ratio (SCR). The rest of the paper is organized as follows. In "Signal model" we formulate the orthogonally optimized algorithm for the DFCW waveform design in order to minimize the cost function. In "System model" we introduce the system model and the orthogonally adaptive optimization algorithm. Design results from the proposed algorithms are discussed in "Results". Finally, conclusions are drawn in "Conclusion". Signal model Consider a MIMO radar system with N transmitting antennas, each transmitting a sequence of M samples, and R receiving antennas. A modified ant colony optimization algorithm (M_ACO) is used to generate orthogonal discrete frequency coding waveforms (DFCW) with good correlation properties. To achieve this objective, a cost function based on the peak sidelobe level ratio and the integrated sidelobe level ratio is minimized.
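The two correlation metrics can be made concrete with the short Python sketch below; random-phase codes stand in for the M_ACO-optimized DFCW sequences, which are not reproduced in the paper.

import numpy as np

rng = np.random.default_rng(0)

def correlation(x, y):
    # aperiodic correlation magnitude; numpy conjugates the second argument
    return np.abs(np.correlate(x, y, mode="full"))

def pslr_db(s):
    r = correlation(s, s)
    main = r[len(s) - 1]                        # zero-lag main lobe
    side = np.delete(r, len(s) - 1)
    return 20 * np.log10(side.max() / main)     # peak sidelobe ratio

def islr_db(s):
    r = correlation(s, s)
    main = r[len(s) - 1]
    side = np.delete(r, len(s) - 1)
    return 10 * np.log10(np.sum(side**2) / main**2)

# stand-in set: N = 4 unit-modulus random-phase codes of length M = 40
S = np.exp(2j * np.pi * rng.random((4, 40)))
for t, s in enumerate(S):
    print(f"waveform {t}: PSLR {pslr_db(s):6.1f} dB, ISLR {islr_db(s):6.1f} dB")

# crosscorrelation peak of waveforms 0 and 1, relative to the main lobe M
cp = correlation(S[0], S[1]).max() / 40.0
print(f"cross-peak (0,1): {20 * np.log10(cp):.1f} dB")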
Discrete frequency coding waveform A discrete frequency coding (DFC) sequence is a random arrangement of {0, 1, 2, 3, …, M − 1}. A waveform whose adjacent subpulses of fixed time duration are modulated with DFC sequences is called a DFCW. Each pulse is divided into a number of subpulses equal to the length of the code sequence. The DFCW_LFM waveform is defined as in Liu et al. (2008), where B is the DFCW bandwidth, T is the subpulse time duration and k is the frequency slope, k = B/T, chosen following Liu et al. (2008) so that grating lobes are eliminated. Cost function The cost function is the key parameter for the waveform optimization. The peak sidelobe ratio (PSLR) and the integrated sidelobe ratio (ISLR) determine the correlation properties. The PSLR is the ratio of the amplitude of the peak sidelobe to that of the main lobe and is expressed in decibels. This parameter governs the detection of weak targets when they are covered by strong ones. The autocorrelation and crosscorrelation PSLR are defined in terms of A(S_t, n) and C(S_t, S_q, n), the aperiodic autocorrelation function of the t-th waveform and the crosscorrelation function of the t-th and q-th waveforms, respectively, where t ≠ q, t = 1, 2, …, N, q = 1, 2, …, N, with n = 1, 2, …, N for PSLR_A,n and n = 1, 2, …, N(N − 1) for PSLR_C,n. The ISLR is the ratio of the summed energy of the sidelobes to the energy of the main lobe in the pulse compression function. The autocorrelation and crosscorrelation ISLR are defined analogously, where t ≠ q, t = 1, 2, …, N, q = 1, 2, …, N, n = 1, 2, …, 2N − 1. System model The N waveforms of length M are transmitted and reflected by a target and clutter. At the receiver, N × R waveforms are recovered and passed through a receiving filter for target detection. The K (K ≥ N) secondary data vectors and the primary data share the same covariance structure. The covariance matrix is trained on the clutter statistics of the K secondary data. r_i and r_ik, i = 1, 2, …, R, k = 1, 2, …, K, are the primary and secondary data of the received signal, respectively. The primary data received by the radar at the i-th antenna are given by r_i = αS + c_i + n_i, where S_{N×M} = [a_1, . . . , a_N] ∈ C^{N×M} is the transmit code matrix and a_n = [a_n1, a_n2, . . . , a_nM]^T ∈ C^{M×1} is the transmit codeword of the n-th antenna, with M the length of the code word; the superscript T stands for the transpose of a matrix. The target scattering properties are represented by α, the target scattering coefficient, generated randomly with complex values, and those of the clutter by c_i, the clutter vector. An additive complex Gaussian noise vector is n_i. The target scattering is given by α = κ δ(t − τ), where τ < 2d/c, d is the radial span of the target and c is the speed of light. The reflection coefficients κ of the individual scatterers are generated randomly and are complex valued. Additionally, a set of K (K ≥ N) secondary data vectors is necessary to train the clutter statistics for the implementation of the orthogonal adaptive optimization algorithm. The secondary data vectors are defined as r_ik = c_ik + n_ik, where n_ik is the additive complex Gaussian noise of the secondary vector and c_ik is the secondary clutter vector. As the resolution of a radar system increases, the clutter no longer follows a Gaussian distribution (Fay et al. 1997; Trunk 1973; Jakeman and Pusey 1976; Gini et al. 2002; Hu et al. 2006). Fitting a distribution to sea clutter is a modelling challenge. The models proposed are Weibull (Fay et al.
1997), log-normal (Trunk 1973), K (Jakeman and Pusey 1976) and compound Gaussian (Gini et al. 2002) distributions. These models do not satisfactorily match real sea clutter; their limitation stems from its non-stationary character. The Tsallis distribution (Hu et al. 2006) is used to model the sea clutter, known as K-distributed clutter. This K-distribution sea model has been verified against original amplitude data of sea clutter and is the best available distribution for sea clutter (Ward 1981). The compound Gaussian random vector c_i is given by the product c_i = α_i β_i, where the texture α_i is a non-negative random variable and the speckle component β_i is a correlated complex circular Gaussian vector. The compound Gaussian clutter is sampled from the K-distribution. The noise covariance matrix is given by R_n = E[n n^H], where the superscript H denotes the conjugate transpose of a matrix and E[·] is the expectation operator. The matched filter output y at the receiver is of size (1 × N), where h, also of size (1 × N), is the impulse response of the matched filter at the receiver. The SINR, SNR and SCR at the filter output follow from y. The objective is to maximize the SINR subject to the constraint ||S||^2 ≤ 1. An orthogonal adaptive optimization algorithm The design of an extended-target-based waveform differs from the design of other types of waveform: it requires prior information on the clutter and target statistics. The transmitted waveform needs to adapt to the changing environment in a real-time scenario. The clutter information is estimated from the received signals before the target appears, collected from the K secondary data. The aim is to design a waveform which is best suited to the detection of the target of interest. Orthogonal waveforms have better correlation properties, which are critical for reducing mutual interference and increasing range resolution. Adaptive waveforms have the capacity to mitigate the clutter statistics and increase the detection capability. The orthogonal adaptive (optimal) waveform is developed from the proposed adaptive algorithm and offers better probability of detection and better resolution. The proposed algorithm guarantees improved SINR. The technique applied here is to optimize the filter based on the covariance matrix of clutter and noise; the target statistics and the waveform (an orthogonal waveform initially) are also considered. The covariance matrix is trained on the clutter statistics of the K secondary data; the clutter information is estimated from the received signals before the target appears. The covariance matrices of the filter and clutter statistics are estimated. Using these, the signal covariance matrix is estimated from the target, noise, clutter and filter covariance matrices. This waveform covariance matrix is then normalized and transmitted by the N × R MIMO radar system. The waveform thus obtained is the orthogonal optimal waveform. The objective is to maximize the SINR subject to the constraint ||S||^2 ≤ 1, optimizing by first solving for h in terms of S (Pillai et al. 2000). R_c,s = E[C S S^H C^H] and R_n = E[n n^H] are estimated from the clutter covariance matrix in (7) and the noise covariance matrix in (8). The maximization over h is achieved by minimizing min_h h^H (R_c,s + R_n) h such that h^H αS = 1. The solution to this is the Capon form (Capon 1969), h = µ (R_c,s + R_n)^{-1} αS, where µ is a scalar which satisfies the equality constraint.
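A minimal numerical sketch of this receive-filter solution in Python (a synthetic Hermitian covariance stands in for the trained clutter-plus-noise statistics, and the scalar µ is set to meet the constraint h^H αS = 1):

import numpy as np

rng = np.random.default_rng(1)
N = 8

# synthetic interference-plus-noise covariance R = R_c,s + R_n (Hermitian, PD)
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T + np.eye(N)

s = np.exp(2j * np.pi * rng.random(N)) / np.sqrt(N)   # unit-norm code, ||s|| = 1
alpha = 0.7 + 0.3j                                    # target coefficient
s_a = alpha * s

# Capon solution: h = mu * R^{-1} s_a with mu chosen so that h^H s_a = 1
Ri_sa = np.linalg.solve(R, s_a)
mu = 1.0 / np.real(s_a.conj() @ Ri_sa)
h = mu * Ri_sa

sinr = np.abs(h.conj() @ s_a) ** 2 / np.real(h.conj() @ R @ h)
print("constraint h^H(alpha s) =", np.round(h.conj() @ s_a, 6))
print(f"output SINR: {10 * np.log10(sinr):.2f} dB")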
The scalar µ can be neglected as it has no effect on the objective function. The objective function now becomes S^H T^H (R_c,s + R_n)^{-1} T S, which is a function of S only, subject to ||S||^2 ≤ 1. The adaptive algorithm is discussed below. Step 1 Initialization The transmitting matrix of the DFCW waveforms, as shown in Eq. (14), is modelled by code set sequences optimized using the M_ACO optimization algorithm (Reddy and Uttarakumari 2014) (not in the scope of this paper). The objective function of Eq. (4) is considered to minimize the ASP and CP values. S is a matrix of size (M × N). The extended target matrix is given in Eq. (15). The clutter covariance of size (1 × N) is estimated using Eq. (16), where K is the number of secondary data, N is the length of the code set sequence of the waveform and r_ik is the received signal from the primary and secondary data. The clutter covariance matrix of size (M × N) is estimated using Eq. (16) and is shown in Eq. (17). Step 2 Training of waveform and filter The filter coefficients are trained based on the covariance matrix of clutter and noise. R_c,s is obtained using the clutter covariance matrix given in Eq. (18) and the transmitting matrix of Eq. (14) for the DFCW waveforms. The transmitting waveform is adapted to the dynamic environment using Eqs. (20) and (21), and normalized using Eq. (22). The covariance matrix of clutter and filter is estimated using Eq. (20) and, finally, the waveform matrix is estimated using Eq. (22). The waveforms thus obtained are the orthogonal adaptive waveforms developed using the adaptive algorithm. Step 3 The obtained orthogonal adaptive waveform S is substituted into Eq. (10) to obtain the SINR, SCR and SNR values. The S matrix and the values of SINR, SCR and SNR are noted and Step 2 is repeated. Of these two results, the one with the higher cost-function value is retained. The process is repeated for 100 simulations. This adaptive algorithm yields orthogonally optimized DFCW_LFM waveforms with prior knowledge of the channel and clutter, i.e., the environment. The results of this algorithm are better than those of the iterative algorithm (Chen et al. 2009), owing to the collection of the secondary data over K samples. Using Eq. (17), the filter is initially trained on the clutter statistics without target statistics. Figure 1 illustrates the adaptive algorithm used to generate the adaptive waveform. Initially the orthogonal waveforms are generated using the optimization algorithm (Reddy and Uttarakumari 2014) (not in the scope of this paper) and then transmitted through the channel. The performance at the receiver degrades due to the clutter. Hence, to increase the performance, the waveforms are modified based on the clutter and target statistics, and the filter coefficients are adapted based on the covariance matrix of clutter, the target statistics and the waveforms. The waveforms are orthogonally optimized based on the covariance matrix of noise, clutter and filter together with the target statistics. The adaptive algorithm adapts the waveform to the rapidly changing environment, increasing the SINR, SCR and SNR values.
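Since Eqs. (14)-(22) are not reproduced in the text, the Python sketch below is only a schematic of this loop: a sample covariance trained on K secondary snapshots stands in for Eqs. (16)-(17), the waveform update is reduced to the eigenvector maximizing S^H R^{-1} S (the target convolution matrix T is dropped), and placeholder Gaussian clutter replaces the compound Gaussian model.

import numpy as np

rng = np.random.default_rng(2)
N, K, iters = 8, 32, 5
alpha = 1.0                                          # target coefficient

s = np.exp(2j * np.pi * rng.random(N)) / np.sqrt(N)  # Step 1: initial code

for it in range(iters):
    # Step 2a: train the covariance on K target-free secondary snapshots
    X = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
    R = X.conj().T @ X / K + 1e-3 * np.eye(N)        # diagonal loading

    # Step 2b: waveform update -- maximize s^H R^{-1} s with ||s||^2 <= 1,
    # i.e. take the eigenvector of R with the smallest eigenvalue
    w, V = np.linalg.eigh(R)
    s = V[:, 0]

    # Step 2c: filter update (Capon form from the previous section)
    h = np.linalg.solve(R, alpha * s)
    h /= (alpha * s).conj() @ h

    # Step 3: record the SINR for this iteration
    sinr = np.abs(h.conj() @ (alpha * s)) ** 2 / np.real(h.conj() @ R @ h)
    print(f"iteration {it}: SINR {10 * np.log10(sinr):.2f} dB")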
GLRT In a low-resolution maritime radar system, the clutter is modelled as a complex Gaussian process. As the radar resolution increases, the clutter no longer behaves as a Gaussian model and can be described by a non-Gaussian clutter model with heavy-tailed clutter distributions. Using the maximum likelihood estimation (MLE) method, the unknown parameters, such as the clutter power level and the RCS of the target, are estimated. To cancel the clutter and make the detector fully adaptive, the primary data covariance matrix is assumed to be known initially. The secondary data covariance matrix is then derived and used in place of this covariance matrix. Thus, the dynamic decision-based detector, the generalized likelihood ratio test (GLRT) detector (Cui et al. 2010), is developed. The GLRT detector shows excellent detection performance against compound Gaussian clutter for high-resolution MIMO radars. The clutter has an exponentially correlated covariance structure with covariance matrix R_o, the (i, j) element of which is ρ^|i−j|, where ρ is the one-lag correlation coefficient. The power spectral density of the clutter is generally located in the low-frequency region, and the clutter spread is controlled by ν, the parameter ruling the shape of the distribution. To analyse the probability of detection with the orthogonal and adaptive waveforms, the parameters considered are P_fa = 10^−4, N = 4, M_T = 4, M_R = 4, ρ = 0.9, K = 32 and ν = 0.5. The GLRT and Gaussian-clutter GLRT (GC-GLRT) detectors are used to check the performance of the orthogonal optimal waveforms in terms of probability of detection when the SCR is low. Results The simulation was carried out in MATLAB for a 4 × 4 MIMO system with an extended target. Four sets of orthogonal DFCW_LFM codes of sequence length 4 were generated using the modified ant colony optimization (M_ACO) algorithm (Reddy and Uttarakumari 2014). The orthogonal waveforms are then optimized using the adaptive algorithm as explained in "System model". The PSLR and ISLR are evaluated for each sequence and are tabulated along with the sequences in Table 2. The autocorrelation sidelobe peak (ASP) and crosscorrelation peak (CP) for the corresponding waveforms are shown in Table 3. These results show that the waveforms remain orthogonal, with good autocorrelation and crosscorrelation, even when the clutter is very spiky. The correlation properties are very important for MIMO applications. Low crosscorrelation sidelobe levels are critical for reducing mutual interference, maximizing independent information and facilitating high range resolution. In Table 4, the average ASP and CP values are tabulated for the orthogonal, iterative and adaptive waveforms. These results clearly show an improvement in the ASP and CP values compared with the orthogonal and iterative methods. There is a drastic reduction in sidelobes, and the waveforms are more uncorrelated in the presence of compound Gaussian clutter and an extended target. The proposed adaptive algorithm is compared with the existing iterative algorithm (Chen et al. 2009) on the basis of the SINR, SCR and SNR performance measures for the extended target model, as shown in Figs. 2, 3 and 4, respectively, in the presence of compound Gaussian clutter. The oscillations in Figs. 2, 3 and 4 are due to the random behaviour of the compound Gaussian clutter and the extended target scattering. Improvements of 3, 4 and 25 dB are observed in SINR, SCR and SNR, respectively, using the adaptive algorithm over the iterative method (Chen et al. 2009) in the presence of compound Gaussian clutter and an extended target. From Table 5 it can be concluded that the SINR, SNR and SCR values of the orthogonally optimal waveforms generated using the adaptive algorithm are better than those of the iterative algorithm in the presence of compound Gaussian clutter and an extended target. The orthogonal adaptive waveforms generated by the adaptive algorithm in the presence of clutter and an extended target are subjected to the GLRT and GC-GLRT (Cui et al.
2010) to check the performance of these waveforms in terms of probability of detection when the SCR is low (Fig. 4 shows the SNR plot of the adaptive and iterative algorithms for the extended target in clutter). The waveforms developed by the adaptive algorithm show better detection performance even for lower SCR when the GLRT and GC-GLRT (Cui et al. 2010) are adopted, as clearly shown in Figs. 5 and 6. Thus, the proposed algorithm also outperforms the iterative algorithm at low SCR.
4,117.4
2015-12-22T00:00:00.000
[ "Engineering", "Computer Science" ]
Quantized current blockade and hydrodynamic correlations in biopolymer translocation through nanopores: evidence from multiscale simulations We present a detailed description of biopolymer translocation through a nanopore in the presence of a solvent, using an innovative multiscale methodology which treats the biopolymer at the microscopic scale, combined with a self-consistent mesoscopic description of the solvent fluid dynamics. We report evidence for quantized current blockade depending on the folding configuration and offer detailed information on the role of hydrodynamic correlations in speeding up the translocation process. Biopolymer translocation through nanoscale pores holds the promise of efficient and improved sensing for many applications in biotechnology, and possibly ultrafast DNA sequencing [1,2,3]. Recent advances in the fabrication of solid-state nanopores [4,5] have spurred detailed experimental studies of the translocation process, with DNA as the prototypical biopolymer of interest [6]. Computer simulations that can account for the complexity of the biomolecule motion as it undergoes translocation, as well as its interaction with the environment (the nanopore and the solvent), are crucial in elucidating current experiments [7,8] and possibly inspiring new ones. Here, we study the dynamical, statistical and synergistic features of the translocation process of a biopolymer through a nanopore by a multiscale method based on molecular dynamics for the biopolymer motion and mesoscopic lattice Boltzmann dynamics for the solvent. We report evidence for quantized current blockade depending on the folding configuration (single- or multi-file translocation), in good agreement with recent experimental observations [7]. Our simulations show the significance of hydrodynamic correlations in speeding up the translocation process. Nanopores are an essential element of cells and membranes, controlling the passage of molecules and regulating many biological processes such as viral infection by phages and inter-bacterial DNA transduction [9]. The last two decades have witnessed the emergence of artificial solid-state nanopores as potential devices for sensing biomolecules through novel means [6]. One of the most intriguing possibilities is ultra-fast sequencing of DNA by measuring the electronic signal as the biomolecule translocates through a nanopore decorated with electrodes [3]. While this goal still remains elusive, a number of detailed studies on DNA translocation through nanopores have been reported recently [7,8]. These experiments typically measure the blockade of the ion current through the nanopore during the time it takes the molecule to translocate, which provides statistical information about the biomolecule motion during the process.
Numerical simulation of the translocation process provides a wealth of information complementary to experiments, but is hindered by the very large number of particles involved in the full process: these include all the atoms that constitute the biomolecule, the molecules and ions that constitute the solvent, and the atoms that are part of the solid membrane in the nanopore region. The spatial and temporal extent of the full system on atomic scales is far beyond what can be handled by direct computational methods without introducing major approximations. Some universal features of translocation have been analyzed by means of suitably simplified statistical schemes [10], non-hydrodynamic coarse-grained or microscopic models [11,12,13], and other mesoscopic approaches [14]. Many atomic degrees of freedom, and especially those of the solvent and the membrane wall, are uninteresting from the biological point of view. The problem naturally calls for a multiscale computational approach that can elucidate the interesting experimental measurements while coarse-graining the less important degrees of freedom. We have developed a multiscale method for treating the dynamics of biopolymer translocation [15] and performed an extensive set of numerical simulations, combining constrained molecular dynamics (MD) for the polymer motion with a lattice Boltzmann (LB) treatment of the solvent hydrodynamics [16]. The biopolymer transits through a nanopore under the effect of a localized electric field applied across the pore, mimicking the experimental setup [8]. The simulations provide direct computational evidence of quantized current blockade and confirm the experimentally surmised multiple-file translocation: the molecule passes through the pore in a multi-stranded fold configuration when the pore is sufficiently wide. The simulations offer detailed information about several experimentally difficult issues, in particular the role of hydrodynamic correlations in speeding up the translocation process. A three-dimensional box of size N_x h × N_y h × N_z h lattice units, with h = ∆x the spacing between lattice points, contains the solvent and the polymer. We take N_x = 2N_y, N_y = N_z; a separating wall is located in the mid-section of the x direction, x = h N_x / 2. We use N_x = 100 and N_0 = 400, where N_0 is the total number of beads in the polymer. At the center of the separating wall, a cylindrical hole of length l_hole = 10h and diameter d_p is opened. Three different pore sizes (d_p = 5h, 9h, 17h) have been used in the current simulations. Translocation is induced by a constant electric field acting along the x direction, confined to a cylindrical channel of the same diameter as the hole and of length l_p = 12h along the streamwise (x) direction. All parameters are measured in units of the lattice Boltzmann time step and spacing, ∆t and ∆x, respectively, which are both set equal to 1. The MD time step is five times smaller than ∆t. The pulling force associated with the electric field in the experiments is q_e E = 0.02 and the temperature is k_B T/m = 10^−4. The monomers interact through a Lennard-Jones 6-12 potential with parameters σ = 1.8 and ǫ = 10^−4, and the bond length between the beads is set at b = 1.2. The solvent is set at a density ρ_LB = 1, with a kinematic viscosity ν_LB = 0.1 and a drag coefficient γ = 0.1.
We chose the separation d between the beads to be equal to the persistence length of double-stranded DNA, that is 50 nm, and define the lattice spacing to be d/1.2 = 40 nm. The hole diameter is 3∆x. The repulsive interaction between the beads and the wall (with parameter σ_w = 1.5∆x [17]) leaves an effective hole of size equal to ∼ 5 nm. Having set the value of ∆x, we choose the time step so that the kinematic viscosity matches that of water, ν_w = ν_LB ∆x²/∆t, with ν_w the viscosity of water (10^−6 m²/s) and ν_LB the numerical value of the viscosity in LB units; this procedure gives ∆t ∼ 160 ps, with ν_LB = 0.1. In order to ensure numerical stability, the relation γ∆t < 1 must be satisfied. Having established the value of ∆t, we need to adjust the value of the drag coefficient accordingly, γ < 6 × 10^9 s^−1. This is significantly smaller than an estimate of the friction based on Stokes' law for DNA [18], which is equivalent to an underdamped system, or an artificially inflated bead mass. This approach is consistent with the coarse graining of the time evolution in the coupled LB-MD scheme.
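The unit mapping just described can be written out explicitly; the Python sketch below uses only values given in the text.

# Map lattice-Boltzmann units to physical units by matching the viscosity
# of water, as described in the text.
dx = 40e-9            # lattice spacing: d / 1.2 with d = 50 nm, rounded to 40 nm
nu_LB = 0.1           # kinematic viscosity in LB units
nu_water = 1e-6       # kinematic viscosity of water, m^2/s

dt = nu_LB * dx**2 / nu_water          # time step from viscosity matching
print(f"dt = {dt * 1e12:.0f} ps")      # ~160 ps

# numerical stability requires gamma * dt < 1
gamma_max = 1.0 / dt
print(f"drag coefficient bound: gamma < {gamma_max:.1e} 1/s")   # ~6e9 s^-1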
In Fig. 1(a) we present the number of pore-resident beads N_r(t) as a function of time for a narrow (d_p = 5h), mid-sized (d_p = 9h) and large (d_p = 17h) pore, with h the mesh spacing of the lattice Boltzmann simulation, for representative (fastest, slowest and average-speed) trajectories. Simulations are repeated over an ensemble of 400 realizations of different initial conditions and for total polymer lengths up to N_0 = 400. Time is measured in units of t^E_1, the time it would take the polymer to translocate if the monomers were to proceed in single-file configuration at the drift speed; this speed is given by v_E = q_e E/γm, with q_e and m the charge and mass of the monomer, E the external electric field and γ the hydrodynamic drag. This gives t^E_1 = bN_0/v_E = 12N_0, and the number of monomers in the pore for single-file translocation is N_1 = 10 for the parameters used here.

Fig. 1(a) clearly shows the highly non-linear dynamics of the translocation process. In the initial stage of the translocation, the nanopore gets populated, with the number of resident monomers significantly overshooting the single-file value N_1; the horizontal dashed lines at heights qN_1 indicate q-file (q = 1, 2, 3, ...) translocation. The range of q explored by the translocation trajectories grows approximately with the cross-section of the pore, going from q ∼ 2 for the smallest pore, d_p = 5h, up to q ∼ 8 for the largest one, d_p = 17h. Note that these values correspond to about half the maximum allowed q-number, q_max ∼ d_p/b. The fastest events correspond to the largest q value observed, while the slowest events correspond to essentially q = 1 throughout the translocation. It is also noteworthy that the translocation time typically exceeds the single-file value, t^E_1, except for the fastest events; for the most probable events q ∼ 2 for all pore sizes, indicating that conservative monomer-monomer interactions produce an effective slow-down compared to a single Langevin particle subject to a constant electric drive and frictional drag γ.

Fig. 1(a) also presents the current blockade in all three pores for the most probable event in each case, that is, the event with a translocation time close to the peak of the distribution over all translocation times. The current blockade is proportional to the number of monomers in the pore per unit area and appears to occur in well-defined (quantized) steps. Specifically, these blockades are calculated from the difference between the area of the resident beads, π(σ/2)², and the total area of the pore, π(d_p/2)². In order to investigate the quantization of the current blockade, we monitored the distribution of N_r(t) at time-frame intervals of 100 steps. The resident monomers block the current across the channel, so that N_r(t) conveys a direct measure of the current drop associated with the biopolymer passage through the nanopore. The corresponding histograms P(N_r, t) for the three pore sizes are shown in Fig. 1(b). At early times, these histograms exhibit a multi-peaked structure, which is a clear signature of multi-file translocation. As time passes, the multiple peaks recede in favor of a single-peak distribution close to the single-file value N_1 = 10. This was found to be a stable attractor for every simulated configuration. Collecting all results for the average number of resident monomers N̄_r as a function of the translocation time t_x, for the three pore sizes studied, we find a simple relationship, shown in Fig. 2.
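The single-file reference scales can be checked with a few lines of arithmetic. Note that the quoted t^E_1 = 12N_0, together with q_e E = 0.02 and γ = 0.1, implies an effective monomer mass m = 2 in lattice units; this is our inference, since m is not stated in the text.

```python
b, l_p, N0 = 1.2, 12.0, 400
qeE, gamma = 0.02, 0.1

N1 = l_p / b                 # beads resident in the pore, single file -> 10.0
vE = b * N0 / (12 * N0)      # from t^E_1 = b*N0/vE = 12*N0  ->  vE = 0.1
m_eff = qeE / (gamma * vE)   # inferred effective monomer mass -> 2.0 (our inference)
tE1 = b * N0 / vE            # single-file reference time -> 4800 LB steps for N0 = 400
```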
Experiments [7] have reported that the average number of resident monomers N̄_r in each translocation event varies approximately inversely with the duration of translocation t_x: N̄_r ∝ 1/t_x. The single-file asymptote N_1 = 10 (q = 1) at long times, t ≫ t^E_1, is evident. The short-time asymptote, reaching up to 4 < q < 5, corresponds to ultrafast translocations (t < t^E_1) occurring in the case of the large-diameter pore, d_p = 17h. These results are intuitively reasonable, since large resident numbers imply that more monomers cross the pore per unit time, hence the translocation becomes faster. The results also support the notion of N_r(t) as a measure of the time-rate of the translocation, dN_T/dt ∝ N_r, from which the inverse proportionality between N̄_r and t_x is a direct consequence of ∫_0^{t_x} [dN_T/dt] dt = N_0 = const. In this expression, N_T(t) is the number of translocated monomers at time t.

The simulations reveal that solvent correlated motion makes a substantial contribution to the translocation energetics. The role of hydrodynamic correlations is best highlighted by computing the work done by the moving fluid on the polymer (we call this the synergy, W_H) over the entire translocation process, as compared to the case of a passive fluid at rest: W_H = γm Σ_i ∫_0^{t_x} u_i · v_i dt, where v_i is the velocity of monomer i and u_i is the fluid velocity at the position of monomer i. For the sake of comparison, it is also instructive to contrast W_H with the corresponding work done by the electric field, W_E = q_e E Σ_i ∫_0^{t_x} v_{x,i} dt, where the sum extends over the resident monomers only, since the electric field is applied at the pore region only.

These statistically averaged values of W_H and W_E reveal a number of interesting features (see Fig. 3). First, W_H is always positive, clearly showing that hydrodynamic correlations provide a cooperative background, as compared to the case of a passive "ether" medium (u = 0). Second, we observe that W_E has a much narrower distribution of values than W_H, reflecting the ordered structure of the biopolymer as it passes through the nanopore, as compared to its off-pore morphology. It is useful to introduce the work done by the electric field on molecules which translocate single-file and proceed through the pore at speed v_E, W^E_1 = q_e E N_1 v_E t^E_1 = b q_e E N_1 N_0. In the absence of any other interaction, a q-file translocation at speed v_E would complete in a time t_x(q) = t^E_1/q under an electric work qW^E_1. In the present simulations, W^E_1 = 0.12N_0, giving W^E_1 = 48 for N_0 = 400. Interestingly, the distribution of W_E values is highly peaked at a value very close to W^E_1. The observation that W_E ∼ W^E_1 implies that qv_x(q)t_x(q) ≃ v_E t^E_1, and since the simulations show that t_x(q) > t^E_1, the conclusion is that v_x(q) < v_E/q, indicating that collective motion of the monomers slows down the process.

A major asset of numerical simulations for the study of translocation processes is the direct access to visualization of the morphology of the translocating chain.
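Since the two work integrals were lost in this extraction and are reconstructed above from the surrounding definitions, the sketch below shows how such diagnostics would be accumulated over discretized trajectories. The drag-coupling prefactor γm is our reading of the definition, not a quote from the original, and the array layout is purely illustrative.

```python
import numpy as np

def hydrodynamic_synergy(v, u, gamma, m, dt):
    """W_H = gamma*m * sum_i int_0^tx u_i . v_i dt, accumulated over a trajectory.
    v, u: arrays of shape (steps, N, 3) with monomer and local fluid velocities."""
    return gamma * m * dt * np.einsum("tic,tic->", u, v)

def electric_work(vx, resident, qeE, dt):
    """W_E = q_e E * sum over resident monomers of int v_x dt.
    vx: (steps, N) streamwise velocities; resident: (steps, N) boolean pore mask."""
    return qeE * dt * np.sum(vx * resident)

# Single-file reference value quoted in the text: W^E_1 = 0.12*N0 -> 48 for N0 = 400.
WE1 = 0.12 * 400
```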
As an example, we show in Fig. 4 a typical "snapshot" at a time when about 65% of the monomers have already passed through the pore of a translocating 2-folded chain of N_0 = 400 beads. In the same figure we show for comparison an event for the same length, but for single-file translocation (unfolded chain) through a very narrow (d_p = 3h) and shallow (l_p = 1h) pore. In addition to the polymer conformation, we show isocontours of the magnitude of the hydrodynamic synergy density, w_H(r) = γm Σ_{i∈B(r)} u_i · v_i, which is a local (in both space and time) version of the total synergy W_H defined in Eq. (2), with B(r) a grid cell centered around location r = (x, y, z). The contours of w_H(r) illustrate the cooperative nature of the hydrodynamic field, with regions of high co-moving flow surrounding the translocating polymer and assisting its motion. This is suggestive of the notion of an "effective" polymer, dressed with the hydrodynamic synergy field, which acts as a self-consistent lubricant, helping the polymer to negotiate a faster passage through the nanopore.

We further investigate this issue by inspecting the average (over an ensemble of 400 realizations) translocation time, t_x, as a function of the polymer length, with and without hydrodynamics. The results are shown in Fig. 5: hydrodynamics consistently accelerates the translocation by roughly 30%. More intuitively, hydrodynamics literally renormalizes the diameter of the pore: as is clearly visible in Fig. 5, a pore of diameter d_p = 5h for a bare polymer (without the hydrodynamic field) is essentially equivalent to a pore of almost double diameter, d_p = 9h, for the hydrodynamically dressed polymer. In order to assess the degree of correlation between the translocation dynamics of the "dressed" polymer and the actual one, we have measured the translocated specific synergy (synergy per monomer), defined as w_H(t) = W_H(t)/N_T(t), with N_T(t) the number of translocated monomers at time t. Clearly, any implicit time-dependent functional dependence of the form w_H(t) = w_H(N_T(t)) would indicate that mass and synergy translocate in a synchronized manner. We find that the ratio u · v/kT ∼ 5, reflecting the fact that the solvent locally "follows" the monomer and providing a measure of the relative importance of synergistic versus thermal forces. Our results show that w_H(t) is essentially constant throughout the translocation process. This implies a direct proportionality between the translocated synergy and the number of translocated beads and supports the notion that the "dressed" and the actual polymer proceed in full synchronization across the nanopore.
In conclusion, by using a new multiscale methodology based on the direct coupling of constrained molecular dynamics for the solute biopolymers with a lattice Boltzmann treatment of solvent dynamics, we have been able to confirm a number of experimental observations, such as a direct relation between quantized current blockades and multi-folded polymer conformations during the translocation process. In particular, the simulations reveal an intimate connection between polymer and hydrodynamic motion, which provides a cooperative background for the translocating molecule and thus results in a significant acceleration of the translocation process. Such an acceleration can also be interpreted as the outcome of a renormalization of the actual polymer geometry into an effective one, more conducive to translocation. This opens up exciting prospects for the development of optimized nano-hydrodynamic devices based on the fine-tuning of hydrodynamic correlations. As an example, one may envisage multi-translocation chips, whereby multiple molecules would translocate in parallel across membranes with an array of pores. The optimization of such devices will require control of solvent-mediated molecule-molecule interactions to minimize destructive interference between translocation events.

FIG. 1: Number of resident beads versus time for three different pore sizes, d_p = 5h, 9h, 17h (h = lattice spacing), and N_0 = 400. (a) The fastest (minimum time, blue), slowest (maximum time, red) and average-speed (most probable time, green) translocation events; the insets show the current blockade for the duration of an event with average speed (green), with the current normalized to the open-pore value (1). (b) Histogram P(N_r, t) of the distribution of N_r with time: short-time trajectories show multi-file character, reaching up to q ∼ 2-8 in the initial stage of the translocation depending on the pore size; long-time trajectories show little departure from the single-file configuration.

FIG. 2: Scatterplot of the average resident number N̄_r versus translocation time (in units of t^E_1) for the ensemble of translocation events, for three values of the pore diameter, d_p = 5h, 9h, 17h, and N_0 = 400.

FIG. 3: Statistical distribution of the work performed by the hydrodynamic and electric fields during translocation events. The vertical dotted line corresponds to the work W^E_1 done by the electric field on polymers that translocate single-file.

FIG. 4: Left panel: a typical two-folded polymer configuration (d_p = 9h) at a time when 65% of the N_0 = 400 beads have already translocated from right to left; colored contours show the magnitude of the corresponding hydrodynamic synergy field (only five of the nine wall layers are shown). Right panel: a single-file translocation event for a narrow and shallow pore (d_p = 3h, l_p = 1h), with 60% of the beads translocated, and the corresponding magnitude of the synergy.
Expression of genes involved in carbohydrate-lipid metabolism in muscle and fat tissues in the initial stage of adult-age obesity in fed and fasted mice

Abstract C57Bl mice exhibit impaired glucose metabolism by the late adult age under standard living conditions. The aim of this study was to evaluate white adipose tissue (WAT), brown adipose tissue (BAT), and skeletal muscle expression of genes involved in carbohydrate-lipid metabolism at postpubertal stages preceding the late adult age in C57Bl mice. Muscle mRNA levels of uncoupling protein 3 (Ucp3) and carnitine palmitoyltransferase 1 (Cpt1) (indicators of FFA oxidation), WAT mRNA levels of hormone-sensitive lipase (Lipe) and lipoprotein lipase (Lpl) (indicators of lipolysis and lipogenesis), muscle and WAT mRNA levels of the type 4 glucose transporter Slc2a4 (indicators of insulin-dependent glucose uptake), and BAT mRNA levels of uncoupling protein 1 (Ucp1) (indicator of thermogenesis) were measured in fed and 16 h-fasted mice in three age groups: 10-week-old (young), 15-week-old (early adult), and 30-week-old (late adult). Weight gain from young to early adult age was not accompanied by changes in WAT and BAT indexes and biochemical blood parameters. Weight gain from early to late adult age was accompanied by increased WAT and BAT indexes and decreased glucose tolerance. Muscle Ucp3 and Cpt1 mRNA levels and WAT Lipe and Slc2a4 mRNA levels increased from young to early adult age and then sharply decreased by the late adult age. Moreover, the BAT Ucp1 mRNA level decreased in the late adult age. Fasting failed to increase muscle Cpt1 mRNA levels in late adult mice. These transcriptional changes could contribute to impaired glucose metabolism and the onset of obesity in late adult mice during normal development.

Introduction In humans and rodents, the prevalence of obesity increases from birth until middle age (Barzilai et al. 1998; Facchini et al. 2001; Mizuno et al. 2004). In laboratory mice, middle age includes the period from 9 to 12 months (Flurkey et al. 2007; Jacobson 2002; Rusli et al. 2016). However, long before middle age, at the late adult age (6 months), mice demonstrate impaired glucose metabolism, in which insulin and blood glucose concentrations increase (Stenbit et al. 1997) and sensitivity to insulin decreases (Mizuno et al. 2004). The physiological mechanisms causing dysregulation of carbohydrate metabolism in the late adult age during normal development are unknown. White and brown adipose tissues and skeletal muscle are known to be the main peripheral metabolic organs. It has been reported that fasting regulates the transcription of many genes involved in carbohydrate-lipid metabolism in these tissues in rodents (Camps et al. 1992; De Lange et al. 2006; Sánchez et al. 2009). One can assume that both the basal transcription and the transcriptional response to fasting of these genes change with age. To understand the complex series of events that occur during postpubertal development, we examined age-related changes in gene expression in metabolic organs (white and brown adipose tissues and skeletal muscles) in fed and fasted C57Bl mice at postpubertal stages preceding middle age. Body weight, triacylglyceride (TG), free fatty acid (FFA), insulin and glucose plasma concentrations, and blood glucose concentrations during an oral glucose tolerance test (OGTT) were considered as indicators of carbohydrate and lipid metabolism.
The following parameters were measured in fed and 16 h-fasted mice: muscle mRNA levels of uncoupling protein 3 (Ucp3) and carnitine palmitoyltransferase 1 (Cpt1) (indicators of FFA oxidation); WAT mRNA levels of hormone-sensitive lipase (Lipe) and lipoprotein lipase (Lpl) (indicators of lipolysis and lipogenesis); muscle and WAT mRNA levels of the glucose transporter Slc2a4 (indicator of insulin-dependent glucose uptake); and BAT mRNA levels of uncoupling protein 1 (Ucp1) (indicator of thermogenesis). This study shows that C57Bl mice during normal development have increased WAT and BAT indexes and decreased glucose tolerance at the late adult stage. Our findings suggest that these changes may be caused by an age-related decline in enzyme system activities for β-oxidation of FFA in muscles, TG lipolysis in WAT, and thermogenesis in BAT. Our data provide new information on the mechanisms underlying obesity and dysregulation of carbohydrate-lipid metabolism prior to middle age in mice.

Ethics approval All experiments were performed in accordance with the "European Convention for the Protection of Vertebrate Animals used for Experimental and other Scientific Purposes" and the Russian national instructions for the care and use of laboratory animals. The protocols were approved by the Independent Ethics Committee of the Institute of Cytology and Genetics, Siberian Branch, Russian Academy of Sciences (protocol No 35 of October 26, 2016). All efforts were made to minimize animal suffering and reduce the number of animals used.

Animals and experimental protocol C57Bl mice were bred in the vivarium of the Institute of Cytology and Genetics (Siberian Branch, Russian Academy of Sciences, Novosibirsk). The mice were housed under a 12:12-h light-dark regimen at an ambient temperature of 22°C. The mice were provided ad libitum access to commercial mouse chow (Assortiment Agro, Moscow region, Turacovo, Russia) and water. Male mice were separated from mothers at 28 days old and housed individually until treatment. The animals were randomly divided into three age groups: 10 weeks (young mice), 15 weeks (early adult mice), and 30 weeks (late adult mice), according to the classification scheme of Flurkey et al. (2007). At the appropriate age, mice were weighed at 10:00, food deprived from 18:00 to 10:00, weighed again after night fasting (access to water remained ad libitum) and subjected to OGTT (12-13 mice per group). To perform the OGTT, glucose was administered orally (2 mg/g body weight) after fasting from 18:00 to 10:00, and blood was sampled from the tail vein before and 15, 30, 60, and 120 min after glucose administration. Blood glucose concentrations were determined as described below. Four days after the OGTT, mice of each age group were divided into two subgroups: control and fasting (6-7 mice per subgroup). In the fasting group, mice were deprived of food from 18:00 to 10:00 (access to water was ad libitum) and killed by decapitation within a few seconds without anesthesia. Trunk blood was collected in plastic tubes with EDTA and chilled on ice. Plasma was separated by centrifugation and frozen at −20°C until assays. Perigonadal WAT and interscapular BAT were immediately dissected, weighed, and quickly frozen in liquid nitrogen for later measurement of gene expression. Adipose tissue mass indexes were calculated as the ratio of adipose tissue mass to body weight (BW). Samples of thigh muscle were also collected and frozen.
Biochemical assays Blood glucose concentrations were determined with a glucometer (One Touch Basic Plus, Lifescan, Russia). Plasma concentrations of glucose, insulin, FFA, and TG were measured using commercial kits (Fluitest GLU, Analyticon Biotechnologies, Lichtenfels, Germany, for glucose; Rat/Mouse ELISA kit, EMD Millipore, Missouri, USA, for insulin; DiaSys Diagnostic Systems GmbH, Holzheim, Germany, for FFA; and DAIKON-DC, Pushchino, Russia, for TG). Insulin concentrations were not measured in the plasma of fasted mice, as some samples were lost due to a technical failure (power shortage) beyond our control.

Statistical analysis All data were expressed as means ± SEM. Two-way ANOVA was used to compare all data (except blood glucose from the OGTT and plasma insulin levels) with age (10, 15, and 30 weeks) and experimental conditions (fed and fasted) as explanatory factors. Blood glucose concentrations during the OGTT were analyzed by two-way repeated-measures ANOVA with age (10, 15, and 30 weeks) and time after glucose administration (0, 15, 30, 60, and 120 min) as explanatory factors. Plasma insulin concentrations were measured only in fed animals and were analyzed by one-way ANOVA with age (10, 15, and 30 weeks) as the explanatory factor. Duncan's multiple-range test was used for post hoc comparisons between groups. The STATISTICA 6 software package (StatSoft) was used for all analyses. Differences were considered statistically significant at P < 0.05.

Plasma TG concentrations increased with age (F(2,27) = 7.6, P < 0.01): in fed and fasted 30-week-old mice they were higher than in fed and fasted 10-week-old mice (P < 0.05 in both cases) (Fig. 1D). Fasting showed a trend toward age-dependent effects on plasma TG concentrations based on the age × fasting interaction (F(2,37) = 3.0, P < 0.07). Fasting significantly decreased plasma TG concentrations (P < 0.05) only in early adult mice (Fig. 2A). Age and fasting did not affect plasma FFA concentrations (Fig. 1E). Plasma glucose concentrations in fed mice were unchanged with age. Compared to the fed group, plasma glucose levels were significantly decreased in fasted animals of all ages (P < 0.001 in all cases) (Fig. 1D). Plasma insulin concentrations in fed mice increased from 10 to 30 weeks of age (F(2,26) = 6.6, P < 0.05) and were higher in 30-week-old mice than in 10-week-old mice (P < 0.01) (Fig. 2A). The dynamics of blood glucose concentrations during the OGTT depended on the age of the mice (time × age interaction; F(8,170) = 3.3, P < 0.01) (Fig. 2B). In 30-week-old mice, blood glucose levels were higher than in 15-week-old mice at the 30 and 60 min time points (P < 0.01 in both cases). Age tended to influence the mean area under the curve (AUC), which represents the index of glucose intolerance (F(2,26) = 3.0, P < 0.06); in particular, the AUC in 30-week-old mice was somewhat higher than in 15-week-old mice. These data collectively suggest that glucose tolerance was reduced in late adult mice compared to younger mice.
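The paper does not spell out how the AUC index was computed; a common choice, shown in the hedged sketch below with purely illustrative glucose values, is the trapezoidal rule over the OGTT sampling times used in the study.

```python
import numpy as np

t = np.array([0, 15, 30, 60, 120])                 # min after glucose administration
glucose = np.array([5.0, 12.5, 15.0, 11.0, 7.0])   # mmol/L, hypothetical mouse only

auc = np.trapz(glucose, t)   # glucose-intolerance index, in mmol/L * min
print(f"AUC = {auc:.0f} mmol/L*min")
```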
Discussion Researchers have accumulated considerable data showing that during normal development in humans (Daviglus et al. 2003; Mizuno et al. 2004; Yoneshiro et al. 2011) and rodents (Gruenewald and Matsumoto 1991; Jacobson 2002; Sasaki 2015), middle-aged individuals (defined as 40-50 years in humans [Mizuno et al. 2004] and 9-12 months in mice [Flurkey et al. 2007; Gruenewald and Matsumoto 1991]) exhibit a higher incidence of obesity and impaired glucose metabolism compared to younger ages. Our study of metabolic indexes in mice from three age groups, young (10 weeks), early adult (15 weeks) and late adult (30 weeks) (Flurkey et al. 2007), revealed that age-related changes in carbohydrate-lipid metabolism occur in mice long before middle age, that is, at the late adult stage. During normal development, late adult mice show evidence of the onset of obesity, including a twofold increase in WAT and BAT indexes, increased plasma triglyceride and insulin levels, and decreased glucose tolerance. The mechanisms causing changes in energy metabolism at the late adult age have yet to be identified.

The results obtained in this study revealed that in fed mice, mRNA levels of genes controlling FFA oxidation in muscles (Ucp3, Cpt1) and lipolysis (Lipe) and glucose uptake (Slc2a4) in WAT showed considerable age-related changes: they increased from young to early adult age and decreased from early to late adult age. Similar age-related changes in gene expression in muscles and WAT in mice have not yet been reported, and the mechanisms are unknown. The increased expression of these genes in muscles and WAT of early adult mice compared to young mice is likely caused by age-related activation of several hormone systems, especially androgenic testis function. In male C57Bl mice, there is a significant increase in plasma androgen level from 10 to 15 weeks of age (Osadchuk et al. 2016). Androgens are known to activate the growth hormone (GH)/insulin-like growth factor (IGF1) axis (Cummings and Merriam 1999). Together, androgens and the GH/IGF1 axis hormones activate anabolic pathways (Yakar and Isaksson 2016) and energy substrate metabolism (Davidson 1987; Kelly and Jones 2013; Richelsen et al. 2000; Varlamov et al. 2015). However, the age-related decrease in gene expression in muscles and WAT cannot be caused by androgenic activity, as blood testosterone levels and testicular production in late adult mice remain as high as in early adult mice (Osadchuk et al. 2016); thus, these factors will require additional study.

Interestingly, the rise and subsequent decline in expression of the studied genes in muscles and WAT closely correspond to the age dynamics of voluntary physical activity described for C57Bl mice (Figueiredo et al. 2009). According to Figueiredo et al., voluntary physical activity levels peak at 14-15 weeks of age and then decline during the subsequent 10 weeks, reaching a new plateau of activity at approximately 25-30 weeks of age. A correlation between physical activity and Cpt1 and Ucp3 gene expression in muscles, as well as the intensity of lipolysis in WAT, has been shown in several studies. In rats and C57Bl mice, physical activity increases Cpt1 mRNA and protein expression (Niu et al. 2010; Shen et al. 2015) and Ucp3 mRNA and protein expression (Tsuboyama-Kasaoka et al. 1998; Watt et al. 2004); in WAT, it enhances lipolysis and the level of phosphorylated HSL (Higa et al. 2014; Ogasawara et al. 2010). Apparently, the up-regulation of gene expression in muscle (Ucp3, Cpt1) and WAT (Lipe, Slc2a4) in early adult mice was an adaptive response aimed at increasing energy production in a period of intensive growth and reproductive and physical activity. The down-regulation of gene expression in late adult mice contributed to decreased FFA oxidation and the induction of fat storage in WAT and BAT.
The decrease in BAT mRNA levels of Ucp1 (a molecular marker of energy expenditure for thermogenesis) detected in late adult mice (as compared to younger mice) could also contribute to obesity (Keipert et al. 2014). Our study revealed that fed late adult mice, concurrently with the onset of obesity, demonstrated impaired glucose metabolism, including increased plasma insulin levels and decreased glucose tolerance compared to younger mice, and decreased expression of the Slc2a4 gene in WAT (compared to early adult mice). Glut4 is an insulin-dependent glucose transporter whose activity influences glucose uptake at the level of the whole organism. It has been shown that an age-related reduction in WAT Slc2a4 gene expression is associated with the development of decreased glucose tolerance and insulin resistance (Hofmann et al. 1991), while Slc2a4 overexpression improves insulin resistance (Carvalho et al. 2005). It can be assumed that in the late adult stage glucose tolerance in mice was also reduced due to an increase in the proportion of WAT, because adipocyte products are known to suppress intracellular insulin signaling pathways (Ye 2013). The data on impaired glucose metabolism at the late adult age are in agreement with the results of other studies demonstrating increased insulin and blood glucose concentrations (Stenbit et al. 1997) and decreased sensitivity to insulin in 30-week-old C57Bl mice (Mizuno et al. 2004).

The age-related decrease in Ucp3 gene expression in muscles and Slc2a4 in WAT observed in fed late adult mice had no effect on the transcriptional responses of these genes to fasting compared to the responses in the other age groups. During fasting, Ucp3 gene expression in mice sharply increased at all ages, which corresponds to data obtained in other studies with rodents (De Lange et al. 2006; Sánchez et al. 2009) and humans (Millet et al. 1997). Activation of Ucp3 gene expression is an important component of adaptation to fasting, since UCP3 lowers the mitochondrial membrane potential, protects muscle cells against an overload of fatty acids, and reduces excessive production of reactive oxygen species (Amat et al. 2007). Fasting is known to cause not only a metabolic load but also emotional stress, and it increases glucocorticoid levels in peripheral blood (Bazhan et al. 2017; Viscarra and Ortiz 2013), which stimulate Ucp3 gene expression in mice (Amat et al. 2007; Nagase et al. 2001). It seems that the response to fasting-induced stress in mice at the studied ages did not differ, resulting in similar increases in Ucp3 gene expression in muscles.

Transcriptional responses to fasting of the Cpt1 gene in skeletal muscle in late adult mice differed considerably from those of younger mice. In response to fasting, young mice significantly increased muscle Cpt1 gene expression. This result is in accordance with findings from other studies of rat muscles (De Lange et al. 2006). In early and late adult mice, fasting failed to change Cpt1 mRNA levels. We believe that in early adult mice the absence of stimulation can be explained by a "ceiling" effect, that is, the Cpt1 mRNA level in fed mice was already as high as in young fasted mice. Fasting did not increase the muscle Cpt1 mRNA level in early adult mice because gene transcription was already at, or near, its peak due to activation induced by age-related factors. The absence of change in late adult mice is believed to represent an age-related impairment of the response to a specific factor activated during fasting, probably FFA.
Persistently low muscle Cpt1 gene expression during fasting could impair adaptation based on the use of fatty acids as an energy source in muscles. Adult C57Bl mice are widely used to study different aspects of carbohydrate-lipid metabolism. It should be noted that considerable age-related changes in gene expression in muscles, WAT, and BAT occur at postpubertal stages preceding the late adult age, and that the onset of obesity at the late adult age can influence such results. Thus, this study for the first time revealed that during normal development, already at the late adult age, expression of genes involved in TG hydrolysis, WAT glucose uptake, muscle FFA oxidation, and BAT thermogenesis sharply decreases (compared to early adult ages). However, conclusions about the role of enzymes of peripheral metabolic organs in impaired lipid and glucose metabolism at the late adult age can only be drawn after measuring the expression of the respective proteins and enzyme activities. The study of mechanisms triggering obesity during normal development in mice is relevant because obesity at a later age predisposes to life-threatening conditions such as insulin resistance, type 2 diabetes, and cardiovascular disease.
Enzymatic treatment of soy protein isolates: effects on the potential allergenicity, technofunctionality, and sensory properties

Abstract Soybean allergy is of great concern and continues to challenge both consumers and the food industry. The present study investigates the enzyme-assisted reduction of the major soybean allergens in soy protein isolate using different food-grade proteases, while maintaining or improving the sensory attributes and technofunctional properties. SDS-PAGE analyses showed that hydrolysis with Alcalase, Pepsin, and Papain was most effective in the degradation of the major soybean allergens, with proteolytic activities of 100%, 100%, and 95.9%, respectively. In the course of hydrolysis, the degree of hydrolysis increased, and Alcalase showed the highest degree of hydrolysis (13%) among the proteases tested. DSC analysis confirmed the degradation of the major soybean allergens. The sensory experiments, conducted by a panel of 10 panelists, considered the overall sensory properties as well as the bitterness of the individual hydrolysates. In particular, Flavourzyme and Papain were attractive due to a less pronounced bitter taste and an improved sensory profile (smell, taste, mouthfeeling). Technofunctional properties showed good solubility at pH 7.0 and 4.0, an emulsifying capacity of up to 760 mL g^-1 (Flavourzyme), and improved oil-binding capacities, while the water-binding properties were generally decreased. Increased foaming activity for all proteases, up to 3582% (Pepsin), was observed, whereas lower foaming stability and density were found. The hydrolysates could potentially be used as hypoallergenic ingredients in a variety of food products due to their improved technofunctional properties and a pleasant taste.

Introduction Due to its considerable amounts of high-quality proteins, soy has found wide usage in processed foods for many years. It is applied in numerous food products such as baked, cereal, and meat-based products as well as hypoallergenic infant formula and vegetarian foods to provide specific functional properties such as improved texture, moisture and fat retention, emulsification, and protein fortification (Sun 2011). However, one of the major drawbacks of soy-containing food products is the allergenic potential of soy. Soybean is listed among the "big 8" most allergenic foods, comprising those foods that cause 90% of all immunoglobulin E (IgE)-mediated allergenic food reactions (FDA 2004). Soy allergies can provoke mild symptoms but can also cause life-threatening reactions, ranging from severe enterocolitis and atopic eczema to immediate IgE-mediated systemic multisystem reactions (Shriver and Yang 2011). Small regions of allergenic proteins, known as epitopes, are responsible for the allergenic reaction by interacting with a corresponding antibody (FDA 2004). Even though 42 allergenic proteins have been identified as related to soybean allergy, just the two storage proteins glycinin and β-conglycinin are considered major soybean allergens (Holzhauser et al. 2009; Amnuaycheewa and de Mejia 2010). Numerous investigations into the elimination or hypoallergenization of soy ingredients and products have been conducted in recent years.
Various thermal and nonthermal processing steps have been applied to combat soybean allergy, including microwave treatment, ultrafiltration, high-pressure processing, pulsed ultraviolet light, pulsed electric fields, irradiation, high-intensity ultrasound, and genetic or chemical modifications (Shriver and Yang 2011; Verhoeckx et al. 2015). However, most of these methods could not sufficiently destroy the responsible allergenic epitopes, or the methods have not yet been investigated in detail. A more effective approach to reduce the allergenicity of soy proteins is their enzymatic hydrolysis, as has been demonstrated in different studies (Yamanishi et al. 1996; Wilson et al. 2005). Besides the reduction or elimination of the allergenic potential, the destruction of soy proteins by enzymatic hydrolysis is also accompanied by a loss of, or change in, their functional properties such as solubility as well as foaming, emulsifying, and gelation properties (De la Barca et al. 2000; Ortiz and Wagner 2002; Jung et al. 2004; Tsumura et al. 2005; Yin et al. 2008). In addition, enzymatic hydrolysis can lead to the formation of bitter-tasting peptides, which also impedes the utilization of hydrolysates in food (Ishibashi et al. 1988; Saha and Hayashi 2001). Up to now, a feasible technology to reduce soy allergenicity has not been implemented in the food industry. As a consequence, total avoidance of soy-containing products is mandatory to prevent allergic reactions. However, this is difficult due to the ubiquitous presence of soy proteins in food products. As enzymatic hydrolysis is one of the most effective approaches, it should be investigated in more detail. Former studies have described the effects of proteases either on the level of allergenicity and organoleptic properties or on technofunctionality. Literature data on the simultaneous determination of the reduction in the allergenic potential and the alteration of the functional as well as organoleptic properties are not available. This knowledge is a prerequisite for the development of a high-quality soy-based food ingredient. The present study was conducted to (1) assess the effectiveness of different proteases in the degradation of the major soybean allergens (glycinin, β-conglycinin), (2) investigate the effects on the sensory perception with a specific emphasis on bitter taste, and (3) determine the denaturation profile (DSC) and the technofunctional properties of the resulting hydrolysates. The degree of allergenic protein degradation was evaluated and quantified by SDS-PAGE and by analysis of the degree of hydrolysis. The organoleptic characteristics, with a specific emphasis on bitter taste, were identified. The technofunctional characteristics (protein solubility, emulsifying, foaming, water- and oil-binding capacity) of the obtained hydrolysates were investigated and their correlation with the observed degradation of the major soybean allergens was examined.

Preparation of soy protein isolates (SPI) Soybeans were dehulled with an underflow peeler (Streckel & Schrader KG, Hamburg, Germany), classified in an air-lift system (Alpine Hosakawa AG, Augsburg, Germany) and flaked using a roller mill (Streckel & Schrader KG).
Soybean flakes were defatted with n-hexane in a percolator (volume 1.5 m^3, e&e Verfahrenstechnik GmbH, Warendorf, Germany) and flash-desolventized with n-hexane (400-500 mbar) prior to steam desolventization. For the preparation of SPI, soy flakes were mixed with acidic water (pH 4.5; 1:8 w/v flakes-to-water ratio). The suspension was stirred for 1 h at room temperature and separated with a decanter (4400 U min^-1) for 60 min at 4°C. For protein extraction, the solid phase was stirred in alkaline water (1:8 w/v), which was adjusted to pH 8.0 with 3 mol L^-1 NaOH. After 60 min of extraction, the suspension was separated (4400 U min^-1, 60 min) to obtain a clear protein extract, which was adjusted to pH 4.5 with 3 mol L^-1 HCl (room temperature) to precipitate the proteins. After separation by centrifugation at 5600 g for 130 min, the isoelectrically precipitated protein was neutralized with 3 mol L^-1 NaOH, pasteurized (70°C, 10 min) and spray-dried.

Enzymatic hydrolysis of SPI Enzymatic hydrolysis of SPI was performed with different proteases (Table 1) in thermostatically controlled reaction vessels. For this purpose, SPI was dispersed in deionized water (5% w/w) using an Ultraturrax for 1 min at 5000 U min^-1. The obtained slurry was adjusted to the enzyme-specific temperature and pH value (Table 1). After adding the enzyme (E/S ratio, see Table 1), the mixture was stirred, maintaining the enzyme's optimum temperature and pH value. Aliquots of 100 mL were taken at time intervals of 10, 30, 60, and 120 min to obtain SPI hydrolysates with different degrees of hydrolysis. Reaction conditions for Papain were chosen according to the method of Tsumura et al. (2004). Enzymes were inactivated at 90°C for 20 min in a water bath. Control SPI dispersions were prepared under the same incubation conditions and inactivation treatment, but without enzyme addition. The samples were frozen at −50°C and lyophilized. All experiments were performed in duplicate.

Degree of hydrolysis using the o-phthaldialdehyde (OPA) method The degree of hydrolysis (DH) was calculated by determining the free α-amino groups with o-phthaldialdehyde (OPA) using serine as standard (Nielsen et al. 2001). The percentage DH was calculated as follows: DH = h/h_tot × 100%, where h_tot is the total number of peptide bonds per protein equivalent and h is the number of hydrolyzed bonds. The h_tot factor was 7.8 (based on soy) according to Adler-Nissen (1986). Six measurements were performed for each sample.

Molecular weight distribution applying sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) The molecular weight distribution of all samples was determined according to Laemmli (1970) using SDS-PAGE under reducing conditions. The samples were suspended in 1× Tris-HCl treatment buffer (0.125 mol L^-1 Tris-HCl, 4% SDS, 20% v/v glycerol, 0.2 mol L^-1 DTT, 0.02% bromophenol blue, pH 6.8), boiled for 3 min to cleave noncovalent bonds and centrifuged at 12,100 g for 4 min (Mini Spin, Eppendorf AG, Hamburg, Germany). The electrophoresis was performed on 4-20% midi Criterion™ TGX Stain-Free™ precast gels and the proteins were separated using the Midi Criterion™ Cell from Bio-Rad (Ismaning, Germany). A molecular weight marker (10-250 kDa, Precision Plus Protein™ Unstained Standard, Bio-Rad Laboratories Inc., Hercules, CA, USA) was additionally loaded onto the gel. Electrophoresis conditions were 200 V, 60 mA, 100 W at room temperature, and protein visualization was performed with the Criterion Stain-Free Gel Doc™ EZ Imager (Bio-Rad).
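As a worked illustration of the DH formula above, the following sketch converts an OPA reading into a DH value. The serine-NH2 calibration constants α and β are the soy-specific values from the Nielsen et al. (2001) method as we recall them and should be treated as assumptions rather than values quoted in this paper.

```python
def degree_of_hydrolysis(serine_nh2_meq_per_g, alpha=0.970, beta=0.342, h_tot=7.8):
    """DH = h / h_tot * 100, with h the hydrolyzed peptide bonds in meq/g protein.

    serine_nh2_meq_per_g: free amino groups measured by OPA, in meq serine-NH2 per
    g protein; h_tot = 7.8 for soy (Adler-Nissen 1986); alpha, beta: assumed values.
    """
    h = (serine_nh2_meq_per_g - beta) / alpha
    return 100.0 * h / h_tot

# Example: a reading of 1.33 meq/g gives DH ~ 13%, the maximum reported for Alcalase.
print(degree_of_hydrolysis(1.33))
```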
For differential scanning calorimetry (DSC), samples were heated at a rate of 2 K min^-1 in two cycles from 40 to 105°C. All samples were immediately rescanned after cooling down to 40°C to investigate reversibility. Peak denaturation temperatures (T_d), onset temperatures (T_onset), and the related enthalpies (∆H) were calculated by the TA Universal Analysis software. Triplicate determinations were done throughout.

Chemical composition The chemical composition (protein, ash, and dry matter) was determined as described by AOAC methods (AOAC 2005a,b). The protein contents were calculated based on the nitrogen content (N × 6.25) according to the Dumas combustion method (AOAC 2005b). Dry matter and ash content were analyzed in a thermogravimetric system (TGA 601, Leco Corporation, St. Joseph, MI) at 105 and 950°C, respectively.

Emulsifying capacity The emulsifying capacity (EC) was determined in duplicate as suggested by Wang and Johnson (2001). Protein solution samples of 1% (w/w) were prepared using an Ultraturrax® (IKA-Werke GmbH & Co. KG, Staufen, Germany) at 18°C. Rapeseed oil was added by a titration system (Titrino 702 SM, Metrohm GmbH & Co. KG, Herisau, Switzerland) at a constant rate of 10 mL min^-1 until phase inversion of the emulsion was observed, detected by continuous measurement of the emulsion's conductivity (conductivity meter LF 521 with electrode KLE 1/T, Wissenschaftlich-technische Werkstätten GmbH, Weilheim, Germany). The volume of oil needed for phase inversion was used to calculate the EC (mL oil per g sample).

Foaming activity, density, and stability Foaming activity was determined according to Phillips et al. (1987). Protein solution samples (5% w/w) were whipped using a Hobart 50-N whipping machine (Hobart GmbH, Offenburg, Germany) for 8 min. The ratio of the foam volume before and after whipping was used for the calculation of the foaming activity. The foaming density was measured by weighing a specified quantity of foam volume; the ratio of foam weight to foam volume was defined as the foaming density in g L^-1. The foaming stability was estimated as the percent loss of foam volume after 60 min.

Water- and oil-binding capacity Water-binding capacity (WBC) was analyzed according to the AACC 56-20 official method (AACC 2000). Oil-binding capacity (OBC) was determined using the method described by Ludwig et al. (1989).

Protein solubility Protein solubility was analyzed at pH 4.0 and 7.0 following the method of Morr et al. (1985). For each pH, 1 g of the sample was suspended in 50 mL of 0.1 mol L^-1 sodium chloride solution. The pH was adjusted using 0.1 mol L^-1 NaOH or 0.1 mol L^-1 HCl, while the suspension was stirred at ambient temperature for 1 h. Nondissolved fractions of the samples were separated by centrifugation at 20,000 g for 15 min. Afterward, the protein content of the supernatant was determined according to AOAC (2005b).

Training of the panelists A sensory panel consisting of 10 panelists had been trained for bitterness evaluation over 2 months (1 h per session, twice a week) using the DIN 10959 threshold tests with caffeine solutions at concentrations of 0, 0.025, 0.05, 0.075, 0.1, 0.125, 0.15, 0.175, 0.2, and 0.225 g L^-1, respectively. Since the bitter profile of caffeine, which was included to select bitter-tasters, is slightly different from that of a protein hydrolysate solution, an Alcalase hydrolysate was additionally included in the training sessions.
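For clarity, the foaming measures defined earlier in this section can be written out explicitly. The sketch below is our reading of those definitions; the exact normalization of the foaming activity is not spelled out in the text, so the simple after/before volume ratio used here is an assumption.

```python
def foaming_activity(volume_before_mL, volume_after_mL):
    """Foam volume after whipping relative to before, in percent (assumed form)."""
    return 100.0 * volume_after_mL / volume_before_mL

def foaming_density(foam_weight_g, foam_volume_L):
    """Foam weight per unit foam volume, in g/L."""
    return foam_weight_g / foam_volume_L

def foaming_stability(volume_t0_mL, volume_t60_mL):
    """Percent loss of foam volume after 60 min (lower loss = more stable foam)."""
    return 100.0 * (volume_t0_mL - volume_t60_mL) / volume_t0_mL
```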
The Alcalase hydrolysate was prepared by incubation of a 5% SPI dispersion with 0.5% Alcalase at pH 8.0 and 60°C for 3 h without pH adjustment. The hydrolysate was then diluted to obtain solutions of 0.05, 0.1, 0.25, 0.5, 1.0, 1.5, and 2.5 g L^-1, respectively.

Bitter taste evaluation A 10-cm line scale anchored from 0 (not detectable) to 10 (intense) was used. For scale calibration, Alcalase hydrolysates with concentrations of 1.0 and 2.5 g L^-1 were evaluated by the panel to correspond to bitter intensities of 5 and 10, respectively.

Profile analysis In addition to the determination of the bitter intensity, a profile analysis of the samples was obtained. A broad list of attributes characteristic of the individual samples was developed within the panel. The attributes in terms of smell ("fresh", "fruity", "beany"), taste ("sour", "salty", "bitter", "fresh", "beany"), and mouthfeeling ("mouthcoating", "astringent") were also rated on the 10-cm line scale. The attributes "fresh" and "fruity" are associated with the smell and taste of a lemon, whereas "beany" describes the soybean-like aroma. "Sour", "salty", and "bitter" are associated with the fundamental taste sensations elicited by acids, salt, and caffeine, respectively. "Mouthcoating" describes the degree of coating inside the mouth after swallowing, while "astringent" is the trigeminal sensation elicited by grapefruit juice.

Sample preparation Samples were mixed and stirred with tap water to prepare 2.5% (w/w) solutions. This sample concentration was found to be most appropriate for identifying and evaluating the attributes precisely. The pH was adjusted to 7.0 with 1 mol L^-1 NaOH. Each panelist was presented with eight samples (10 mL) per session, which were served to the panel in random order at room temperature in plastic cups coded by arbitrary three-digit numbers.

Sample evaluation Each sensory evaluation was conducted by the trained panel (performed in 10 sessions, 1 h each). Water and plain crackers were provided for palate cleansing in between. Sensory analyses were carried out in a sensory panel room at 21 ± 1°C. Solutions containing 2.5% SPI, 1.0% Alcalase hydrolysate, and 2.5% Alcalase hydrolysate were prepared as standards for each session. The assessors were instructed to evaluate bitterness and the attributes mentioned above relative to the bitterness and attributes of the standard solutions using the standard 10-cm line scale. Each panelist did a monadic evaluation of the samples at individual speed. Two replicate measurements were made for each sample, and replicates were randomized within the same session in order to avoid replicate effects.

Statistical analysis All data are expressed as means ± standard deviation of at least two independent measurements (n = 2). All chemical data were statistically analyzed by one-way analysis of variance (ANOVA) and means were generated and adjusted with the Bonferroni post hoc test using SPSS 20.0 (SPSS for Windows, SPSS Inc., Chicago, IL). Sensory data (n = 10) were also subjected to ANOVA with Tukey's HSD post hoc test. Statistically significant differences were considered at P < 0.05.

Results and Discussion The enzymatic hydrolysis of SPI, containing a dry matter of 94.4%, a protein content of 94.6%, and an ash content of 4.6%, was conducted in two parts. First, a screening of 10 proteases was carried out.
The DH, molecular weight distribution (SDS-PAGE), and the bitter taste were analyzed to estimate the degradation of the molecules as an indication of the reduction in the allergenic potential. Based on these results, selected proteases were investigated in more detail by determining the denaturation profile (DSC) as well as the technofunctional and sensory (profile analysis) properties.

Screening of different enzyme preparations Effect of the enzymatic treatment on the protein degradation Degree of hydrolysis (DH) The DH gives an initial indication of the change in molecular integrity and thus of the reduction in allergenic compounds, as presented in several studies (Kong et al. 2008; Tavano 2013). During protein hydrolysis, the large, complex-structured protein molecules are broken down into smaller peptides and specific amino acids. The DH was continuously monitored during the enzymatic treatment of SPI. As shown in Table 1, the unhydrolyzed SPI showed an average DH value of 2.1%. In the course of enzymatic hydrolysis, the DH increased significantly (P < 0.05). The highest DH value of 13% was achieved after treatment of SPI for 2 h with Alcalase, followed by DH values of 10.6%, 8.5%, 7.8%, and 6.8% using Pepsin, Flavourzyme, Corolase 2TS, and Corolase 7089, respectively. The lowest DH of 2.8% after a 2 h hydrolysis was achieved by Pancreatic Trypsin. This is probably attributable to the presence of the Kunitz trypsin inhibitor, inhibiting the proteolytic action of trypsin. The hydrolysis of the proteins was caused only by the enzyme activities, as an increase in the DH values was not observed in the reference experiments (no enzyme addition).

Electrophoretic analysis (SDS-PAGE) A further initial indication of a reduced allergenicity of the hydrolysates was obtained by SDS-PAGE analyses (Fig. 1A-E). Specific emphasis was given to the two major soybean allergens (glycinin, β-conglycinin) (Holzhauser et al. 2009; Amnuaycheewa and de Mejia 2010). In Figure 1, selected SDS-PAGE profiles are shown as examples. The unhydrolyzed SPI and the reference (no enzyme addition) presented typical electrophoretic patterns for soy proteins (Fig. 1A). The first three bands are the α′ (~67-72 kDa), α (~63 kDa), and β subunits (~47 kDa) of β-conglycinin. Glycinin is composed of two subunits, the acidic subunit ("A", ~29-33 kDa) and the basic subunit ("B") at about 22 kDa (Amnuaycheewa and de Mejia 2010). Already after a 10 min hydrolysis with Alcalase, β-conglycinin was completely decomposed, while small amounts of glycinin still remained within 30 min of hydrolysis. The acidic subunit was eliminated after 60 and 120 min of hydrolysis, respectively, while the basic subunit was not completely destroyed. Similar observations were obtained for the Pepsin preparation (Fig. 1B). The decrease in intensity of the acidic subunit of glycinin was more substantial for proteases such as Alcalase, Pepsin, and Papain than for the other proteases examined. In addition, an increasing reaction time led to a progressive disappearance of the basic subunit. This might be due to the fact that the basic subunit is located inside the glycinin complex and was therefore less exposed to hydrolysis. In contrast, the acidic subunit, which is at the exterior of the complex, was degraded by almost all proteases (Yin et al. 2008). Pepsin and Papain turned out to be the most effective enzyme preparations (Fig. 1B and C). Already after 10 min of hydrolysis, β-conglycinin and glycinin were completely decomposed.
A Papain concentration of 0.05% (data not shown) was also examined, which led to results similar to those observed for the 0.2% treatment, indicating the high efficiency of Papain (Tsumura et al. 2005). These results were not expected taking the findings of the DH experiments into account, as the DH values of the 0.2% and 0.05% Papain hydrolysates were relatively low, at 4.6% and 3.8% after 2 h, respectively. These differences might be caused by a weak reaction of the OPA reagent with the cysteine residues released during hydrolysis with Papain (a cysteine protease) (Chen et al. 1979). The SDS-PAGE profiles of the hydrolysates obtained with the other enzymes showed considerably deviating patterns in comparison to Alcalase, Papain, and Pepsin. The SDS-PAGE profiles of Corolase 2TS and Flavourzyme are shown as examples (Fig. 1D-E), as the SDS-PAGE profiles obtained with the other enzyme preparations are quite similar (data not shown). It could be shown that these enzymes slightly degraded β-conglycinin, but the glycinin subunits remained unchanged. Although Flavourzyme showed only slight changes in the SDS-PAGE patterns, its DH of 8.5% was, in contrast, high. This might be attributed to the fact that Flavourzyme contains exoproteases, which cleave small peptides at the ends of proteins, liberating groups that react with the OPA reagent.

Effects of the enzymatic treatment on the bitterness of SPI Due to the presence of strongly hydrophobic bitter peptides arising as natural degradation products of proteolytic reactions, enzymatic hydrolysates are often associated with a strong bitter taste (Adler-Nissen 1986; Ishibashi et al. 1988; Saha and Hayashi 2001; Sun 2011). Native SPI showed a bitter intensity of 2.8. The bitterness of all hydrolysates increased with increasing reaction time, with the exception of the hydrolysate prepared with Flavourzyme (Table 2). The bitter intensity of the Flavourzyme hydrolysate increased within the first hour of hydrolysis from initially 2.8 to 4.3, but decreased after 2 h to an intensity of 2.1, which is even lower than the bitterness of native SPI. Flavourzyme contains both endoprotease and exopeptidase activities. The latter can selectively release hydrophobic amino acid residues from the protein molecules, having a debittering effect (Saha and Hayashi 2001). The highest bitter intensity of 9.2 was achieved using Alcalase, followed by Corolase 2TS, Corolase 7089 and Neutrase with bitter intensities of 7.7, 7.6, and 7.1, respectively. The high bitter intensity of the hydrolysates produced by Alcalase is probably caused by the tendency of this enzyme to hydrolyze at hydrophobic amino acid residues. Thereby, nonpolar amino acid residues at the C-terminus of the resulting peptides remain and cause a relatively high bitterness (Adler-Nissen 1986; Ishibashi et al. 1988; Saha and Hayashi 2001; Sun 2011). Hydrolysis with 0.2% and 0.05% Papain for 120 min resulted in low bitterness intensities of 3.1 and 3.0, respectively. Hydrolysis applying the other enzyme preparations resulted in samples with bitter intensities in the range of 5.5 to 6.4 (Table 2). Among the proteases investigated, Alcalase, Pepsin, and Papain turned out to be most efficient in the degradation of proteins into small peptides, as evidenced by the DH (except Papain) and SDS-PAGE analyses (Table 1 and Fig. 1), while hydrolysis with Flavourzyme and Papain resulted in hydrolysates with the lowest bitter intensities (Table 2).
The most promising enzyme preparations with respect to a less bitter taste and an effective shift of the molecular weight distribution were analyzed in more detail. The enzymatic hydrolysis was repeated under the same reaction conditions as described for the screening experiments, but the incubation time was changed. Enzymatic hydrolysis with Alcalase, Flavourzyme, and Pepsin was performed for 120 min, the treatment with Corolase 7089 and Papain was conducted for 30 min, and that with Corolase 2TS for 10 min. For Papain, a lower enzyme concentration of 0.05% was applied due to the high reactivity of this enzyme preparation.

Electrophoretic analysis (SDS-PAGE) The individual bands of the glycinin and β-conglycinin subunits were quantified by Image Lab™ software (Bio-Rad, Hercules, CA, USA). The relative hydrolysis in relation to the unhydrolyzed fractions was calculated (Table 3). Alcalase, Pepsin, and Papain were the most efficient proteases for the overall degradation of the major allergens, with proteolytic activities of about 100%, 100%, and 95.9%, respectively (Table 3). Alcalase, Corolase 2TS, Pepsin, and Papain hydrolyzed the basic subunit of glycinin to varying degrees (Fig. 1 and Table 3). In general, glycinin was least degraded due to its molecular structure and the location of the basic subunit, which is buried in the interior of the glycinin complex (Yin et al. 2008). Hydrolysates prepared with Corolase 7089 and Flavourzyme showed smaller changes in the molecular weight distribution. A complete degradation of the α- and β-subunits was observed (Table 3), while the α′-subunit was reduced by 70.5% and 61.0%, respectively. However, the acidic and basic subunits of glycinin were only slightly affected.

Differential scanning calorimetry (DSC) DSC analysis was applied to examine the secondary and tertiary structural changes of SPI due to enzymatic hydrolysis, which can give additional evidence for the destruction of allergenic proteins. Figure 2 depicts characteristic DSC curves corresponding to unhydrolyzed SPI and three hydrolysates prepared with Flavourzyme, Corolase 7089, and Corolase 2TS, while all other hydrolysates exhibited no peaks, indicating complete denaturation of the proteins (data not shown). SPI showed two endothermic thermal transitions; the major peak denaturation temperatures (T_d) were at approximately 71.7°C (T_onset = 68.8°C) and 91.7°C (T_onset = 87.0°C), with denaturation enthalpies of 0.03 and 0.32 J g^-1, respectively. These results are consistent with previous reports, where the onset denaturation temperature is around 80-90°C for glycinin and 60-70°C for β-conglycinin (Renkema et al. 2002; Ahmed et al. 2006). Slight variations can be due to genotypic differences in the raw material or varied processing conditions, that is, temperature (Riblett et al. 2001). The Flavourzyme and Corolase 7089 hydrolysates were likely partially denatured, or rather partially degraded, since the first denaturation point decreased to 69.1 and 70.8°C, respectively, with enthalpies of 0.02 and 0.01 J g^-1. The enthalpy of the second denaturation point of the Flavourzyme hydrolysate, at about 95.2°C, was not significantly (P < 0.05) lower than that of native SPI, being 0.31 J g^-1, while a shift of the second denaturation temperature toward higher temperatures was detected. In contrast, the Corolase 7089 hydrolysate exhibited a denaturation point of about 96.1°C with a lower denaturation enthalpy of 0.10 J g^-1.
The Corolase 2TS hydrolysate showed one denaturation temperature at 93.5°C, and its denaturation enthalpy of 0.01 J g⁻¹ was significantly (P < 0.05) lower than that of SPI and of the Flavourzyme and Corolase 7089 hydrolysates. The β-conglycinin fraction was completely denatured, whereas the glycinin complex was only slightly affected, as evidenced by the decreased denaturation enthalpy. These findings are in good accordance with the SDS-PAGE analyses (Table 3). The Alcalase hydrolysate (Fig. 3) showed the highest bitter intensity of 8.2, and therefore the application of the Alcalase hydrolysate in food systems might be limited. In contrast, the Pepsin hydrolysate (Fig. 3) showed a predominantly "fresh" and "fruity" smell, but its "sour" taste and "astringent" mouthfeel were significantly (P < 0.05) more pronounced than for the other hydrolysates tested. The application of the Pepsin hydrolysate as a food ingredient might therefore be limited by its pronounced "sour" taste and "astringent" mouthfeel. Effects on the technofunctionality of SPI Technofunctional properties of proteins, including solubility, gelation, emulsifying, and foaming, denote the physicochemical properties that govern the behavior of proteins in the food matrix. Enzymatic hydrolysis modifies the functional properties of proteins (Were et al. 1997; De la Barca et al. 2000; Ortiz and Wagner 2002): it decreases the molecular weight, increases the number of ionizable groups, and exposes hydrophobic groups, which changes the physical and chemical interactions (Creusot et al. 2006). The soybean proteins glycinin and β-conglycinin largely determine the functional properties of SPI and differ considerably in functional properties such as emulsifying owing to their different molecular structures (Utsumi and Kinsella 1985). Protein solubility Solubility is the most important technofunctional property because of its considerable effect on other technofunctional characteristics, particularly gelation, foaming, and emulsifying, which depend on an adequate initial solubility of the proteins (Vojdani 1996). The solubility of all samples at pH 4.0 and 7.0 is shown in Figure 4. The unhydrolyzed SPI showed its minimum solubility of 5.0% at pH 4.0, the isoelectric point of soybean protein, but solubility was significantly increased after hydrolysis by all proteases. At pH 4.0, the hydrolysates prepared with Alcalase and Pepsin exhibited the highest solubilities of 77.4% and 84.3%, respectively. The highest solubility at pH 7.0, 91.3%, was achieved using Corolase 7089, followed by 90.5%, 84.5%, and 82.9% using Pepsin, Corolase 2TS, and Alcalase, respectively. It has been proposed that the reduction in the secondary structure of proteins, the release of smaller peptides, and the corresponding increase in ionizable amino and carboxyl groups are responsible for the increased solubility of hydrolysates, since they increase the interactions with water molecules (Adler-Nissen 1986; Ortiz and Wagner 2002). At pH 4.0 the solubility of all other hydrolysates was significantly (P < 0.05) lower, ranging from 30.3% to 42.1%, and at pH 7.0 it ranged between 56.2% and 58.3%. Emulsifying properties The emulsifying capacity (EC) of the unhydrolyzed SPI and the hydrolysates was determined. SPI had an EC of 660 mL g⁻¹, while all SPI hydrolysates (except those generated by Alcalase and Pepsin) showed significantly (P < 0.05) increased ECs.
The Flavourzyme, Corolase 7089, Corolase 2TS, and Papain hydrolysates had ECs of about 760, 730, 670, and 705 mL g⁻¹, respectively. Enzymatic hydrolysis has already been used to improve emulsifying properties (Wu et al. 1998; Jung et al. 2004). De la Barca et al. (2000) demonstrated an increased emulsification activity after enzymatic hydrolysis of soy protein, which is comparable to the present results. The improved emulsifying properties may be due to the degradation of large protein molecules, the exposure of hydrophobic groups, and the enhanced protein solubility, implying an improved protein surface activity and therefore a better emulsifying activity (Wu et al. 1998). However, the EC of the Alcalase and Pepsin hydrolysates decreased to 438 and 220 mL g⁻¹, respectively, significantly (P < 0.05) different from unhydrolyzed SPI. The reason for this is likely excessive protein hydrolysis, that is, extensive degradation to smaller peptides, as evidenced by the DH (Table 1) and the SDS-PAGE results (Table 3). The molecular structure of the protein might be altered, particularly with respect to its interfacial adsorptivity, together with a reduction in continuous-phase viscosity, which is essential for the ability to form emulsions. It has been reported that the EC of hydrolysates is closely related to the degree of hydrolysis, with a low DH (3-5%) increasing and a high DH (~8%) decreasing the EC (Achouri et al. 1998). The results obtained in this study do not entirely confirm these statements: a high DH does not always result in a reduced EC, as evidenced by the Flavourzyme hydrolysate, which had a high DH of about 9.4% but also the highest EC of 760 mL g⁻¹. Water- and oil-binding capacity The water-binding capacity (WBC) of almost all hydrolysates was significantly (P < 0.05) lower than that of the unhydrolyzed SPI. The hydrolytic action of proteases disrupts the protein network, which impairs the water-holding properties. The WBC decreased from an initial value of 2.6 mL g⁻¹ to 1.8, 0.9, and 0.2 mL g⁻¹ after hydrolysis with Flavourzyme, Pepsin, and Alcalase, respectively, while no WBC was observed for the Corolase 7089 and Corolase 2TS hydrolysates. However, the Papain hydrolysate showed a significantly (P < 0.05) higher WBC of 3.9 mL g⁻¹. Foaming properties The foaming properties are usually characterized in terms of foaming density, activity, and stability. Proteins in dispersion lower the surface tension at the air-water interface, thus creating a foam (Surowka and Fik 1992). As shown in Table 4, all hydrolysates presented an improved foaming activity. Enzymatic hydrolysis yields smaller peptides with improved foaming activity owing to their rapid diffusion to the air-water interface (Tsumura et al. 2005). Furthermore, native SPI has limited foaming ability due to its quaternary and tertiary structure, whereas hydrolyzed SPI has lost this tertiary structure, which leads to improved foaming activity (Yu and Damodaran 1991). Among the proteases studied, the highest foaming activity of 3582% was achieved after hydrolysis with Pepsin, while the hydrolysate prepared with Flavourzyme showed the lowest foaming capacity of 1201% among the hydrolysates. There is evidence of a trend toward increased foaming activity when the β-conglycinin fraction is degraded and the glycinin fraction becomes dominant, which is supported by the SDS-PAGE profiles (Fig.
1 and Table 3), where only the hydrolysates generated with Flavourzyme and Corolase 7089 showed a slight degradation of the β-conglycinin fraction. Although the foaming activities of the hydrolysates were higher than that of SPI, their stability and density decreased (Table 4). The most stable foam was obtained after hydrolysis with Flavourzyme, with a stability of 86%, close to that of native SPI (90%). For all other hydrolysates, the foaming stability was markedly decreased (Table 4). Foam stabilization requires some larger protein components, but only a few large peptides were found in the hydrolysates, which led to weak foaming stability. The trend of increased foaming activity coupled with decreased foaming stability has been reported in previous studies (Were et al. 1997; De la Barca et al. 2000; Tsumura et al. 2005). (Table note: Results are expressed as means ± standard deviation (n = 2); means with different letters within one column indicate significant differences (P < 0.05) following ANOVA with Bonferroni correction.) Conclusion The aim of this study was to investigate the effect of enzymatic hydrolysis with various proteases on the potential allergenicity, technofunctionality, and sensory properties of SPI. The results clearly demonstrate that enzymatic hydrolysis is an effective approach to reduce the level of allergenicity, while sensory and technofunctional properties can be improved depending on the protease used. According to the findings, Papain turned out to be the most appropriate protease for improving the technofunctionality and sensory characteristics while effectively decreasing the molecular weight of SPI. SDS-PAGE and the DH were used to examine the degradation of the soybean allergens, enabling a first evaluation of the level of allergenicity. As this is an indirect method, further research is required to gain detailed knowledge of the allergen structures as well as specific and reliable detection methods. Although the sensory analysis showed promising results, the bitter taste of the produced hydrolysates remains a challenge. Further investigations should focus on debittering the hydrolysates to expand their use in food systems. Studies on enzymatic hydrolysis with various combinations of exo- and endopeptidases and on other methods for reducing the level of bitterness and allergenicity are ongoing in our laboratory and might lead to the development of a hypoallergenic SPI with pleasant taste and good technofunctionality.
Comparative Analysis of Carbohydrate Active Enzymes in Clostridium termitidis CT1112 Reveals Complex Carbohydrate Degradation Ability Clostridium termitidis strain CT1112 is an anaerobic, Gram-positive, mesophilic, cellulolytic bacillus isolated from the gut of the wood-feeding termite, Nasutitermes lujae. It produces biofuels such as hydrogen and ethanol from cellulose, cellobiose, xylan, xylose, glucose, and other sugars, and therefore could be used for biofuel production from biomass through consolidated bioprocessing. The first step in the production of biofuel from biomass by microorganisms is the hydrolysis of the complex carbohydrates present in biomass. This is achieved through a repertoire of secreted or complexed carbohydrate active enzymes (CAZymes), sometimes organized in an extracellular organelle called a cellulosome. To assess the ability and understand the mechanism of polysaccharide hydrolysis in C. termitidis, the recently sequenced strain CT1112 of C. termitidis was analyzed for both CAZymes and cellulosomal components and compared to other cellulolytic bacteria. A total of 355 CAZyme sequences were identified in C. termitidis, significantly more than in other Clostridial species. Of these, high numbers of glycoside hydrolases (199) and carbohydrate binding modules (95) were identified. The presence of a variety of CAZymes involved in polysaccharide utilization/degradation suggests hydrolysis potential for a wide range of polysaccharides. In addition, dockerin-bearing enzymes, cohesin domains, and a cellulosomal gene cluster were identified, indicating potential cellulosome assembly. Introduction Increased concerns over global climate change and energy security, coupled with diminishing fossil fuel resources, have triggered interest in the development of alternative forms of fuel from renewable resources such as biomass [1]. Consolidated bioprocessing (CBP) for biofuel production offers the potential to reduce production costs and increase processing efficiencies when compared with alternative strategies for lignocellulose-to-ethanol conversion. This is because in CBP, enzyme production, cellulose hydrolysis, and fermentation are all carried out in a single step by microorganisms that express carbohydrate active enzymes (CAZymes) [2][3][4][5]. Various anaerobic cellulolytic Clostridium species are known to digest cellulose via an exocellular multi-enzyme complex called a cellulosome [6][7][8][9]. However, there are a few anaerobic cellulolytic Clostridium species, such as C. stercorarium and C. phytofermentans, that do not produce cellulosomes. These bacteria degrade cellulosic biomass by secreting enzymes into the environment [10,11]. Genomic studies have revealed that cellulosomes from different cellulolytic bacteria are complex and diverse in nature and architecture [6,[12][13][14]. The widely studied cellulosome of Clostridium thermocellum (Figure 1) consists of a central scaffoldin protein with varying numbers of cohesin domains, which bind enzymatic subunits through type I dockerin domains. The entire complex is bound to the cell surface by the interaction of its type II dockerin domain with the type II cohesin domain on the bacterial cell. The carbohydrate-binding module (CBM), usually of family 3, attaches the complex and the bacterial cell to the cellulosic substrate [15,16]. This allows concerted enzyme activity in close proximity to the bacterial cell, enabling optimal synergistic degradation of the substrate [17].
The cellulosome harbors a variety of carbohydrate active enzymes with different substrate specificities, such as endoglucanases, cellobiohydrolases, xylanases, pectinases, and other hydrolyzing enzymes. The mode of action of these enzymes is very similar to that of the free-enzyme systems of other cellulolytic bacteria, except that the free enzymes in most cases contain a CBM domain instead of a dockerin I domain, which targets the individual enzymes to the substrate [17]. The C. termitidis genome has recently been sequenced (GenBank accession number AORV00000000) [22], though investigation of its CAZyme content relative to other cellulolytic Clostridia has not yet been reported. CAZymes have a variety of functions within a cell, but are also involved in the biosynthesis and degradation of cellulose and other polysaccharides. The CAZy database (http://www.cazy.org/) [23] organizes CAZymes into six main classes: i) glycoside hydrolases (GHs), a large group of enzymes that hydrolyze the glycosidic linkages between two or more carbohydrates or between a carbohydrate and a non-carbohydrate molecule; ii) carbohydrate esterases (CEs), involved in the hydrolysis of ester bonds; iii) glycosyl transferases (GTs), which catalyze the formation of glycosidic bonds to form a glycoside; iv) polysaccharide lyases (PLs), which cleave glycosidic linkages in acidic polysaccharides by a beta-elimination mechanism; v) auxiliary activities (AAs) [24], redox enzymes that act in conjunction with other CAZymes to break down lignocellulose; and vi) carbohydrate binding modules (CBMs), non-catalytic protein domains that bind polysaccharides, bringing the biocatalyst into close and prolonged proximity with its substrate and thereby facilitating carbohydrate hydrolysis [25]. The objectives of the work described here were to: i) compare the CAZyme content encoded by the C. termitidis genome with those of selected representative cellulosome-forming (C. cellulolyticum H10, C. cellulovorans 743B, and C. thermocellum ATCC27405) and non-cellulosome-forming (C. phytofermentans ISDg and C. stercorarium DSM8532) anaerobic, cellulolytic Clostridium species; and ii) identify the carbohydrate degradative ability of C. termitidis based on CAZyme data. This will provide an understanding of the mechanism(s) of carbohydrate degradation in C. termitidis and help facilitate the design and development of novel industrially useful microorganisms. Figure 1. Cellulosome components of C. thermocellum. Enzymatic components (colored differently to indicate enzyme variety) produced by anaerobic bacteria contain a dockerin domain. Dockerins bind the cohesins of a non-catalytic scaffoldin, providing a mechanism for cellulosome assembly. Scaffoldins also contain a cellulose-specific family 3 CBM (cellulose binding module) and a C-terminal dockerin domain II that target the cellulosome to cellulose and the bacterial cell envelope, respectively. doi:10.1371/journal.pone.0104260.g001 Growth on xylan Clostridium termitidis CT1112 (DSM 5398), initially obtained from the American Type Culture Collection (ATCC 51846), was activated prior to experiments by passaging a 10% v/v inoculum on 1191 medium (as previously described [19]) containing 2 g/L HPLC-grade cell wall polysaccharide xylan from beechwood (X4252) (Sigma-Aldrich Canada Ltd., Oakville, ON). Cysteine hydrochloride (Sigma-Aldrich), at a concentration of 1 g/L, was used as a reducing agent.
Most of the reagents and chemicals for media were obtained from Fisher Scientific, with the exception of Bacto Yeast Extract, which was obtained from Becton Dickinson and Company. The pH of the media was set to 7.2. Time-point experiments were conducted in Balch tubes (Bellco Glass Co.) with a working volume of 27 mL. To maintain an anaerobic and sterile environment, tubes containing 2 g/L xylan and 1191 medium were sealed with butyl-rubber stoppers, crimped with aluminum seals, and then gassed and degassed (1:4 min) four times with 100% nitrogen (N2). Tubes were inoculated (10% v/v) with fresh, mid-exponential-phase cultures of C. termitidis and incubated for 36 h at 37°C. Three independent replicate samples (1.0 mL) were taken every 4 h. Cell growth was determined by monitoring changes in optical density at 600 nm using spectrophotometric analysis (Biochrom, Novaspec II) and by protein analysis using a modification of the Bradford method [26]. Briefly, aliquots of cultures were dispensed into micro-centrifuge tubes (Fisher Scientific) and centrifuged at 10,000 × g for 10 min to separate the pellets from the supernatants. The pellets were washed with 0.9% (wt/vol) sodium chloride and centrifuged for 10 min. The supernatant was discarded and the pellet was re-suspended in 1 mL of 0.2 M sodium hydroxide. Samples were incubated at 100°C for 10 min, and the supernatants were collected for Bradford analysis using Bradford reagent. Optical densities were measured at 595 nm (PowerWave XS, BIO-TEK). Genome source The C. termitidis sequence available at GenBank, with accession number AORV00000000, was used for this analysis [22]. Comparative analysis with other Clostridium species was conducted with genomes available on the Joint Genome Institute's IMG database using the IMG-ER platform [27]. The GenBank accession numbers are NC_009012, NC_011898, NC_014393, NC_010001, and CP003992 for C. thermocellum ATCC27405, C. cellulolyticum H10, C. cellulovorans 743B, C. phytofermentans ISDg, and C. stercorarium DSM8532, respectively. Phylogenetic placement Phylogenetic analyses were carried out to determine the relatedness of C. termitidis to other members of the Clostridia based on chaperonin 60 (cpn60) universal target sequences (the 549-567 bp region of the cpn60 gene), which were collected from the cpn60 database [28]. The phylogenetic tree was obtained using neighbor-joining [29] in MEGA version 4 [30]. Bootstrap tests with 1000 replications were conducted to examine the reliability of the interior branches. CAZyme annotation Translated protein sequences of C. termitidis were analyzed de novo for identification and annotation of CAZymes and assigned to carbohydrate active enzyme (CAZy) families using the CAZy pipeline [23], as described in Floudas et al. (2012) [31]. CAZymes of all other Clostridium species analyzed were accessed directly through the CAZy database [23] and compared manually. Homologous sequences were obtained by screening the CAZyme sequences of the Clostridia and applying BLASTP search tools in IMG-ER. Unless specified otherwise, the highest percentage identity and coverage were reported, based on hits with the lowest expect (e)-value (threshold 0.01). Sequence coverage was assessed manually by considering the amino acid (AA) sequence lengths of the query and the database target. Conserved domains of protein sequences were searched and analyzed using Reverse Position-Specific (RPS) BLAST [32] as evidence for function prediction.
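The CAZy annotation pipeline used in this study is not publicly distributed. As a rough stand-in, family counts of the kind reported below are often produced by scanning predicted proteins against the dbCAN HMM library with HMMER; the sketch below tallies families from an hmmscan --domtblout file. The file name and E-value cutoff are assumptions, and this is not the authors' pipeline.

```python
from collections import Counter

def count_cazy_families(domtblout_path, evalue_max=1e-5):
    """Tally CAZy families (e.g., GH5, CBM3) from hmmscan --domtblout output."""
    families = Counter()
    with open(domtblout_path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            cols = line.split()
            family = cols[0].replace(".hmm", "")  # target HMM name, e.g. GH5.hmm
            if float(cols[12]) <= evalue_max:     # per-domain i-Evalue column
                families[family] += 1
    return families

counts = count_cazy_families("c_termitidis_vs_dbCAN.domtblout")  # hypothetical file
by_class = Counter(f.rstrip("0123456789_") for f in counts)      # GH, GT, PL, CE, CBM, AA
print(by_class.most_common())
```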
Potential subcellular localization of the identified CAZymes was predicted by uploading FASTA AA sequences of the genes to the PSORTb 3.0 database [33] and using the final predictions. Growth characteristics Cell growth on 2 g/L xylan was quantified by measuring optical density (OD600) and protein concentration, the latter quantified using the modified Bradford method (Figure 2). Cultures showed no lag phase and reached stationary phase by 24 to 28 h, with a maximum average OD600 of 0.55 and an average protein concentration of up to 78 µg/mL. The generation time was approximately 5.5 h. Phylogenetic placement of C. termitidis To determine the evolutionary relationship between C. termitidis and other sequenced strains of cellulolytic Clostridium species, a phylogenetic tree was constructed based on the cpn60 universal target (UT) gene sequence. The 60 kDa chaperonin protein, encoded by the cpn60 gene, is a useful marker for strain identification and molecular phylogenetics [34]. It has been shown that the cpn60 UT sequence can differentiate even closely related isolates of the same bacterial species [35,36]. Cpn60 UT sequence alignments have been shown to correlate with whole-genome sequence alignments and to resolve ambiguities associated with 16S rDNA gene phylogeny in bacteria [35]. Phylogenetic analysis of cpn60 genes (Figure 3) showed that C. termitidis is phylogenetically most closely associated with its mesophilic counterpart C. cellulolyticum. Consistent with this, C. termitidis CAZymes show similarities with the CAZymes of C. cellulolyticum, as can be seen throughout the comparative analysis below. Genome annotation reveals high numbers of CAZymes in the C. termitidis genome compared to other cellulolytic Clostridium species Genome analyses revealed that C. termitidis has the largest genome and a significantly greater number of total genes (5389) and protein-encoding genes (5327) than the other members of the Clostridia analyzed in this study (Table 1), while C. stercorarium has the smallest genome, with only 2706 protein-coding genes. Putative CAZyme genes in C. termitidis CT1112 were analyzed de novo and compared to selected Clostridium species (Table 2). The C. termitidis genome encodes a total of 355 CAZyme domain sequences, much higher than the number of CAZyme domains found in the other Clostridium species analyzed. Of the CAZyme domains identified in C. termitidis, glycoside hydrolases (199) and CBMs (95) were the most abundant, while the numbers of PLs, CEs, and GTs were comparable to those found in other Clostridium species. Harboring a large number of CAZyme genes may not be surprising, considering the size of the genome and the number of genes it carries. Consistent with its larger genome size, C. termitidis has the greatest numbers of enzyme genes related to carbohydrate metabolism (264), glycan biosynthesis and metabolism (56), and central metabolism (623) compared with the other Clostridium species, suggesting differences in protein content related to sugar utilization and metabolism. Interestingly, of the 133 families of GHs currently identified in the CAZy database, the lowest number was seen in C. thermocellum, with 27 GH families. C. thermocellum is known to be a comparatively efficient biomass degrader, which identifies it as an attractive candidate organism for CBP [2]. Assignment of sequences into GH families is therefore not necessarily an indication of efficient degradation.
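The generation time reported under Growth characteristics above follows from a log-linear fit of exponential-phase OD600 readings; a minimal sketch is given below, with invented data points chosen only to land near a 5.5 h doubling time.

```python
import numpy as np

def doubling_time_h(times_h, od600):
    """Least-squares fit of ln(OD600) vs time over the exponential phase."""
    slope, _ = np.polyfit(times_h, np.log(od600), 1)  # slope = growth rate mu (1/h)
    return np.log(2) / slope

# Hypothetical exponential-phase readings (time in h, OD600):
t = np.array([4.0, 8.0, 12.0, 16.0])
od = np.array([0.08, 0.13, 0.22, 0.36])
print(f"doubling time ~ {doubling_time_h(t, od):.1f} h")
```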
While genomic analysis for the presence or absence of a particular CAZy gene can suggest the capabilities of the strain in question, extracellularly localized (cell-bound or secreted) gene products may be more informative for identifying complex carbohydrate hydrolysis capabilities. Table 3 shows the predicted extracellular GHs of C. termitidis. A comparative analysis of all the GHs, PLs, and CEs predicted to be localized extracellularly, based on PSORTb 3.0 predictions for the selected Clostridium species, is provided in Table S6 in File S1. Based on our subcellular localization predictions, C. termitidis potentially harbors a variety of extracellular CAZymes responsible for the hydrolysis of, among others, (hemi)cellulose, chitin, mannans, starch, and pectin. Below we attempt to correlate C. termitidis extracellular CAZymes with its polysaccharide utilization ability. Cellulose hydrolysis. Cellulose hydrolysis is generally achieved by the synergistic action of endoglucanases, exoglucanases, and β-glucosidases. Our analysis indicates that C. termitidis possesses all the enzymes needed to carry out this task. The functionally characterized GH48 of C. cellulolyticum is a cellulosomal processive cellulase with both exo- and endo-activities [37]. BLAST analysis shows that C. termitidis GH48 (Cter_0524) has high AA sequence similarity (74%) with this enzyme and 57% sequence identity with the AA sequence of the characterized exoglucanase CelS (GH48) of C. thermocellum [38]. Cter_0524 has an additional sequence for a dockerin I domain, making it a putative cellulosomal enzyme. As in most other mesophilic cellulolytic Clostridia [39][40][41][42][43], the C. termitidis GH48 forms part of a gene cluster (discussed below) that may be putatively linked to the cellulosome. Members of the GH9 family are mainly cellulases with both endo- and exo-glucanase activities [44][45][46]. CAZy analysis suggests putative endo- and exo-activities in the C. termitidis GH9 members, in addition to the ability to bind and hydrolyze both the crystalline and amorphous components of cellulose. Of the eleven extracellular GH9s identified in C. termitidis (Table 3), eight have dockerin I domains and are thus classified as putative cellulosomal enzymes. Five of these (Cter_0272, Cter_0518, Cter_0522, Cter_2830, and Cter_2831) have an appended CBM3 domain, which is known to bind crystalline cellulose [47]. Our analysis indicates that these are all endoglucanases with high (75-85%) AA sequence similarities to the corresponding C. cellulolyticum endoglucanases, and up to 77% AA sequence identity with their C. thermocellum counterparts. The dockerin-bearing Cter_0521 has a CBM4 domain, which directs the GH to the amorphous part of cellulosic substrates. BLAST analysis suggests this to be an exoglucanase, with 79% AA sequence identity to the C. cellulolyticum exoglucanase Ccel_0732 and 47% AA sequence identity to the characterized exoglucanase CelK (Cthe_0412) of C. thermocellum [48]. Family GH5 covers many enzyme activities relevant to biomass conversion, such as cellulases, mannanases, xylanases, xyloglucanases, and galactanases [49]. C. termitidis has eight extracellularly secreted GH5s (Table 3). Of these, four (Cter_0515, Cter_0519, Cter_1800, and Cter_0517) have dockerin domains and are annotated as endoglucanases. BLAST analysis shows high AA sequence identity with the corresponding C. cellulolyticum GH5 members.
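The identity and coverage figures quoted throughout this section are of the kind read off tabular BLASTP output. A minimal sketch of selecting the lowest-E-value hit per query at the 0.01 threshold used above is given below; the custom tabular format and file name are assumptions for illustration.

```python
# Assumes BLASTP was run with:
#   -outfmt "6 qseqid sseqid pident length qlen evalue"
def best_hits(blast_tab_path, evalue_max=0.01):
    """Best hit per query by lowest E-value, with percent identity and coverage."""
    best = {}
    with open(blast_tab_path) as fh:
        for line in fh:
            q, s, pident, alen, qlen, evalue = line.rstrip("\n").split("\t")
            evalue = float(evalue)
            if evalue > evalue_max:
                continue
            if q not in best or evalue < best[q][3]:
                coverage = 100.0 * int(alen) / int(qlen)  # query coverage, %
                best[q] = (s, float(pident), coverage, evalue)
    return best

for q, (s, pid, cov, ev) in best_hits("cter_vs_ccel.tsv").items():  # hypothetical file
    print(f"{q} -> {s}: {pid:.0f}% identity, {cov:.0f}% coverage (E={ev:.1e})")
```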
In addition, Cter_0515 shows 71% AA sequence identity with the characterized CelG (Cthe_2872) of C. thermocellum [50]. Cter_2349 and Cter_1107 (Table 3), annotated as endoglucanases, seem to be unique to C. termitidis, as BLAST analysis did not reveal significant AA sequence similarities with other Clostridium species. The multi-domain GH5 protein Cter_4441, an endoglucanase, has three C-terminal SLH domains, which putatively anchor it to the bacterial cell wall. Cter_4441 has a modular structure GH5_2-CBM17-CBM28-SLH-SLH-SLH, similar to C. cellulolyticum Ccel_0428, and shows 71% AA sequence identity to it. Two extracellular genes in family GH8, with dockerin I domains, were identified in C. termitidis (Table 3). As with members of the GH5 family, members of the GH8 family have a variety of functions and can cleave β-1,4 linkages of cellulose, xylan, chitosans, and lichenans. Cter_0523 shows high AA sequence identity (74%) with the cloned and characterized endoglucanase C, Ccel_0730, from C. cellulolyticum [51], and 54% AA sequence identity with the characterized C. thermocellum Cthe_0269, also an endoglucanase [52]. C. termitidis encodes 11 GH3 genes, the highest number among the Clostridium species examined (Table S1 in File S1), of which only two are extracellular (Table 3) and are annotated as β-glucosidases in the IMG database. This suggests that C. termitidis is able to hydrolyze complex sugars to glucose monomers extracellularly before assimilation. This is supported by experiments conducted by us and by the work of Ramachandran et al. [19], which indicate minimal residual glucose levels in culture supernatants during growth on α-cellulose and cellobiose, suggesting complete assimilation of glucose as a hydrolysis product. It is interesting to note that C. stercorarium is the only other Clostridium (Table S6 in File S1) that has an extracellular β-glucosidase, putatively suggesting cellulose-to-glucose hydrolytic ability and assimilation. Hemicellulose hydrolysis. Hemicelluloses are polysaccharides in plant cell walls that predominantly consist of xylans and mannans, with β-1,4-linked backbones of xylose and mannose monomers, respectively [53]. Endoxylanases are commonly found in the GH10 and GH11 families. They cleave the xylan backbone into smaller oligosaccharides, which are further degraded to xylose monomers by the action of β-xylosidases found in the GH43 family. Our analysis shows that C. termitidis is equipped with the enzymes required for complete xylan hydrolysis. Of the seven genes functionally annotated as extracellular xylanases, four are attached to the cell surface by either a putative dockerin domain or an SLH domain, while three are secreted freely into the environment (Table 3). In addition to the dockerin I domain, Cter_1803 also has a CBM6 domain. This module is known for its xylan-binding ability, guiding the catalytic component to the appropriate site on the substrate [54]. The enzyme shows 80% AA sequence identity with Ccel_1240, a xylanase of C. cellulolyticum. Cter_2434, a 1496 AA multi-component (CBM22-CBM22-CBM22-GH10-CBM9-SLH) GH10, shares 810 identical AA with an endo-1,4-β-xylanase of Paenibacillus sp. JDR-2 of similar modular structure, and 604 identical AA with Ccel_2320 of C. cellulolyticum. The CBM22 and CBM9 modules are both considered to have xylan-binding capabilities [23]. To further degrade xylo-oligomers into simple sugars, five GH43 genes coding for secreted xylosidases were identified in C.
termitidis. Two of these (Cter_0945 and Cter_4060) contain three C-terminal SLH domains for cell attachment. Cter_4060 has a multi-domain structure with the domains GH43-CBM35-CBM35-CBM35-CBM35-CBM13-SLH-SLH-SLH. CBM35 and CBM13 are known to bind primarily to xylans and mannans [55][56][57]. BLAST analysis shows 61% AA identity to the multi-domain C. cellulolyticum β-xylosidase Ccel_3240, which has a similar modular structure, suggesting that Cter_4060 may have putative xylan-hydrolyzing properties. Arabinofuranosidases hydrolyze arabinose side chains during xylan degradation and are members of the GH51 family. C. termitidis encodes two extracellular putative arabinofuranosidases (Table 3). Interestingly, only C. cellulovorans appears to have extracellular homologs among the Clostridium species considered (Table S6 in File S1). Four putatively secreted members of the GH30 family, annotated as O-glycosyl hydrolases in the IMG database, were identified in C. termitidis (Table 3). The GH30 family has members with activities ranging from glucosylceramidase, β-1,6-glucanase, β-xylosidase, β-fucosidase, and β-glucosidase to endo-β-1,6-galactanase [23]. Cter_0267 and Cter_2867 are putatively cellulosomal due to the presence of a C-terminal dockerin I domain. BLAST analysis shows 80% homology with C. cellulolyticum Ccel_0649, which is putatively involved in xylan degradation with high activity toward feruloylated arabinoxylans. Members of GH30 have not been functionally characterized in Clostridia. The 3192 AA sequence of Cter_2817, a multi-domain GH5 protein, has the modular structure CBM66-CBM66-CBM66-GH5_dist-GH43-CBM35-CBM66-GH43-SLH-SLH-SLH and is putatively bound to the cell via the SLH domains. This enzyme seems to be unique to C. termitidis, because BLAST searches did not return hits to other bacteria in the database. In addition to the GH43 catalytic domain, there is an additional GH5 domain of subfamily 43 (GH5_43). GH5_43 has not yet been functionally assigned [49]. However, Cter_2817 has been annotated as a β-xylosidase in the IMG database, perhaps due to the presence of the putative GH43 domain. According to the CAZy database, GH43 has members with xylosidase, arabinofuranosidase, arabinanase, xylanase, and galactosidase activities [23]. The presence of multiple CBM66 domains and a CBM35 domain indicates its ability to target both fructans [58] and xylans [54], respectively. The endoxylanase Cter_2829 belongs to the GH8 family and has a dockerin I domain; BLAST analysis shows 77% AA sequence identity with Ccel_1298. Hydrolysis of mannans is mainly carried out by β-mannanases of the GH26 family. Our results show that the genome of C. termitidis contains a single GH26 gene annotated as a β-mannanase (Cter_4544), which may putatively be involved in the breakdown of the mannan backbone. Even though Cter_4544 has a dockerin domain, PSORTb 3.0 analysis was unable to predict its location. Chitin degradation. Proteins belonging to family GH18 are candidate chitinases, the enzymes responsible for chitin degradation. A total of five GH18 genes were identified in C. termitidis (Cter_3529, Cter_3349, Cter_1364, Cter_2813, and Cter_3848). Four of these five proteins (Cter_1364, Cter_2813, Cter_3848, and Cter_3349) were predicted to be localized extracellularly. They do not bear dockerin domains and therefore would not be incorporated into the cellulosome. Of these, Cter_3848 may be bound to the cell wall via its C-terminal SLH domains.
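Modular architectures such as those quoted above (e.g., GH18-CBM12-GH18 or GH43-CBM35-CBM35-CBM35-CBM35-CBM13-SLH-SLH-SLH) lend themselves to simple programmatic summaries. The helper below and its classification rules are illustrative assumptions, not part of the study's methods.

```python
from collections import Counter

def summarize_architecture(arch):
    """Summarize a hyphen-separated domain string such as 'GH18-CBM12-GH18'."""
    domains = arch.split("-")
    kinds = Counter(d.split("_")[0].rstrip("0123456789") for d in domains)
    return {
        "catalytic_modules": kinds["GH"] + kinds["PL"] + kinds["CE"],
        "binding_modules": kinds["CBM"],
        "cell_wall_anchored": kinds["SLH"] > 0,       # SLH repeats -> cell wall
        "putatively_cellulosomal": kinds["DOC"] > 0,  # dockerin, if labeled DOC
    }

print(summarize_architecture("GH43-CBM35-CBM35-CBM35-CBM35-CBM13-SLH-SLH-SLH"))
```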
Cter_1364 (GH18-CBM12-GH18) and Cter_2813 (GH18-GH18-GH18-CBM12-CBM12) have multiple GH18 catalytic sites and CBM12 domains, which are known to bind chitin polymers. Extracellular members of GH18 were identified in C. thermocellum, C. phytofermentans, and C. cellulolyticum; however, none of these have a multi-domain structure. Carbohydrate esterases (CEs) catalyze the deacylation of saccharides. The C. termitidis genome encodes a total of 15 CEs belonging to five different families, second only to C. cellulovorans, which has 21 CEs (Table 2; Table S4 in File S1). The C. termitidis CE15 and one CE4 (Cter_5018) gene product both carry a dockerin I domain, suggesting their putative association with the cellulosome. CE4 enzymes, of which C. termitidis has nine members, are annotated as either acetyl xylan esterases or chitin deacetylases, indicating putative activity against both xylans and chitins. All members of the CE7 family are annotated in the IMG database as acetyl xylan esterases. The C. termitidis family CE9 gene product, annotated as N-acetylglucosamine-6-phosphate deacetylase (EC 3.5.1.25), is putatively important for the metabolism of chitin. This suggests an elaborate chitin degradation ability in C. termitidis, which may have evolved in response to cannibalism in termites at times of food shortage [59,60]. Starch degradation. CAZyme analysis of the C. termitidis genome shows the presence of a single gene (Cter_3247) belonging to the GH15 family, annotated as a glucoamylase. Glucoamylases catalyze the release of glucose from the non-reducing ends of starch. C. thermocellum is the only Clostridium species among those examined that has an extracellular homolog (Table S6 in File S1). Also found in the C. termitidis genome are two genes belonging to the GH16 family that have been annotated as extracellular β-glucanases. BLAST analysis gives hits to C. cellulovorans endo-β-1,3-glucosidases. These enzymes are responsible for the breakdown of the β-1,3-glucans found as components of various fungi [61]. Pectin degradation. Polysaccharide lyases (PLs) are enzymes that mainly degrade uronic acid-containing polysaccharides such as glycosaminoglycans and pectin [23]. They are currently classified into 23 families in the CAZy database. C. termitidis encodes a total of four PL genes, all predicted to be localized extracellularly and belonging to two PL families: family PL8 with three genes and family PL11 with one gene (Table S3 and Table S6 in File S1). All members of C. termitidis PL8 have three C-terminal SLH domains, which are putatively responsible for attachment to the cell wall. None of the other Clostridium species have enzymes belonging to this family. PL8 enzymes are known to degrade hyaluronate, chondroitin, and xanthan, while PL11 members are known for their activity against pectin [23]. The C. termitidis PL11 has a dockerin domain and as such is putatively active as part of a cellulosomal complex. Except for C. cellulovorans, which has seven extracellular PLs, all other Clostridium species have fewer extracellular PLs than C. termitidis (Table S3 and Table S6 in File S1). Table 2. Comparative analysis of the number of putative CAZy sequences in selected Clostridial species. Table 3. Predicted extracellular glycoside hydrolases of C. termitidis based on PSORTb 3.0 analysis. Genome analysis shows putative cellulosomal components in C. termitidis Identification of dockerin-containing proteins.
CAZyme analyses and conserved domain searches provide evidence for the presence of putative dockerin I domains associated with CAZymes in C. termitidis and in the other Clostridium species examined. As previously mentioned, dockerin I domains attach catalytic subunits to the cohesin domains of the cellulosome; thus, proteins bearing dockerin domains are putatively considered cellulosome-associated. Variation in the number of GHs bearing dockerin domains was observed among the species analyzed (Table 4). C. thermocellum has the highest number of dockerin-domain-containing GHs (49), followed closely by C. cellulolyticum with 40, while C. termitidis had the lowest number, with 22. C. phytofermentans and C. stercorarium do not form cellulosomes and are known to hydrolyze cellulosic material through a non-complexed cellulase system [10,11]; consequently, no dockerin domains were identified in their CAZomes. Detection of putative cohesin domains (scaffoldin). Bioinformatic analysis of the C. termitidis genome revealed the presence of five putative cohesin I domain-containing proteins (Figure 4). The cohesins Cter_0001 (352 AA) and Cter_3731 (214 AA) are the first genes on their respective DNA scaffolds (1 and 53). BLAST analysis gives 100% coverage and approximately 60% AA sequence similarity to the cellulosome-anchoring protein sequences of both C. papyrosolvens (L323_03625; 1332 AA) and C. cellulolyticum (Ccel_0728; 1546 AA). Their location and partial sequences indicate genetic truncation, which may explain the short sequences observed. Cter_0520, Cter_0525, and Cter_0526 belong to the same DNA scaffold (scaffold 18) and are components of a putative cellulosome-related gene cluster that is discussed below. BLAST analysis shows approximately 70% AA similarity to sequences of cohesin domains of a similar cellulosome-related gene cluster of C. cellulolyticum (Ccel_0733 and Ccel_0728). The presence of multiple cohesin genes on three different scaffolds may indicate the presence of more than one cellulosome-integrating protein in C. termitidis; however, further studies to characterize these domains will be needed to understand the type and function of such proteins. Putative cellulosomal gene clusters. Similar to some anaerobic, mesophilic, cellulosome-forming Clostridia, such as C. cellulovorans, C. cellulolyticum, C. josui, and C. acetobutylicum [39][40][41][42][43], an approximately 20 kbp putative cellulosomal enzyme gene cluster was found in C. termitidis, harboring 13 cellulosomal genes (Figure 5). With a few differences in gene content between the different Clostridium species, the gene cluster usually starts with a cohesin-containing gene (primary scaffoldin) followed by a series of genes encoding various dockerin-bearing enzymes. This putatively suggests cellulosome formation in C. termitidis, and such similarity indicates that the cellulosomes of these mesophiles may have arisen from a common ancestor. In the case of C. thermocellum, the genes for cellulosomal enzymes are widely scattered on the chromosome and do not form clusters. However, its cellulosome scaffoldin gene, encoding the CipA protein, and the genes for proteins involved in cellulosome attachment to the cell surface are organized on the chromosome in a scaffoldin gene cluster [62]. This is not the case in the mesophilic Clostridia. Cellulosome-cell surface attachment.
Various mechanisms of cell-cellulosome attachment have been observed in different bacteria. In C. thermocellum, the anchoring of the scaffoldin-containing cellulosome to the bacterial cell wall occurs via the interaction of the dockerin II domain of the scaffoldin with one of three cohesin II proteins (SdbA, OlpB, and Orf2p), each of which carries a C-terminal surface layer homology (SLH) repeat that interacts with the S-layer [3,6]. There is, however, an additional C. thermocellum poly-cellulosome-forming scaffoldin (Cthe_0736) that may be involved in the formation of extracellular cell-free complexes, as no evidence exists for it being cell-associated [63]. In the case of both C. cellulolyticum and C. cellulovorans, cell surface cellulosome-anchoring proteins are yet to be identified [64]; however, an enzyme annotated as endoglucanase E (EngE) has been implicated in mediating cell surface attachment of the C. cellulovorans cellulosome [65]. The complex cellulosomes of Ruminococcus flavefaciens FD-1 are attached to the cell surface through a sortase-mediated transpeptidation reaction [66]. In the case of C. termitidis, we were unable to locate a cohesin II domain or any other protein mediating cellulosome attachment to the cell surface. This may suggest the production of putative cell-free cellulosomes or a novel mechanism of putative cellulosome attachment that remains to be explored. Conclusion Clostridium termitidis has the largest genome among the Clostridium species considered in this study. It also has the highest number of CAZymes, which may potentially be advantageous for lignocellulosic biomass hydrolysis. In addition, C. termitidis harbors the most extracellularly secreted CAZymes, some of which are unique and have no homologs in other bacteria. These extracellular CAZymes have the potential capacity to degrade a wide variety of complex and simple carbohydrates, such as cellulose, hemicellulose, starch, chitin, fructans, pectin, glucose, cellobiose, and xylose, making C. termitidis an attractive microorganism for biofuel production through CBP. We were also able to detect several putative genes encoding products whose AA sequences are consistent with key cellulosomal components of other cellulosome-producing cellulolytic bacteria, an indication of putative cellulosome assembly. However, we were unable to detect any gene or domain with the capacity to act as a cellulosome-anchoring protein, suggesting either a novel mechanism of putative cellulosome adherence or the production of putative cell-free cellulosomes. Nevertheless, this study has provided valuable insights into the mechanism of polysaccharide hydrolysis in C. termitidis. Furthermore, studying the relationship between genome content and gene product expression will provide a systems-level understanding of the operative mechanisms of hydrolysis under specific substrate conditions. Supporting Information File S1. Table S1) Comparative analysis of the number of glycoside hydrolase (GH) families in selected Clostridium species. Numbers below each family class indicate the number of members belonging to that family for the given Clostridium species. The number of family members is colored with respect to the average number of members found in the 6 genomes.
Color code: black = deviation between −2 and 2 standard deviations (SD) with respect to the average; light orange = deviation > 2 SD above the mean; light green = deviation < −2 SD below the mean; dark orange = > 3 SD above the mean; light blue = < −3 SD below the mean; red = > 4 SD above the mean; blue = < −4 SD below the mean; dark red = > 5 SD above the mean; dark blue = < −5 SD below the mean. Table S2) Comparative analysis of the number of glycosyl transferase (GT) families in selected Clostridium species. Numbers below each family class indicate the number of members belonging to that family for the given Clostridium species. The number of family members is colored with respect to the average number of members found in the 6 genomes; color code as for Table S1. Table S3) Comparative analysis of the number of polysaccharide lyase (PL) families in selected Clostridium species. Numbers below each family class indicate the number of members belonging to that family for the given Clostridium species. The number of family members is colored with respect to the average number of members found in the 6 genomes; color code as for Table S1. Figure 5. 20 kilobase (kb) putative cellulosome-related gene cluster found in the C. termitidis genome. Gene clusters with similar gene arrangements have been identified in other mesophilic Clostridia as indicated. The C. termitidis gene cluster, with DNA coordinates 84554 to 105381, includes Cter_0526 (1), Cter_0525 (2), Cter_0524 (3), Cter_0523 (4), Cter_0522 (5), Cter_0521 (6), Cter_0520 (7), Cter_0519 (8), Cter_0518 (9), Cter_0517 (10), Cter_0516 (11), Cter_0515 (12), and Cter_0514 (13). The cellulosomal gene clusters identified in C. cellulovorans, C. cellulolyticum, C. josui, and C. acetobutylicum have approximate sizes of 21.5 kb, 26 kb, 17.3 kb, and 18 kb, respectively. Cip (cellulosome-integrating protein); CP DT1 (cellulosome protein with dockerin type 1); CD (cohesin domain). doi:10.1371/journal.pone.0104260.g005 Table S4) Comparative analysis of the number of carbohydrate esterase (CE) families in selected Clostridium species. Numbers below each family class indicate the number of members belonging to that family for the given Clostridium species. The number of family members is colored with respect to the average number of members found in the 6 genomes; color code as for Table S1. Table S5) Comparative analysis of the number of carbohydrate binding module (CBM) families in selected Clostridium spp.
Numbers below each family class indicate the number of members belonging to that family for the given Clostridium species. The number of family members is colored with respect to the average number of members found in the 6 genomes; color code as for Table S1. Table S6) Comparative analysis of predicted extracellular CAZymes, designated in the CAZy database, involved in lignocellulosic biomass hydrolysis within Clostridium species. Numbers below each family class indicate the number of members belonging to that family for the given Clostridium species. (XLSX) Author Contributions
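The color-coding rule described for Tables S1-S6 above is a simple banding of each count by its deviation, in standard deviations, from the cross-genome mean; a minimal sketch of the assignment follows (the function and the example data are illustrative assumptions).

```python
import statistics

def color_code(value, counts_across_genomes):
    """Assign the supplementary-table color band for one family count."""
    mean = statistics.mean(counts_across_genomes)
    sd = statistics.stdev(counts_across_genomes)
    z = (value - mean) / sd if sd else 0.0
    bands = [(5, "dark red", "dark blue"), (4, "red", "blue"),
             (3, "dark orange", "light blue"), (2, "light orange", "light green")]
    for cutoff, above, below in bands:
        if z > cutoff:
            return above
        if z < -cutoff:
            return below
    return "black"  # within +/- 2 SD of the mean

# e.g., hypothetical GH3 counts across the six genomes, C. termitidis first:
print(color_code(11, [11, 4, 3, 5, 2, 4]))
```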
Thermophysical abuse couplings in batteries: From electrodes to cells Thermophysical couplings in batteries must be understood to ensure that batteries remain safe from potential immolation during operation. This article examines the ways in which thermophysical deformation of lithium-ion batteries can lead to explosions and other safety events, and then provides a brief review of characterization methods to assess the behavior and results of such deformations. Finally, a recent example of an event “in the wild” is discussed, and the mechanisms covered are applied to competing diagnoses of the failure. Introduction Between 2010 and 2020, secondary (rechargeable) battery production increased by a factor of 50 and costs decreased by a factor of six, with average battery cell prices near $USD100/kWh and battery pack prices below $USD140/kWh. 1 A contributing factor to the decreasing capital cost of batteries has been the increase in the energy density of batteries, 2 whose usable capacity has increased by a factor of three over the same period. This is due in large part to the safe implementation of anodes of graphite with increasing amounts of silicon compounds added (Gr-Si), and cathodes of layered lithium-metal oxides with increasing nickel and decreasing cobalt content stabilized with either manganese (NMC) or aluminum (NCA), as well as cathodes of lithium-iron phosphate (LFP). It may be surprising to the casual reader with a background in thermodynamics that lithium-ion batteries of increasing energy density have had relatively few publicly reported incidents of explosions and, more importantly, fewer deaths still. 3 It would be more surprising still to a lithium-ion expert who time-travelled to the present from the year 2000 to see such a low rate of safety incidents. [4][5][6][7][8][9][10] The US Occupational Safety and Health Administration (OSHA) 11 indicates that a safe battery should be a battery with (1) deterministic behavior of a cell in (2) a well-defined environment. If either of these two conditions is violated, then the battery is not safe and should not be used. Standards-setting and certifying organizations such as Underwriters Laboratories (UL) act as a nexus for application safety requirements and device physics and build recommendations to satisfy both. 12 In this article, the multimodal physical and thermal measurement improvements of the last decade are explored, and it is discussed how these methods have enabled, ex situ, in situ, operando, and on-line, the improvement of nameplate energy density at decreasing cost without an increase in safety incidents. To do so, we explore a few of the known and understood critical failure mechanisms of lithium-ion batteries, and the methods that have been developed to check and ensure that these conditions do not exist in a cell after manufacture and in use. Mechanical damage to a battery that does not cause direct rupture of the cell packaging nonetheless has safety consequences. Per the OSHA definition of safety, this mechanical damage can alter the deterministic behavior of the battery and render it unsafe. Liu et al. 18 provide a comprehensive review of mechanical damage loops in batteries; what follows is a brief overview mapping cause-effect loops. It is by no means exhaustive, but is intended to give the reader a framework for assessing why mechanical and thermal damage are mutually reinforcing in batteries and can lead to thermal runaway events.
A closed-form electrochemical energy cell ("battery cell") consists of a reducing agent (the "anode"), an oxidizing agent (the "cathode"), a medium that allows the transport of ions to the surface while blocking electrons (the "electrolyte"), and a medium for transporting electrons to and from a surface while keeping the mass constrained and contained within a cell (the "current collectors," the "external wiring bus," and the "load"). The ionic current is translated to electric current at the electrode interfaces via electrochemical reaction, and complementary reduction and oxidation reactions are required at the anode and cathode to balance the mass and charge transfer within the system. The only regions where, by design, there are simultaneous ionic and electronic currents are the porous electrodes, which have physically overlapping regions of reductant and electrolyte, and of oxidant and electrolyte (Figure 1a). This pattern may be repeated in reversing sequences many times, as in Figure 1b. If the battery geometry is deformed in such a way that any of the described operations are hindered and/or altered, the application of current to the battery (via charge or discharge) can lead to an unexpected rate of heat generation in the cell, which can then trigger thermal runaway. Below are a few examples of how deformations are instigated and triggered both during standard operation of batteries and under "abusive" conditions. An external short circuit (ESC) drives the potential difference of the current collectors to near zero. Most lithium-ion batteries operate between 2.7 and 4.2 V, and as a result, at any state of charge, the cell feels a driving force to equilibrate further, such that the reductant and the oxidant reach the same chemical potential. The consequences are instant heat generation within the cell and, given enough time (enough being seconds to minutes), gas generation as a result of both heating of the liquid electrolyte and electrochemical oxidation and reduction of the electrolyte. 19,20 Additionally, since the positive electrode is asking for current from a negative electrode that is depleted or nearly depleted of lithium, depending on the electrode/current collector design, copper can electro-dissolve to copper ion from the negative current collector. 21 Under a severe enough ESC, the heat generated can push the battery into an oxidant-driven thermal runaway (i.e., thermal decomposition of the metal oxide positive electrode) if the heat generated raises the temperature sufficiently before enough lithium ion has returned to the positive electrode upon discharge. A lithiated (i.e., discharged) metal-oxide cathode is far less prone to thermal runaway, as the lithiated metal oxide has a thermal reduction temperature > 600°C. 22 It is critical to note that heat is also generated in the conductor external to the cell that creates the ESC. Regardless of the volatility and stability of the cell components, if the ESC is physically allowed to continue, heat is generated at a rate Q = I²R, where I is the short-circuit current and R is the resistance of the conducting path. If that heat cannot be sufficiently dissipated, the external cell temperature will rise according to the sensible and latent heats of the environment. Many, if not most, battery fires are triggered by a failure to manage this heat generation.
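The danger of an unmanaged ESC can be illustrated with a lumped-capacitance energy balance on the cell. The sketch below is a toy model, and every parameter value (cell mass, heat capacity, external resistance, convective loss coefficient) is invented for illustration rather than taken from this article.

```python
import numpy as np

def esc_temperature_rise(i_amps, r_ext_ohm, mass_kg=0.05, cp_j_per_kg_k=900.0,
                         h_a_w_per_k=0.05, t_amb_c=25.0, dt_s=0.1, t_end_s=600.0):
    """Lumped model: m*cp*dT/dt = I^2*R - hA*(T - T_amb)."""
    T = t_amb_c
    for _ in np.arange(0.0, t_end_s, dt_s):
        q_gen = i_amps**2 * r_ext_ohm          # Joule heating, W
        q_loss = h_a_w_per_k * (T - t_amb_c)   # convective loss, W
        T += (q_gen - q_loss) / (mass_kg * cp_j_per_kg_k) * dt_s
    return T

# A hypothetical 100 A short through a 5 milliohm external path,
# on a cell of roughly 18650 size:
print(f"cell temperature after 10 min ~ {esc_temperature_rise(100.0, 0.005):.0f} C")
```

Even this crude model lands the cell hundreds of degrees above ambient within minutes, which is the point of the paragraph above: the external conducting path, not the cell chemistry alone, can set the thermal fate.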
If the cell survives the ESC, and the ESC was neither detected nor heeded, then once the battery goes into charge mode, copper will likely deposit on the negative electrode as a mossy or dendritic film instead of redepositing uniformly on the negative current collector. Given the approximately 15 µm separating the negative and the positive electrode, this copper metal filament can create a non-penetrating internal short circuit (NPISC) (often referred to simply as an internal short circuit or ISC; the difference will be elaborated shortly in this article). [23][24][25][26] Unlike an ESC, an NPISC cannot be (readily) eliminated by removing the short. Also unlike an ESC, which is assumed to have almost zero resistance, a non-penetrating internal short circuit can have significant resistance (e.g., it will not always drive the cell potential to zero). Depending on the nature of the short, if the shorting metal is thin enough and/or of small enough cross section where it touches both electrodes, it may disconnect itself through chemical oxidation or mechanical shifts. These events are referred to as "soft shorts" and may appear in a voltage signal as momentary dips or noise. 27,28 If the short-circuit metal is sufficiently large and chemically robust, it will permanently bridge the positive and negative electrodes and will continually discharge the battery internally. The combination of volatile organic electrolytes, thermally unstable metal oxides, and sub-20-µm separator gaps exacerbates potential safety triggers. The chemical nature of the metal filament in an NPISC has a significant impact on the potential danger it creates. Beyond the copper case previously discussed, a well-monitored trigger is iron filaments from manufacturing, as well as ferric or ferrous ions left unwashed on cathodes before the initial charge. 23,29 Ions of copper and iron will be drawn to the negative electrode upon charge, but once plated, they are galvanically protected by the active lithium ion in the system until a zero-volt event, which is to be avoided for the reasons detailed above. Given the small separator gap, a simple "pinch short" can act as the NPISC. Pinch shorts can be the result of poor cell manufacturing processes, damage during cell-to-battery packing, or unexpected impact (e.g., a car crash, dropping a cell phone). 25,30 Finally, a penetrating internal short circuit (PISC) represents a foreign body creating an electrical short circuit within a cell. The canonical example is the nail penetration test, in which a nail is driven through a battery to emulate its behavior during an NPISC. While the nail penetration test is a facile way to test cell response, it is sufficiently different from NPISCs that it should not be used as the sole estimate of a cell's NPISC response. 25,27,31 For example, the cross section of a nail is far larger than that of an internal filament, and while the electrical conductivity of the nail is high, its thermal conductivity is high as well, and the nail is connected to the outside world. Charging is also a heat-generating event, and the localized heat of a poorly monitored or poorly designed charging system can lead to overheating at the positive electrode during charge. Since the positive electrode is lithium-depleted during charge, it undergoes thermal reduction at a lower temperature.
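The observation that an NPISC need not drive the cell voltage to zero follows from a simple resistive divider between the cell's internal resistance and the short path. The back-of-envelope sketch below uses invented values throughout.

```python
def npisc_operating_point(ocv_v=3.7, r_internal_ohm=0.03, r_short_ohm=1.0):
    """Terminal voltage and heat split for a resistive internal short."""
    i = ocv_v / (r_internal_ohm + r_short_ohm)      # short-loop current, A
    v_terminal = ocv_v * r_short_ohm / (r_internal_ohm + r_short_ohm)
    p_short = i**2 * r_short_ohm                    # heat in the filament, W
    return v_terminal, i, p_short

for r_s in (10.0, 1.0, 0.05):                       # "soft" to "hard" shorts
    v, i, p = npisc_operating_point(r_short_ohm=r_s)
    print(f"R_short = {r_s:>5} ohm: V = {v:.2f} V, I = {i:.1f} A, filament heat = {p:.1f} W")
```

A high-resistance ("soft") short barely moves the terminal voltage, which is why such shorts can hide as momentary dips or noise, while a low-resistance short concentrates on the order of 100 W into a filament of negligible mass.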
Thus, the coupling between heat generation during charge and thermal runaway is positive, since lithium is leaving rather than entering the electrode. Upon overcharge, electrolyte gassing due to electrochemically driven redox 19,32,33 must be considered as well. Fast charge of batteries has been shown to deposit lithium metal upon the graphite electrodes (instead of intercalating lithium into the graphite electrodes). [34][35][36] The safety consequences of this lithium metal within the battery are still being studied; however, it is understood that this lithium metal is not designed to be within the battery and is therefore treated as "unsafe." Finally, the last damage mechanism discussed here will be, broadly, disconnection, where components within the battery are physically isolated rather than connected. Particles can become electrically isolated through physical damage, and isolated surfaces can be created by gassing due to the previously mentioned overdischarge and overcharge reactions. The danger of disconnection is complementary to the danger of the "over-connection" of the various short circuits. In batteries, charge rates are normalized internally to current densities, but disconnection events are inherently heterogeneous; as a result, the current density becomes nonuniform. In turn, a 1 C charge rate globally may mean a 10 C charge rate locally (e.g., if 90% of an electrode's active area is disconnected, the remaining 10% must carry ten times the nominal current density), and the safety concerns for overcharge apply. 37 A brief, incomplete survey of thermophysical analyses relevant to battery safety Since thermophysical couplings lead to battery safety events, and the couplings are difficult to fully normalize across different form factors of cells, the field has developed a variety of methods to measure the thermal and physical behavior of commercially relevant cells. Thermal methods Calorimetric methods may be the oldest form of physical cell analysis, but they are still critical for measuring battery behaviors. Differential scanning calorimetry (DSC) is a common method for identifying the point of thermal runaway for many lithium-ion cathodes as well as for understanding the phase behavior of individual components. 6,38-40 DSC measures the difference in the amount of heat added or removed to change the temperature of an experimental sample in comparison to a reference of well-defined heat capacity. DSC is particularly interesting for measuring the impact of surface and structural enhancements intended to prevent unwanted phase changes and exothermic events, since the components in the DSC coupon are purely chemically driven. DSC chambers, however, are typically small and not intended for operando studies of electrochemical systems. Accelerating rate calorimetry (ARC) methods 38,41-45 are excellent "big siblings" to DSC in that they involve heat addition similar to DSC, but do so in a fully adiabatic setting in which the sample is "allowed" to self-heat while the extent of that self-heating is measured as a function of time and temperature. Whereas DSC is useful for understanding predetermined reactions in simulated cell environments, an ARC experiment is useful for emulating the "full cocktail" of physical-thermal-chemical couplings that may occur in a battery as it drives itself to thermal runaway or other degradation modes. Thermogravimetric analysis (TGA) [46][47][48] is typically used to study a material or a simulated cell environment (e.g., cathode-in-electrolyte) by measuring the mass loss of a sample on a microbalance as it is heated.
Since positive electrodes typically lose oxygen (exothermically) as they heat, and electrolytes vaporize or react to form gases, TGA is an excellent method to quantify the extent of reaction, which can then be correlated to the expected pressure increases within a cell (a rough ideal-gas estimate of this correlation is sketched below). When combined with spectroscopic methods, the gaseous compounds can be further classified and/or quantified depending on the methods (discussed in more detail next). While typically not used to measure critical safety events directly, isothermal micro- or nanocalorimetry (ITC) can be used on full cells to measure the onset of reaction over longer time scales (minutes to hours) as a function of the fixed temperature and electrochemical operation on commercially relevant cells: [49][50][51][52][53][54][55][56][57][58] specifically, electrochemical and temperature conditions are set by the user, and the resulting heat flow into or out of the cell is measured over time. These thermodynamic data can then be analyzed to assess the nature of the reactions occurring. This method is well suited to detecting events that form prior to critical damage in cells. Chemical methods Chemical analysis methods have evolved over the last two decades from ex situ tools to near real-time operando tools capable of measuring complex couplings in full cells. TGA methods combined with Fourier transform infrared (FTIR) spectroscopy have been used to directly correlate thermal events to evolved gas quantity and composition in a variety of systems. 46,59,60 Differential electrochemical mass spectroscopy (DEMS) is a cousin to TGA-FTIR, but rather than driving the system thermally, the system is driven electrochemically. [61][62][63][64][65][66][67][68] As a result, it can directly quantify and classify redox-driven off-gassing in addition to thermally driven off-gassing. Rowden and Garcia-Areaz 69 provide a thorough review of gas evolution analysis methods. Finally, x-ray fluorescence (XRF), 70,71 atomic absorption spectroscopy (AAS), [72][73][74] inductively coupled plasma (ICP), 75,76 and scanning electron microscopy with energy-dispersive spectroscopic methods (SEM-EDS) are industrial and academic workhorses that classify impurities in samples, both electrode powder samples and finished electrodes. Water content in lithium-ion batteries must be kept to an absolute minimum for both operational and safety reasons; Karl Fischer titration is the standard tool for assuring water content is sufficiently low. Structural methods Just as crystal structure analysis and the accompanying diffraction methods are bedrock methods for understanding equilibrium and desired performance aspects of battery materials, these methods are important for safety considerations, particularly when combined with thermal characterization in operando experiments. For example, time-resolved x-ray diffraction (TR-XRD) 10 in conditions relevant to thermal runaway of nickel-rich cathodes has been performed to understand the coupling of structural change and oxygen gas generation (Figure 2). Recently, localized TR-XRD methods have been employed to spatially map structural changes in cathodes 77 as well as unwanted lithium deposition within lithium-ion batteries. The small x-ray cross section of lithium metal makes x-ray analysis difficult; these recent methods are testaments to the ability of scientists to extract signal from noise.
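As referenced in the TGA discussion above, here is a minimal ideal-gas sketch of how a measured mass loss maps to a cell pressure rise. The mass loss, headspace volume, and temperature are hypothetical values chosen for illustration, not data from this article.

```python
# Ideal-gas estimate of pressure rise from TGA-measured gas evolution.
R = 8.314           # gas constant, J/(mol*K)
T = 423.0           # assumed cell temperature, K (150 degrees C)
V_headspace = 5e-6  # assumed free gas volume in the cell, m^3 (5 mL)
mass_loss_g = 0.1   # hypothetical O2 mass loss read off a TGA trace, g
M_O2 = 32.0         # molar mass of O2, g/mol

n = mass_loss_g / M_O2        # moles of gas evolved
dP = n * R * T / V_headspace  # pressure increase, Pa (ideal gas law)
print(f"pressure rise ~ {dP / 1e5:.0f} bar")  # ~22 bar for these assumptions
```

Even a fraction of a gram of evolved gas produces tens of bar in a sealed headspace, which is why gas generation so often precedes venting or rupture.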
Though such tools have lower availability, neutron diffraction and absorption methods have been used frequently to study the impact of lithium metal on lithium-ion batteries, as well as structural changes across full-format lithium-ion batteries. 19,[78][79][80][81] The interpretation of neutron data is more difficult than that of x-ray data, as fewer prior data are available, but the transmissivity of neutrons, combined with their lithium sensitivity, makes them a powerful tool for full cell analysis. Optical imaging methods While visible optical imaging of batteries yields little direct data, clever in situ mock cells allow for component-level understanding. For example, as graphite lithiates, it changes in color from black to gold, with specific hues indicating stages, 82,83 and this can be exploited to examine strain and heterogeneous behavior in cells. 84,85 Again, heterogeneous behaviors in batteries are often the root cause of safety events, so methods such as this allow researchers to foresee and develop preventive measures against such needle-in-a-haystack problems. Video-rate methods have also keyed into central challenges in current distribution and metal detachment for metal (Li, Zn, etc.) anode systems, problems that can presage disconnection-related safety issues. [86][87][88] In the last decade, x-ray and neutron tomography in near real time have become available, with x-rays almost at the laboratory scale. Rates of greater than 10 Hz are available, and this allows researchers to connect lower-resolution maps of the previously mentioned phenomena to full cells, creating a "zoomable map" in four dimensions. [89][90][91] A particularly dramatic example by Finegan and coauthors is the examination of a battery undergoing thermal runaway in real time. 92 This example illustrated (Figure 3) not only where the runaway starts, but how the heat spreads and how cell design can accelerate or mitigate runaway events. 93 Magnetic methods Since metal objects of various sizes can wreak havoc on a lithium-ion battery, and the lithium-ion positive electrode is a collection of valence-changing materials, magnetic interrogation of cells is natural, and, similar to the other methods described, there are complementary component-level analyses as well as full cell methods available. Lithium-metal behaviors in full cells have been examined via nuclear magnetic resonance (NMR) methods in imaging mode. [94][95][96][97] Electrolyte behaviors, particularly decomposition modes, can be teased out of NMR data as well. [98][99][100][101][102] Recently, NMR imaging methods (nMRI) have been applied in clever ways to large-format batteries by Jerschow, 103,104 and maps can be completed quickly enough to image the extent of reaction as a function of space, and thereby extract current density (Figure 4). Mechanical methods The mechanical correlations of reversible electrochemical reactions have been studied for as long as batteries have been in field use, but the last decade has seen a distillation of practical know-how into scientific understanding. Stress-strain relationships for lithium and lithium-ion systems have been extensively studied with the classical tools of physical metallurgy, [105][106][107][108][109][110][111] and recently, acoustic analysis of such systems has revealed similar information in addition to structural mappings. 35,36,[112][113][114][115][116] For example, Chang et al. 35 showed the progression of lithium-metal deposition to dead lithium to gas formation in a multilayer stack.
In this case, the lithium metal showed itself to be a danger not because of rapid heating during a short circuit, but rather because the reactive lithium metal is not stable in contact with the electrolyte, which causes excess chemical gassing and leads to physical disconnection (Figure 5). Conclusion and case study in these methods The preceding descriptions of thermophysical safety challenges for lithium-ion batteries and of the methods used to assess these dangers are in some part responsible for the relatively low number of battery safety incidents that have occurred despite the significant increase in high-energy-density lithium-ion batteries over the last decade. But as mentioned, safety is a measure of what does not happen, and engineering for safety requires statistics and event analysis in equal proportion to hypotheses and fundamental understanding. The former can be difficult to access for academic battery researchers, as liability and confidentiality often accompany analyses of real-world safety events. A fire on April 19, 2019, at the APS McMicken Battery in Surprise, Arizona, is unusually openly documented for a large-scale battery-safety event. An entire battery module was destroyed, and several firefighters were injured when combating the fire. Two teams of battery-safety experts analyzed immolated cells with methods such as those previously mentioned to piece together what may have happened. An analysis by DNV-GL 117 suggested that a possible trigger for thermal runaway was a significant amount of plated lithium that formed a hard short in the lithium-ion cell. Figure 6 shows an x-ray tomograph of a failed cell, where missing material is considered evidence of ejecta from a fire triggered by internal thermal runaway. Figure 7 shows a cell that was not destroyed, displaying significant evidence of lithium-metal deposition in the lithium-ion battery. However, a second report on the incident, by Exponent, challenged the root cause analysis by DNV-GL, 118 questioning the ability of lithium metal to persist long enough in a short-circuit configuration before oxidizing, thus self-limiting instead of allowing enough heat to be generated to cause thermal runaway. The Exponent report hypothesized that an ESC from a misconfigured wiring bus started the heating event, which then triggered the cell rupture, which then led to a fire. In short, the second theory is that the heat came from outside the cell. As of the writing of this piece, the root cause has not been publicly agreed upon, but the reader is strongly encouraged to read both reports to see how the methods outlined here are applied in practice, and to develop an appreciation of how well lithium-ion batteries have been "hardened" such that events such as this are few and far between. While method development, specifically for online analysis, needs to improve so that events such as McMicken can be studied in real time, the tools for assessing thermophysical correlations within batteries are sufficiently developed that forensic engineers can begin to piece together and learn from battery safety events to ensure future systems do not suffer the same fate. Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
5,269.6
2021-05-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
The successes and pitfalls: Deep-learning effectiveness in a Chernobyl field camera trap application Abstract Camera traps have become in situ sensors for collecting information on animal abundance and occupancy estimates. When deployed over a large landscape, camera traps have become ideal for measuring the health of ecosystems, particularly in unstable habitats where it can be dangerous or even impossible to observe using conventional methods. However, manual processing of imagery is extremely time and labor intensive. Because of the associated expense, many studies have started to employ machine-learning tools, such as convolutional neural networks (CNNs). One drawback for the majority of networks is that a large number of images (millions) are necessary to devise an effective identification or classification model. This study examines specific factors pertinent to camera trap placement in the field that may influence the accuracy metrics of a deep-learning model that has been trained with a small set of images. False negatives and false positives may occur due to a variety of environmental factors that make it difficult for even a human observer to classify, including local weather patterns and daylight. We transfer-trained a CNN to detect 16 different object classes (14 animal species, humans, and fires) across 9576 images taken from camera traps placed in the Chernobyl Exclusion Zone. After analyzing wind speed, cloud cover, temperature, image contrast, and precipitation, there was not a significant correlation between CNN success and ambient conditions. However, a possible positive relationship between temperature and CNN success was noted. Furthermore, we found that the model was more successful when images were taken during the day as well as when precipitation was not present. This study suggests that while qualitative site-specific factors may confuse quantitative classification algorithms such as CNNs, training with a dynamic training set can account for ambient conditions so that they do not have a significant impact on CNN success. | INTRODUCTION Although camera traps (i.e., motion-activated cameras) have been used for decades as a means of observing animal species in a wide variety of habitats while causing minimal disturbance (O'Connell et al., 2011), it is only recently that they have become cost effective for widespread deployment in the field. Camera traps have been widely used to observe various aspects of populations such as animal density and abundance (O'Brien et al., 2003; Rowcliffe et al., 2008). Arguably, camera trap studies have become the most appropriate means of obtaining occupancy and abundance data in most environments, even in difficult terrain or habitats with restricted human access (Karanth, 1995; Schlichting et al., 2020). Furthermore, camera trap observations of important species can serve as a basis for estimating the overall ecological health of an ecosystem (Karanth, 1995).
However, in order to most effectively estimate animal distribution and abundance, numerous camera traps must be deployed with a high sampling effort (Di Bitetti et al., 2006). As a consequence of a large number of camera traps in a single study or across multiple studies, an expansive number of images need to be filtered and labeled. Conventionally, this requires a huge amount of human labor to classify species within each image, often through the use of citizen scientists (Swanson et al., 2015; Willi et al., 2019). Furthermore, outdoor meteorology has been shown to influence camera trap effectiveness; for example, detection distance shortens during rainy weather because moisture reduces the contrast between an animal and its background (Kays et al., 2010). Due to the considerable time and effort expended by researchers when classifying camera trap images, many studies have deployed machine learning to rapidly classify animal species and anthropogenic objects, including humans and vehicles (Duggan et al., 2021; Tabak et al., 2018). In fact, some studies have even found that machine-learning models can sometimes outperform the average citizen scientist with regard to accuracy (Norouzzadeh et al., 2018; Whytock et al., 2021). One of the most popular machine-learning architectures is the CNN, a deep-learning algorithm with a variety of branching methodologies in its construction, such as recurrent convolutional neural networks, to suit a variety of problems within the scope of ecology (O'Shea & Nash, 2015). Overall, CNNs are now widely used in camera trap studies for the purposes of image recognition and classification (Gomez Villa et al., 2017). Furthermore, CNNs have the potential to save researchers an extensive amount of time, and thus human labor can be redirected toward other scientific purposes (Norouzzadeh et al., 2018; Swanson et al., 2015). The majority of machine-learning architectures require an exhaustive number of images to train such animal detectors and classification algorithms, which represents a significant upfront cost in constructing such deep-learning models. Transfer learning is a machine-learning method that recycles a preconstructed neural network, typically trained on an extensive dataset, by adjusting only the final steps of, say, a CNN architecture. The utilization of transfer learning is rapidly enabling researchers to use a relatively small training image set while still correctly classifying animals (Duggan et al., 2021; Hu et al., 2015; Schneider et al., 2020; Shao et al., 2015). Through the use of transfer learning, CNN performance can be fine-tuned and improved for more specific classes or objects of interest (Yosinski et al., 2014). While a smaller number of images can be used to make a satisfactory classifier, such models should always be validated with out-of-sample images against a study's general dataset (Tabak et al., 2020).
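As a concrete picture of the freeze-and-retrain pattern described above, the following is a minimal TensorFlow/Keras sketch of transfer learning for image classification. Note that this study actually trained a Faster-RCNN detector; the backbone choice, layer sizes, and 16-class head here are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf

# Backbone pre-trained on a large generic dataset (ImageNet), with its
# original classification head removed.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pre-trained features; train only the new head

# New head sized for 16 classes (14 species, humans, fires).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(16, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10,
#           validation_data=(test_images, test_labels))
```

Because only the small head is trained, a few thousand labeled images can suffice where training the full network would require millions.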
While transfer learning and other methods, such as data augmentation, are showing promise in reducing the effort required to train models for animal occupancy studies, these models can be improved by adding a wider array of images. A wide variety of unique images is necessary to train these models due to external factors at camera sites, yet few studies mention their effects. High false positive rates have been reported due to dynamic images with background clutter or variations, such as shadows and swaying vegetation due to wind (Newey et al., 2015; Zhang et al., 2016). False positives due to the ambient environment can occur through thermal heterogeneity, in which surrounding vegetation triggers camera traps because it is at a different temperature than the background (Welbourne et al., 2016). With external variables, such as the degree of light, affecting aspects of image quality and object contours, CNN accuracy may in turn be affected, as the model has difficulty in distinguishing animal species from the background they inhabit. Furthermore, effectiveness at the camera trap level can be affected by the target species and camera quality. Here, we examine the effects of meteorological conditions and daylight levels on CNN accuracy and provide recommendations for building the training dataset used by a CNN, based on the evaluation of a uniquely trained model for the classification of terrestrial animals. | Study site The nuclear accident at Chernobyl, Ukraine (51°27.63′ N, 30°22.19′ E) occurred in 1986 and released around 1 × 10¹⁹ Bq of radioactivity that was transported over long distances across the northern hemisphere, especially throughout eastern Europe and Scandinavia (Evangeliou et al., 2016). The highest levels of contamination are found within the Chernobyl Exclusion Zone (CEZ) of Ukraine, which consists of 2600 square kilometers surrounding the plant. The local habitat consists of thick forests and fallow agricultural lands which have been closed to the public due to high levels of radiation. Thus, Chernobyl offers the unique opportunity to explore the ecological effects of radiation, as well as terrestrial wildlife without human interference (Mousseau, 2021; Mousseau & Møller, 2014). Due to restricted human access, dangerous levels of radiation, and now a war-stricken environment caused by Russian aggression, camera traps are ideal for observing animal species in Chernobyl safely (Schlichting et al., 2020).
| Camera trap sampling design We observed 14 animal species in camera trap images taken from 45 locations across the CEZ. We used relatively inexpensive consumer-grade Browning Recon Force FHD trail cameras for this study. Trail cameras were placed about 1.22 m above the ground and were generally oriented toward the north so as to avoid glare (see Figure 2.2 in Appendix S2). These cameras use passive infrared detectors (PIR) to sense motion, and a series of eight still images was recorded when an animal was detected. Because of their sensitivity, the traps are generally nondiscriminatory with respect to the species they capture, from moose that stand up to three meters tall to weasels that weigh only a few ounces. We employed an opportunistic sampling design; that is to say, we placed cameras in areas that were accessible and had a high likelihood of animals passing through, such as clearings. Our sampling unit consisted of individual camera stations. These cameras have a reported detection distance of 16.76 m and a trigger speed of 0.67 s, and they were programmed to capture images at a 10 MP resolution. Representative images are shown in Figure 1. The traps were placed throughout the CEZ in a variety of locations, including wooded areas and fields (see Figure 2). Camera traps were deployed within an approximately 1500 square kilometer area in the CEZ (for a density of 1 camera per 33.33 km²) at elevations ranging from 300 to 500 feet. This area has a humid continental climate with warm summers and snowy, cold winters. | Convolutional neural network development Following an application of Duggan et al. (2021) to our Chernobyl study site, we explored the consequences of utilizing fewer images and the factors necessary to consider when implementing CNN architectures on field camera trap imagery. A premade extension of the CNN, Faster-RCNN, was trained with this relatively small image set in order to reduce the running time of the model and to enhance computational efficiency (Ren et al., 2017; Schneider et al., 2018). Emphasis was placed on including images in the training dataset that showed animals in a wide variety of positions and motions so as to give the model multiple perspectives of a species. Using the graphical image annotation tool LabelImg (Tzutalin, 2015), we drew bounding boxes with a label around each species to establish ground truths, which consisted of the correct, or real, classification of each object (see Figure 3). These bounding boxes distinguish the object from background noise. If necessary, multiple bounding boxes per image were labeled to account for multiple animals. Overlapping bounding boxes were allowed in instances where animals were superimposed in the image. Furthermore, if only part of an animal was present in an image, such as a foot or a tail, it was also labeled with the corresponding species. The defining bounding box was transferred to a CSV format compatible with the training processes utilized in the TensorFlow training framework (Abadi et al., 2015). | Train/test sets Before classification, all images were resized to 1920 × 1080 pixels, a resolution typical of camera trap studies, so as to increase processing speed and improve the efficiency of limited computational resources.
Using the widely accepted 90/10 split (Fink et al., 2019), 90% of images were assigned to a training subset and 10% to a testing subset. Only images that displayed a unique perspective of each species were included in the training dataset so as to enhance model training. We took a stratified random sample across 45 cameras and held out from evaluation certain cameras that had significant vegetation triggering. In the conditional sampling of images, a range of meteorological conditions and light levels was included. In total, 4022 images acquired from 45 cameras placed across the CEZ were classified, with 3620 images in the training dataset and 402 images in the testing dataset. | Model validation A validation subset was created by classifying images from five cameras with high species diversity from throughout the study site. We applied the trained model to this validation subset to extrapolate the trained CNN to a different set of images. Therefore, the validation dataset is separate from the train and test sets. A total of 8135 images were used in this subset, including 2610 true negatives. Images from this set were also labeled with LabelImg to evaluate model performance metrics. We ran the trained model at a confidence threshold of 0.9 on these images to evaluate model performance. Validation metrics were compared to the train/test metrics to ensure that the model was not overfitted. | Case study Once the model was effectively trained and validated, it was applied to 9576 images from 12 randomly selected cameras within the Chernobyl Exclusion Zone not included in the training, test, or validation datasets (see Figure 4). These 12 cameras contained images not included in the 45-camera training/testing set to avoid the chance of labeled data interfering with the unsupervised images. In other words, in order to determine how the model would perform on a random selection of data, we chose camera traps separate from our training camera traps. The images from these camera traps within the case study are different from the images in the train, test, and validation sets. These cameras were all generally placed in clearings within wooded areas with little variation in the surrounding habitats, with the exception of an occasional dirt road (CH16B's site) or abandoned structures (sites from CH20 and CH21B). These 12 cameras contained images from November 2016 to March 2017. Images showing the site-specific factors of each of the camera traps can be found in Appendix S2. FIGURE 2: Map of 45 camera locations within the Chernobyl Exclusion Zone whose images were used to create the model. All cameras shown were part of the train/test set, with the validation subset shown in purple. Five cameras are labeled with an "A" to show that their location was also used in the case study but over a different time range. Google Earth Pro 7.3.6.9285 (2022).
Data from the nine most common classes (denoted by an asterisk in Table 1) were selected for analysis. A total of 114 unique animal classifications, defined as events, were contained within the 9576 images taken. Unique animal classifications consisted of photographs of an animal, or a group of animals, captured by the camera traps. Therefore, there were 114 events consisting of 182 unique animals. Furthermore, if consecutive images contained members of the same species taken less than 1 h apart, these were classified as a single event to avoid pseudo-replications (a minimal sketch of this grouping logic follows below). We assumed that camera events only contained one type of species: in more than 2 million images of Chernobyl, only two instances occurred in which multiple different species were present in an image at the same time. A classification of an animal had to be above a critical 90% threshold for the model-assigned accuracy predictions to be taken into account. Furthermore, events were resolved to one species by taking the species occurring most often as the correct classification. For example, if the CNN classified seven of the eight images as a red deer and the last as a roe deer, the event was resolved to consist of a red deer. The predicted counts made by the convolutional neural network were compared with the actual counts originally made by human observers. The actual counts were reassessed by a second observer to ensure accuracy. CNNs can be evaluated at varying levels of success: at the lowest level, a success consists of merely separating an animal (no matter the species or count) from vegetation. At a slightly more demanding level, a success can be defined as not only detecting an animal but detecting the correct species. Finally, at the most exacting level, a success can be defined as detecting both the right species and the number of animals present in a given image. For the purposes of this research, we chose to use the most stringent parameters for success: events in which the model correctly identified the species and number of animals present in each image were labeled as successes, and all other events were labeled as failures. Due to this definition, the "success rate" of 50.88% is different from the aforementioned accuracy rate of 81.31. Failures consisted of false positives, false negatives, and misclassifications. Our main objective was to attribute failures to a variety of variables, including but not limited to cloud cover, wind speed, temperature, precipitation, and amount of daylight. FIGURE 4: Map of 12 camera locations within the CEZ that were used as part of the case study. Five cameras are labeled with a "B" to show that their location was also used in the train/test set but over a different time range. Google Earth Pro 7.3.6.9285 (2022). TABLE 1: Image distribution for the test and train subsets for 14 different classes within 4022 images taken from 45 camera trap locations; columns give train and test image counts, and the nine most common classes are delineated by an asterisk. | Statistical analysis After generating the case study data, we ran a generalized linear mixed model (GLMM) in R version 4.2.2 with the package "lme4" to tease apart the effects of low light/day, precipitation/no precipitation, cloud cover, temperature, wind speed, and image contrast on CNN success. We chose to run a GLMM due to its ability to build on simple linear regression by capturing complex relationships with fixed covariates and factors, including random effects (Bono et al., 2021).
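A minimal pandas sketch of the event-grouping logic referenced above: detections on the same camera are merged into one event when less than 1 h apart, and each event is resolved to a single species by majority vote. The column names and example values are hypothetical.

```python
import pandas as pd

# Hypothetical per-image CNN detections from one camera.
df = pd.DataFrame({
    "camera": ["CH01"] * 8,
    "timestamp": pd.date_range("2017-01-05 14:00", periods=8, freq="10s"),
    "species": ["red deer"] * 7 + ["roe deer"],  # one disagreeing frame
}).sort_values(["camera", "timestamp"])

# Start a new event at the first image per camera, or whenever the gap
# to the previous image exceeds 1 h.
gap = df.groupby("camera")["timestamp"].diff()
df["event"] = (gap.isna() | (gap > pd.Timedelta(hours=1))).cumsum()

# Majority vote per event (mode; ties broken alphabetically here).
events = df.groupby("event")["species"].agg(lambda s: s.mode().iloc[0])
print(events)  # event 1 -> "red deer" (7 of 8 frames win the vote)
```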
Prior to constructing the GLMM, all continuous predictors were centered, standardized, and checked for high Pearson correlation to remove multicollinear variables. Given that contrast and night vs. day had a significant correlation (0.82), the predictor night vs. day was removed in order to keep the more descriptive predictor. The predictor temperature contained outliers (outside the IQR range), which resulted in only four events being removed from the training set of the GLMM. Low light/day and precipitation/no precipitation were dummy variables; we were unable to make these variables continuous due to the difficulty involved in quantifying the amount of light, and quantifying precipitation was also not reliable due to its transient nature. Historical weather forecasts may state that it rained a certain amount on a given day or hour, but this did not necessarily correspond to the actual precipitation present in the images due to the camera's specific location. For example, it may have rained 2 inches in a general location over a certain period of time, but it may not have been raining at the exact moment in time at the specific location where the picture was taken. Therefore, the presence or absence of precipitation in each image was noted visually (see Figure 5). Amount of daylight (low light vs. day) was determined based on whether or not the camera trap deployed the use of infrared LEDs, signifying low light levels, that is, night (see Figure 6). While the amount of light in an area can in general be determined based on time of year and location, we used this conservative approach to determining day vs. night because it avoids confounding variables such as cloud cover, camera orientation, and shadows. Cloud cover, temperature, wind speed, and contrast were continuous variables. These meteorological data were obtained either from World Weather Online or from the image itself, as in the case of temperature (World Weather Online, 2022; each camera recorded ambient temperature along with date and time of day). Overall image contrast was determined by running the case study images through the R package imagefluency. | Raw data To compile our train, test, and validation sets, we observed 14 animal species in a total of 12,157 camera trap images taken from 45 locations across the CEZ. To analyze the effects of ambient conditions on CNN success, we used an unlabeled image set for our case study consisting of 9576 new images (see Table 2). Maps of these camera trap locations are shown in Figures 2 and 4. Frequencies of animal detections, both CNN- and human-classified, are shown in Table 3. | Training metrics After training, the CNN's predicted values and ground truths were summarized in a confusion matrix (see Appendix S1). Based on the numbers of false positives, false negatives, true positives, and true negatives, the following metrics outlining model performance were calculated: accuracy = 81.31, precision = 97.93, recall = 78.56, and F-1 = 87.18 (the arithmetic linking these metrics is sketched below). These metrics were calculated at a confidence threshold of 0.9. | CNN case study Human and CNN predictions were generated for nine of the most prevalent animal species, which consisted mainly of relatively large mammals, present in 114 events across 9576 images taken from 12 locations across the CEZ (see Table 3). A total of 114 events were classified across 12 different cameras. Seven cameras were at least 50% successful (Figure 7). The total count predicted by the CNN exceeded the count identified by humans, indicating a significant number of false positives (Table 3).
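As flagged above, the reported training metrics follow from the standard confusion-matrix definitions; the short check below recovers the reported F-1 score from the reported precision and recall (the raw true/false positive and negative counts are given in Appendix S1).

```python
# precision = TP / (TP + FP); recall = TP / (TP + FN);
# F1 is the harmonic mean of precision and recall.
precision, recall = 97.93, 78.56  # values reported in the text (percent)

f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # 87.18, matching the reported F-1 value
```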
| GLMM model selection and evaluation A generalized linear mixed-effects model (GLMM) with a binary response was constructed with the popular "lme4" package. The fixed factors were binary categories consisting of no precipitation/precipitation and night/day (0/1). The fixed covariates were temperature (°C), wind speed (km/h), and contrast. This allows us to see how these predictors behave, and to what degree each predictor has a relationship with the response of successful CNN classification. The random effects are the cameras capturing images for the events, thus accounting for hierarchical clustering in the model construction. Finally, a Pearson correlation analysis was run to remove multicollinearity (see Table 4). Night/day was removed due to its high positive correlation with contrast and a small positive correlation with precipitation, in order to retain the predictors with finer measurements of image clarity and differentiation, including fog or precipitation effects. This study followed the statistical convention of comparisons across a nested model structure against a null (see Table 5). Predictors included in each model are outlined in Table 6. Upon analyzing wind speed, temperature, precipitation, contrast, and cloud cover via the GLMM, no strong significant linear relationship between these variables and CNN success was found, with the exception of temperature (see Table 5). Additionally, as shown by Model 4, precipitation demonstrates some possible improvement of the GLMM. However, if we only compare models 4 and 5 to our null, model 4 no longer significantly improves model performance. Thus, significance here is a by-product of other predictors not correlating with CNN performance (see Table 4.1 in Appendix S4); significance does not improve over a null model in direct ANOVA comparison. Upon reviewing the model diagnostics, the deviance residuals appear to follow a relationship with the fitted values of the model. Thus, conclusions from this model alone must be made carefully, as the relationship is non-linear and/or does not contain all predictors of CNN success. The residual vs. fitted values plot (see Figure 4.3 in Appendix S4) demonstrates that this is not a linear relationship, and more values would be necessary to determine whether there is a non-linear relationship at play. | Light level and precipitation After counting successes and failures in the presence or absence of precipitation, the CNN performed better when precipitation was not present; the success rate in clear weather was 21.99% higher than the success rate when it was raining or snowing (Table 7). Furthermore, after assessing successes and failures in low light/day conditions, the CNN was 13.11% more successful during the day than during low light conditions (Table 8). As noted previously, these findings correlate with the suggestion that there is some sort of relationship between precipitation and CNN success. | DISCUSSION Overall, these findings suggest that there is no significant linear relationship between ambient conditions and CNN success when the CNN is trained with a dynamic image set consisting of pictures taken in a wide variety of meteorological conditions. There could potentially be a more complex, non-linear relationship between predictors and CNN success, but more sampling power would be necessary to run this analysis.
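The binary-response GLMM above was fit in R with lme4 (a call of the general form glmer(success ~ precip + temp + wind + contrast + (1 | camera), family = binomial)). For readers working in Python, an analogous sketch using statsmodels' Bayesian mixed GLM is shown below; the file and column names are hypothetical, and this is an approximation of, not a reproduction of, the authors' analysis.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical event table: one row per event, with a binary CNN success
# flag, centered/standardized predictors, and the source camera.
data = pd.read_csv("events.csv")  # columns: success, precip, temp, wind, contrast, camera

# A random intercept per camera accounts for hierarchical clustering of
# events within cameras, mirroring the (1 | camera) term in lme4.
model = BinomialBayesMixedGLM.from_formula(
    "success ~ precip + temp + wind + contrast",
    {"camera": "0 + C(camera)"},
    data,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```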
In regard to temperature, while we did not find that it definitively had a causative effect on CNN success, higher temperatures were shown to have a relationship with the CNN success rate. This is likely because temperatures below freezing are associated with frozen precipitation, especially snowfall. Precipitation was also shown to have a negative association with CNN success (see Table 7), and low light levels were likewise associated with CNN failure (see Table 8). In the presence of frozen precipitation and in the absence of sufficient light, CNNs may be less successful at image classification due to low levels of contrast between the object and its background (Tao et al., 2017), which would create a relatively blurred object contour. However, no significant linear correlation between CNN success and precipitation/low light levels was found, owing to training the model with a large number of images taken when temperatures were below freezing, which improves the CNN for cold-weather image classification. We expected that image contrast would play a major role in CNN accuracy. High contrast levels are necessary for effective image classification because the CNN is expected to be better able to distinguish indistinct targets from cluttered backgrounds (Fan et al., 2018). However, we found that the overall contrast of our images did not have an effect on CNN success, which suggests that the issue is not necessarily contrast as a whole in a given image but rather could be attributed to the definition of object contours. The lack of a relationship between wind speed and CNN success is interesting, as we expected that high winds would negatively impact CNN success rates on account of moving vegetation triggering the camera (Glen et al., 2013; Zhang et al., 2016). Such false triggers can be filtered manually (Yu et al., 2013), although this adds to the manual labor required for the process. Future camera trap studies may be further enhanced by performing preliminary pilot studies to determine which camera model best meets the requirements of the study (Newey et al., 2015). The optimal camera model can depend on a variety of variables, such as target species, site accessibility, habitat, and climate (Rovero et al., 2013). Also, there is tremendous variation among camera makes and models in their resolution, field of view, and low light capabilities. Higher quality cameras may produce dramatically better images. Most of our target species are relatively large and thus produce a heat signature, similar to a deer, that is readily picked up by the PIR sensor (Wearn & Glover-Kapfer, 2017). However, it should be noted that if our target species were relatively small (<1 kg), we would recommend using a more sensitive camera model. In addition, camera model may be particularly important for night/low light conditions such as at dawn, dusk, or during daytime precipitation (Rovero et al., 2013), especially given that many of the animal targets are particularly active at night. To our knowledge, there have been few comparative studies of camera performance under varying environmental conditions. TABLE 6: The null and models 1-5 within the GLMM and their corresponding parameters.
There were several limitations to the present study, most importantly the use of a relatively small sample size of 114 unique events for CNN training.This constraint was dictated primarily by both human time and effort restrictions and the need for greater computational power.Future studies should analyze a larger number of events, in addition to analyzing background clutter or object contours as variables that may influence CNN success.Although manpower will always be in short supply, desktop computational power continues to rise exponentially thus providing the opportunity for enhanced CNN development.We were able to train an effective CNN that accounts for ambient conditions using limited computational power and a small image set. Classification models that use CNNs are becoming increasingly useful for the processing of camera trap imagery (Norouzzadeh et al., 2018).While CNNs can be time and cost effective, it is difficult to achieve the accuracy levels provided by manual (i.e., human) analyses (Favorskaya & Pakhirka, 2019).However, characterization of the ecological and environmental characteristics of the study site and the use of a dynamic image training set, can greatly enhance the utility of artificial intelligence (AI) tools like CNNs. Finally, on a more general note, camera traps have become an increasingly important tool for the monitoring of vertebrates, both because they are cost effective and relatively easy to deploy, but especially because many animals are extremely difficult to monitor at landscape scales using any other method.This tool is useful for monitoring shy and rare species that are in hard-to-reach locations either because of geography or because of military conflict, as is currently the case in Ukraine.Our studies of mammals in the Chernobyl Exclusion Zone have continued despite the ongoing conflict because the cameras are semi-autonomous and can be left in place for extended periods (several months) without human intervention.The development of automated image processing will greatly facilitate data processing, and the generation of accurate and precise datasets will continue to depend on enhancements of camera design and the incorporation of independent variations (e.g., meteorological conditions) into the training image set. Figure 1 . Figure 1.Camera trap images used to train and validate the model were taken between the months of November 2019 and May 2020. bounding box was transferred to a CSV format with the training processes utilized in the Tensorflow training framework (Abadi et al., 2015).F I G U R E 1 Sample photographs taken from camera traps in Chernobyl.Starting from top left and proceeding clockwise, species are the following: gray wolf (Canis lupus), roe deer (Capreolus capreolus), red fox (Vulpes vulpes), moose (Alces alces), Przewalski's horse (Equus ferus), and boar (Sus scrofa). F I G U R E 3 Bounding boxes with confidence predictions around a target object.Computer-generated bounding boxes are shown on the left side and human-labeled bounding boxes are shown on the right side.images in the train, test, and validation sets.These cameras were all generally placed in clearings within wooded areas with little variation in the surrounding habitats, with the exception of an occasional dirt road (CH16B's site) or abandoned structures (sites from CH20 and CH21B).These 12 cameras contained images from November 2016 to March 2017.Images showing the site-specific factors of each of the camera traps can be found in Appendix S2. 
Between temperature and CNN success rate, a visual positive relationship was shown (see Figure 4.2 in Appendix S4). Based on AIC values, we can assess how well the addition of covariates explains Kullback-Leibler (KL) divergence compared to the null. Model 5 has the lowest AIC value and is statistically significant (p < .05) over simpler models. FIGURE 5: Sample photograph of precipitation taken from a camera trap in Chernobyl. The species shown is moose (Alces alces). FIGURE 6: Sample photograph of a camera trap deploying infrared LED technology, signifying low light levels. The species shown is a red deer (Cervus elaphus). FIGURE 7: Number of CNN successes and failures per camera, which together constitute total events. Failures are shown in red, and successes are shown in blue. See Table 3.1 in Appendix S3 for how camera numbers 1-12 correspond to camera names. TABLE 2: Description of the train, test, validation, and case study datasets. Test: set of 402 images used to evaluate, or "test," the trained CNN's performance after each iteration of the model construction (see Figure 2). Validation: set of 8135 images used to extrapolate the trained architecture to a different set of images (see Figure 2). TABLE 4: Pearson correlation matrix of the model predictors. TABLE 5: Summary of the model parameters, AIC, BIC, log-likelihood, deviance, degrees of freedom, and p-value following the nested model comparison using analysis of variance (ANOVA); the GLMM outputs demonstrate that the model does not perform better than random chance (null), that is, ambient predictors do not linearly affect CNN success. TABLE 7: Number of successes and failures by the CNN in the presence/absence of precipitation. An event occurred in the presence of precipitation if visible snowfall or rainfall was noted; all other events occurred in the absence of precipitation. TABLE 8: Number of successes and failures by the CNN in low light vs. daytime conditions.
7,086.8
2023-09-01T00:00:00.000
[ "Computer Science", "Environmental Science" ]
Biomechanical Effects of Using a Passive Exoskeleton for the Upper Limb in Industrial Manufacturing Activities: A Pilot Study This study investigates the biomechanical impact of a passive Arm-Support Exoskeleton (ASE) on workers in wool textile processing. Eight workers, equipped with surface electrodes for electromyography (EMG) recording, performed three industrial tasks, with and without the exoskeleton. All tasks were performed in an upright stance involving repetitive upper limb actions and overhead work, each presenting different physical demands in terms of cycle duration, load handling, and percentage of cycle time with shoulder flexion over 80°. The use of the ASE consistently lowered muscle activity in the anterior and medial deltoid compared to the free condition (reductions in signal Root Mean Square (RMS) of 21.6% and 13.6%, respectively), while no difference was found for the Erector Spinae Longissimus (ESL) muscle. All workers reported complete satisfaction with the ASE's effectiveness as rated on the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST), and 62% of the subjects rated the usability score as very high (>80 on the System Usability Scale (SUS)). The reduction in shoulder flexor muscle activity during the performance of industrial tasks is not correlated with the level of ergonomic risk involved. This preliminary study affirms the potential adoption of the ASE as support for repetitive activities in wool textile processing, emphasizing its efficacy in reducing shoulder muscle activity. Positive worker acceptance and intention to use the ASE support its broader adoption as a preventive tool in the occupational sector. Introduction Exoskeletons (EXOs), designed to be worn on the body to support workers in physically demanding occupational settings, are suggested in certain work environments as a measure against fatigue and as a preventive intervention against risk factors associated with Work-related MusculoSkeletal Disorders (WMSDs). They are also considered a viable solution in cases where other preventive measures are impractical [1]. WMSDs, affecting 60% of European workers, exhibit prevalence rates of over 40% for upper limb-related disorders within the working population [2]. These disorders, caused by various factors, are particularly influenced or exacerbated by physical factors involving mechanical loads on musculoskeletal structures. Working conditions involving repetition, force, lifting, elevated upper limbs, or a combination of these factors for more than 1 h in a shift are associated with shoulder disorders [3,4], often categorized under the term 'subacromial conflict syndrome' [5]. Hand-arm elevation and shoulder load are correlated with a doubled risk of developing chronic shoulder disorders [6][7][8]. At the individual level, gender, age, and body mass modulate the risk of exposure by interacting with physical occupational risk factors, together determining the likelihood of developing irreversible and long-term shoulder disorders [7,8]. Prompt preventive measures in the workplace can have a lasting impact on shoulder health, and, in this direction, the adoption of EXOs in industrial settings has gained momentum. However, their implementation is constrained by regulatory gaps, insufficient long-term efficacy evidence, and concerns about potential adverse events [9,10]. On the other hand, EXO design has improved, with lightweight, slim, and user-friendly devices now available on the market. Among the popular EXOs, passive Arm-Support Exoskeletons (ASEs) have demonstrated
significant reductions (10-26%) in shoulder muscle activity in both field measurements and controlled laboratory studies [1,[11][12][13][14]. Field studies conducted so far in the automotive, manufacturing, logistics, and agriculture sectors are essential to assess the effectiveness, appropriateness, safety, and user acceptance of occupational EXOs [15][16][17][18][19]. Exoskeleton studies have not yet considered the industrial textile sector, which involves manual interventions with high-frequency repetitive activities, weight handling, and incongruous upper limb postures. This is determined by the type of production machinery (spinning machines, winders, and twisters), with work positions densely arranged along the longitudinal dimension of the machinery and manual interventions performed on the machinery within a range of variable heights (from 70 cm to 210 cm). The worker's activity mainly involves an upright position, frequent walking alongside the machine, and continuous and repetitive movements of the upper limbs. These movements often require shoulder flexion beyond 80° and involve pinching or grasping threads or spools at different heights of the machine. Employees on shift can be assigned alternating periods of time on different machines, depending on organizational needs. This can result in variable biomechanical risk exposure. Preventive measures in this complex environment focus on risk containment, which is only partially addressed by organizational measures such as task rotation and duration reduction. In this context, the ASE appears to be a cost-effective solution applicable to all workers. This pilot study aims to assess the effectiveness of a commercially available passive ASE during typical tasks in the industrial processing of woolen textiles, based on the analysis of objective metrics derived from biomedical signals estimating muscle fatigue and on subjective feedback given by workers. This information can be used to plan interventions at the company to promote the use of exoskeletons. It can also be used to set up long-term monitoring of the various effects associated with the use of these devices. The study employed a compact system to analyze surface electromyography signals, which is ideal for field studies where real movements can be investigated. This ensures greater relevance compared to laboratory studies, where researchers can only attempt to reproduce the actual conditions being analyzed. Research activity in the field of composite and hybrid materials is driving the development of new wearable sensors [20,21]. These sensors are small and can be placed in contact with the subject's body, providing accurate and realistic measurements. In recent years, there has been a rise in proposed solutions in textile technology for medical purposes, particularly for diagnosis and monitoring outside of the laboratory [22]. This increase is not coincidental.
The next section describes in detail the materials and methods used in this work. In particular, the exoskeleton used during the experimental tests and the study design are described, with reference to the experimental procedure, the muscle activity parameters, the usability and user satisfaction questionnaires, and the statistical methods used for the analysis. The results are presented in Section 4, while their interpretation is provided in the discussion section. The analysis of the effect of the exoskeleton on muscular activity is supplemented by a correlation analysis between the variables collected in the study, which is treated separately for the sake of clarity. Finally, concise conclusions deduced from the statistical analysis are reported. Materials and Methods Figure 1 provides a graphical representation of the experimental procedure described in the following paragraphs. Study Population Our pilot study was conducted in Northern Italy at a textile company as part of an ongoing experiment, in which the employer provided a novel passive ASE for voluntary use by employees during specific work tasks. Among these workers, we selected a small sample for our study. In recruitment, we ensured an equitable distribution of gender and age among workers to increase the representativeness of the sample with respect to the real working population. The sample comprised eight participants, with an equal gender split of four men and four women. Table 1 provides detailed demographic characteristics of the participants. The only criteria for exclusion from participation in the study were the presence of ongoing acute disabling conditions that do not allow the performance of repetitive activities with the upper extremities, or the presence of internal complications that contraindicate the performance of activities involving the lumbopelvic spine. This research project underwent ethical review beforehand and received official approval from the Ethics Committee, identified by the approval number 2732 EC. Each participant received detailed information about the research's objectives and methods and voluntarily confirmed their participation in the study. Exoskeleton The passive upper limb EXO Paexo Shoulder (Ottobock, Duderstadt, Germany) was tested in the experimental study. This specific type of EXO appears in the literature to have already undergone a process of evaluation (assessment in work-like tasks) but not validation (assessment in a controlled laboratory setting with no particular connection to the work task) or field assessment in the sense of real-world assessment [11]. The Paexo Shoulder EXO (2019) consists of a lightweight frame (total weight 1.99 kg) in the form of a backpack worn on the back of the trunk (Figure 2). The design includes supports attached to the arms, leaving the movement of the trunk and upper extremities free. The support torque varies with the arm elevation angle, reaching its maximum at an elevation angle of 90° (upper arm horizontal), and becomes zero when the arm is lowered along the body. The primary goal of these assistive EXOs is the reduction of effort in the upper limbs.
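The elevation-dependent support profile described above (zero torque with the arm lowered, maximum at 90° of elevation) can be captured with a simple illustrative model. The sinusoidal shape and the peak torque value below are assumptions chosen to match that qualitative description, not Ottobock specifications.

```python
import math

def support_torque(elevation_deg: float, tau_max: float = 5.0) -> float:
    """Illustrative passive-support profile: zero with the arm along the
    body (0 deg), peaking with the upper arm horizontal (90 deg).
    tau_max (N*m) is an assumed placeholder, not a Paexo datum."""
    elevation_deg = max(0.0, min(180.0, elevation_deg))
    return tau_max * math.sin(math.radians(elevation_deg))

for angle in (0, 45, 90, 135):
    print(f"{angle:3d} deg -> {support_torque(angle):.2f} N*m")
```

A profile of this kind is what makes passive ASEs attractive for overhead work: assistance is largest exactly where gravitational shoulder torque peaks, yet vanishes when the arms hang at rest.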
Experimental Procedure
The experimental protocol comprised two consecutive sessions for each participant: one involving the use of the ASE and the other without the support (FREE). The order of these sessions was randomized to mitigate potential order effects in the statistical analysis, and each participant acted as their own control, as the intra-individual differences in the variables of interest between the two conditions were considered in the analysis. Each session included three types of dynamic repetitive tasks commonly performed by workers on machinery, specifically twisting and winding machines, which predominantly involve overhead work (Figure 3). During the experimental sessions, the pace of task execution (time per cycle) was not strictly fixed: the worker was instructed to complete the required number of cycles but was free to adopt their usual work pace and preferred technique. The experimentation was conducted on standard machinery, resulting in the same geometric working points for all subjects. For individuals of short stature, reaching the highest points of the machine therefore requires greater postural effort at the shoulders.

Task A involves the worker lifting a spool (≈3-3.5 kg) from a service trolley with both hands and raising it to a height of 200 cm, applying pressure to fix it onto a pin (as shown in Figure 3a). A single cycle consisted of two spools fixed on two adjacent pins (either front or rear row) and was repeated 10 times in a session, with an average duration of 80 s. The shoulders are flexed more than 80° for 25% of the cycle.

Task B involves operating the spinning machine pre-loaded with five pairs of spools (anterior and posterior row). The worker is required to take a pair of threads, slip them with both hands from a spool at a height of 200 cm, and insert them into the lower parts of the machine. During 55% of the cycle, the shoulders are flexed >80° (refer to Figure 3b). Additionally, in each cycle, the right hand operates a lever, applying a medium-light force (≈2.5 kg). Each cycle involves positioning two threads for one spool and lasts approximately 11 s. This process is repeated 10 times throughout the task.

In Task C, the worker transfers a set of a dozen tubes (<100 g each) from a trolley to the top of a twisting machine at a height of 210 cm. Typically, the worker holds the tubes with the left hand and uses the right hand for insertion (Figure 3c). Each cycle (one tube) lasts 2 s, for a total of 12 recorded cycles.

These three tasks vary in physical demands, including cycle duration, load handling, and percentage of cycle time with the shoulders flexed >80° (Table 2). These differences in physical exertion were reflected in the different risk indices for each task, calculated using the OCcupational Repetitive Actions (OCRA) method developed by Occhipinti et al. [23].
The OCRA index categorizes risk levels into at least three classes: no risk, uncertain or very slight risk, and the presence of risk. This categorization is grounded in predicting the likelihood of injury due to exposure levels. The OCRA method entails a comprehensive analysis of the work task, assigning scores to various risk factors, such as repetitiveness, duration, posture, exertion, recovery periods, and complementary factors. Based on these factors, Task A showed an OCRA index of 3.4 for the left side and 8.7 for the right side, placing it in the yellow (borderline) and red (moderate risk) zones, respectively. The OCRA index for Task B (threading multiple threads) was 2.1 for the left side and 2.3 for the right side, corresponding to an acceptable risk window (green zone) and a borderline risk window (yellow). Finally, Task C (tube replacement) had an OCRA index of 1.1 for the left side and 14.0 for the right side, corresponding to a green (acceptable) and purple (high risk) risk window, respectively. In summary, the dominant (right) side was at borderline-to-high risk in Tasks A, B and C.

EMG Feature Extraction
At the start of each session, participants were equipped with adhesive surface electrodes connected to wearable systems to detect electromyography (EMG) activity. EMG data were collected using the assembled EMG Sensor BITalino (r)evolution BLE system (gain 1009, input impedance 9.5 GOhm, CMRR 86 dB and sampling frequency 100 Hz), designed for real-time physiological data recording, coupled with the OpenSignals (r)evolution software (Public Build 2022-05-16; PLUX Wireless Biosignals S.A., Lisbon, Portugal). Three muscles of particular interest were studied: the right anterior deltoid, the right medial deltoid (Figure 4a), and the right Erector Spinae Longissimus (ESL) (Figure 4b). Each plays a crucial role in many human motor activities, including the tasks examined in this protocol. Right-side muscles were monitored since all participants were right-handed. Electrodes were placed on these muscles following the Surface Electromyography for the Non-Invasive Assessment of Muscles (SENIAM) recommendations [24] (Figure 4). EMG signals were processed in MATLAB R2023a (The MathWorks, Inc., Natick, MA, USA) to extract quantitative metrics estimating the muscle fatigue exerted during the tasks. Signals were first manually segmented to identify the single cycles composing the session, and in each window, the Root Mean Square (RMS) and the Peak-to-Peak (P2P) amplitude were extracted from the signal. These parameters are quantitative measures useful for characterizing the signal, and therefore the activity, of the muscles analyzed. Although they cannot be considered an absolute measure of muscle fatigue when not normalized, the analysis in this study was conducted on intra-subject variations due to the use of the exoskeleton. Statistical tests for paired data account for exactly this within-subject comparison, since all other factors are fixed by the experimental setting.
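The authors computed these features in MATLAB; the following Python sketch illustrates the same two per-cycle features on a hypothetical EMG window, assumed to be already band-pass filtered and zero-mean.

```python
import numpy as np

def emg_features(window):
    """RMS and peak-to-peak amplitude of one manually segmented EMG cycle.

    window: 1-D array of EMG samples for a single task cycle
            (assumed band-pass filtered and zero-mean).
    """
    window = np.asarray(window, dtype=float)
    rms = np.sqrt(np.mean(window ** 2))   # proxy for signal "power"
    p2p = window.max() - window.min()     # peak-to-peak amplitude
    return rms, p2p

# Toy usage: an 8 s cycle at the 100 Hz sampling rate reported above.
rng = np.random.default_rng(0)
cycle = rng.normal(0.0, 0.1, size=800)   # synthetic EMG-like noise
rms, p2p = emg_features(cycle)
print(f"RMS = {rms:.3f} mV, P2P = {p2p:.3f} mV")
```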
Usability and Satisfaction Assessment
To assess the user satisfaction and usability of the EXO used in the study, two different but complementary instruments were administered to the study population: the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST) and the System Usability Scale (SUS).

The QUEST, developed by Demers et al. [25], served as the tool for assessing workers' satisfaction with both technical and usability aspects of the EXO, as well as their contentment with the delivery and support they received. We analyzed the responses by categorizing them into two groups based on satisfaction level: low satisfaction (scores ranging from 1 to 3, representing "not at all satisfied" to "moderately satisfied") and high satisfaction (scores of 4 and 5, denoting "satisfied" to "very satisfied"). This categorization enabled a clear distinction of participants' satisfaction levels with the EXO. The QUEST framework operates on the premise that user satisfaction with assistive technology is shaped by personal expectations, perceptions, attitudes, and evaluations. It places greater emphasis on specific features and associated services rather than solely on equipment performance.

The SUS, as elucidated in [26], functioned as a tool for evaluating the usability of the technology. This scale entails user ratings on 10 constructs, expressed through statements concerning product usability, which are assessed on a Likert scale ranging from 1 ("strongly disagree") to 5 ("strongly agree"). The final score, derived from the individual responses and expressed on a 0-100 scale, yields a comprehensive measure of the overall usability of the system. The underlying principle of the SUS is that usability is contingent on context, closely intertwined with the environment, the activity conducted, and the user. It encompasses three pivotal aspects: efficiency, effectiveness, and user satisfaction.

Statistical Analysis
Two-Way ANOVA
To evaluate differences in muscle activation between the ASE and FREE conditions during the performance of the previously described tasks, a two-way analysis of variance (two-way ANOVA) was employed on three distinct datasets, each corresponding to a different muscle district (anterior deltoid, medial deltoid, and ESL).

The two-way ANOVA enables the simultaneous assessment of the impact of two categorical variables on a continuous quantitative variable. Its advantage over the one-way ANOVA lies in testing the relationship between one factor and the dependent variable while accounting for the influence of a second factor. In this instance, the two qualitative variables are 'subject' and 'condition' (task performed with the EXO vs. without the EXO). The quantitative dependent variables are the parameters derived from the EMG signals of each muscle.

The results of this analysis can highlight statistically significant effects of the EXO on muscle effort while accounting for inter-subject differences. The significance level was set at 0.05.

Correlation Analysis
Spearman correlation was used to assess the association between the effects of the EXO and demographic variables of the population. The effects of the EXO on muscle effort were quantified by calculating the difference between the RMS values assessed in the two working conditions (∆RMS). Positive values of ∆RMS indicate reduced muscle effort when using the EXO, while negative values indicate higher EMG signal RMS values when the support is worn by the worker. Spearman correlation is a non-parametric statistical technique used to determine the existence and direction of a monotonic relationship between ordinal or continuous variables that may not adhere to a normal distribution. The Spearman correlation coefficient, denoted as ρ, ranges from −1 to 1. A ρ value near +1 signifies a strong positive monotonic correlation, while a value near −1 indicates a strong negative monotonic correlation. A value near 0 suggests no monotonic correlation between the variables. All statistical analyses were carried out using Jamovi software ver. 2.3.28 (Jamovi Project, Sydney, Australia).
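As a concrete illustration of this ∆RMS-based correlation analysis, the sketch below computes Spearman's ρ between age and ∆RMS for one muscle/task combination; all values are hypothetical and are not the study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-subject RMS values (mV) for one muscle and task.
rms_free = np.array([0.42, 0.51, 0.38, 0.60, 0.47, 0.55, 0.44, 0.49])
rms_ase  = np.array([0.35, 0.40, 0.36, 0.48, 0.41, 0.43, 0.42, 0.38])
age      = np.array([29, 52, 34, 58, 41, 49, 37, 45])  # hypothetical

# Positive delta = lower muscle effort with the exoskeleton worn.
delta_rms = rms_free - rms_ase

rho, p = spearmanr(age, delta_rms)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```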
Analysis of Exoskeleton Effect
The first analysis aimed to verify the impact of the use of the EXO in reducing muscular effort by means of the two-way ANOVA run on the features extracted from the EMG signals.

The boxplots in Figure 5a,b illustrate the distribution of the EMG parameters in Task A. Descriptive statistics (mean ± std) and the results of the ANOVA (p-value) are shown in Table 3. Visual analysis of the boxplots indicates an average reduction in the P2P amplitude and RMS of the anterior deltoid EMG signal when the EXO is utilized. A similar effect is observed in the medial deltoid muscle for the RMS parameter. These observations are substantiated by the statistical analysis: the ANOVA highlights statistically significant differences in both parameters for the anterior deltoid and in the RMS value for the medial deltoid. A further significant difference is found in the P2P amplitude for the ESL muscle; in this case, however, the values are increased when the EXO is used.

For Task B, the distribution of the EMG parameters is reported in Figure 5c,d. Table 4 shows descriptive statistics and ANOVA results in terms of the p-value of the condition-of-use factor (ASE vs. FREE). Statistical analysis confirms the positive impact of the EXO in reducing the P2P amplitude and RMS of the anterior deltoid EMG signal and the RMS of the medial deltoid in Task B as well, while no statistically significant effects are registered for ESL activity. The reductions in signal RMS for the two shoulder muscles are substantial (p-value < 0.001), larger than those shown in Task A.

In Task C, the impact of EXO use is similar to that in the other work tasks. Statistically significant differences between the conditions of use of the ASE are found for the anterior deltoid in both P2P amplitude and RMS, and for the RMS of the signal recorded on the medial deltoid. The distribution of the values, reported in the boxplots of Figure 5e,f, shows the reduction in the EMG features when the EXO is worn by workers. Table 5 shows the statistical descriptors of the distributions and the significance level of the ANOVA test.
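The two-way ANOVA behind the p-values in Tables 3-5 can be sketched as follows. The data are hypothetical (one RMS value per subject and condition; the actual analysis used one value per recorded cycle), and the analysis here uses Python's statsmodels rather than the Jamovi software used by the authors.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data for one muscle and task.
data = pd.DataFrame({
    "subject":   [f"S{i}" for i in range(1, 9)] * 2,
    "condition": ["FREE"] * 8 + ["ASE"] * 8,
    "rms":       [0.42, 0.51, 0.38, 0.60, 0.47, 0.55, 0.44, 0.49,
                  0.35, 0.40, 0.36, 0.48, 0.41, 0.43, 0.42, 0.38],
})

# 'condition' is the factor of interest; 'subject' absorbs
# inter-individual differences, matching the paired design above.
model = ols("rms ~ C(condition) + C(subject)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```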
Assessment of Satisfaction and Usability
The user satisfaction and usability of the EXO were investigated using two scales: QUEST and SUS. QUEST overall scores, assessed by participants on a Likert scale (from 1 to 5, where 1 is "not satisfied at all" and 5 is "very satisfied"), were divided into two groups identifying a low satisfaction level (scores from 1 to 3) and a high satisfaction level (scores from 4 to 5). Regarding the device, as shown in Figure 6, 100% of participants expressed high satisfaction with its durability (mean 4.37; std ± 0.52) and effectiveness (mean 4.75; std ± 0.46). The weight (mean 4; std ± 0.93) and safety (mean 4.12; std ± 0.64) of the device were fully satisfactory for 87.5% of the participants, while ease of use (mean 4.25; std ± 1.16) and comfort (mean 3.62; std ± 0.74) were satisfactory for 75% of the workers. The lowest levels of satisfaction were observed for adjustability (mean 3.5; std ± 0.93) and dimensions (mean 3.5; std ± 0.53). Satisfaction with the supply service (all items) reached full agreement for 87.5% of workers: delivery (mean 4.25; std ± 0.71), assistance (mean 4.37; std ± 0.74), professionalism (mean 4.12; std ± 0.64) and follow-up service (mean 4.25; std ± 0.71).

Figure 7 shows the SUS scores assigned by the workers. The highest score was 97.5 (out of a maximum of 100) and the lowest was 57.5 (mean 79.06, std ± 14.45). In percentile terms, 87.5 is at the 75th percentile, 82.5 at the 50th percentile, and 66.9 at the 25th percentile. Sixty-two percent of the scores (5 out of 8 subjects) are above 80, indicating predominantly good perceived usability of the EXO.

Correlation Analysis
Based on the previously presented findings regarding the impact of EXO usage on surface EMG parameters, it can be inferred that the parameter most indicative of the aid's benefits is the RMS, which serves as a representative measure of the signal's 'power'. Consequently, we opted to include in the correlation analysis the difference (∆) between the RMS values observed in trials without the EXO and those calculated from signals acquired during EXO use. Positive ∆RMS values indicate a reduction in muscle effort during the task with the EXO, while negative values suggest the opposite effect.

The correlation analysis examined the ∆RMS value for each muscle, in every task and for each subject, in relation to various subject characteristics. These characteristics encompassed demographic variables such as age, Body Mass Index (BMI), length of service, and months of EXO use. Additionally, the analysis considered variables from the QUEST satisfaction questionnaire (including its individual components), the usability questionnaire (SUS), and the biomechanical risk classification (OCRA).

Individual Variables
Table 6 shows the correlation coefficients determined between the data describing the effects of the EXO on muscle activity (∆RMS) and the individual variables. Absolute correlation values higher than 0.7 are marked with an asterisk. The only correlation surpassing the defined threshold of 0.7 is the one linking the subject's age to the reduction in effort of the anterior deltoid muscle in Task B (Figure 8). The other correlation indices fall below this threshold and are therefore not considered significant within the exploratory nature of the analysis.
OCRA Classification
Finally, we examined the correlation between the effects of the EXO on muscle activity (measured by the ∆RMS parameter) and the OCRA classification across the three distinct tasks. The inquiry aims to ascertain whether a relationship exists between the EXO's effects and the biomechanical risk associated with each task. Notably, Spearman's correlation coefficient cannot be employed for this analysis, given that the OCRA classification is task-specific and not relative to individual subjects.

Table 7 displays the OCRA classification values for each task, exclusively for the right limb, where the electromyographic data were recorded. Additionally, Figure 9 presents representative boxplots illustrating the ∆RMS, in percentage values, categorized by muscle and task.

Discussion
As suggested in the literature, even though there is no standardized procedure for assessing the effectiveness of EXOs, it is important to understand the evaluative metrics, both objective (e.g., surface electrodes and motion sensors) and subjective (the user's perception of and feedback regarding the EXO). This understanding should be contextualized with respect to the type of posture and tasks analyzed [27].

In our study, we compared the conditions of EXO use (ASE) and non-use (FREE) during the performance of three industrial tasks involving repetitive upper limb activities and overhead work, carried out by industry workers. These tasks were selected because they are routinely performed by workers during their shifts and involve repetitive activities and overhead work, factors associated with shoulder disorders.

The participants in the study were volunteers selected from a sample of workers in the sector who had previously agreed to the employer's proposal to experiment with exoskeletons for a period of time, in order to test their usefulness in performing repetitive and overhead tasks. In selecting the workers, all of whom are experts in the sector, we took care to include an equal number of men and women and different age groups. This responds to the criticism that, in most published field studies on exoskeletons, the samples tested are not representative of the real working population [11].

The sample of real workers provides reliable results, and a fair representation of genders makes it possible to consider aspects of usability and comfort related to the different body conformations of men and women. Furthermore, in the textile sector, women predominate in the workforce owing to greater manual dexterity in highly repetitive and precise activities. Gender, together with age and BMI, interacts with occupational risk factors to determine the likelihood of developing WMSDs, even in the long term [8].

Certainly, conducting tests in real-world conditions rather than in controlled laboratory environments may impose constraints on the type of sensors used and the quality of the signals. This is due to environmental and temporal factors, such as cluttered and noisy surroundings, the extent of workers' movements, interference, limited time for equipment setup and calibration, and unforeseen events. In planning our field study in a textile production department with several noisy machines and a microclimate of around 26 °C and 70% humidity, the primary concern was obtaining "clean" biometric signals. In practice, EMG recording posed no issues.
Analysis of Exoskeleton Effect
The purpose of this analysis is to explore the differences in muscle activity during repetitive tasks under two conditions: using an ASE and without it (FREE). In this section, we examine the changes observed in the electromyographic parameters of specific muscles, namely the anterior deltoid, medial deltoid, and ESL. Statistical analysis has revealed significant trends and patterns in muscle response, contingent on the condition. These findings offer valuable insights into the effectiveness and potential implications of incorporating EXOs in the work environment.

In the analysis of Task A, it is noteworthy that the parameters derived from the EMG signal of the anterior deltoid muscle exhibit a statistically significant reduction when the EXO is employed. Similar effects are observed in the medial deltoid muscle; however, the differences between conditions are less pronounced. Notably, only the RMS shows a significant difference, implying that the condition influences the overall muscle signal strength but does not necessarily affect the P2P amplitude. Nevertheless, RMS is the metric most closely associated with muscle contraction force; thus, it can be argued that the use of the EXO reduces the force exerted by the shoulder region in carrying out the spool-loading task.

Similar results emerge from the statistical analysis of the EMG data recorded during the execution of Tasks B and C. In general, the anterior deltoid muscle is notably affected by the use of the EXO, as highlighted by the statistically significant reduction in both characteristic parameters of the EMG signal. The activity of the medial deltoid muscle also exhibits a statistically significant variation between the two usage conditions, with and without the EXO. However, as in Task A, the reduction is statistically significant only for the RMS parameter and not for the P2P amplitude. In contrast, the right ESL shows a limited response to the use of the EXO, with no significant variations in EMG parameters observed between the two working conditions. Only in Task A is the P2P amplitude of the ESL EMG signal significantly increased, albeit with a small percentage variation. These observations find further support in Figure 9, depicting the distribution of the variations in the RMS parameter of the EMG signal with the use of the EXO. The ∆RMS consistently averages above zero for both the anterior deltoid and medial deltoid muscles across all analyzed tasks, while the distribution centers around zero for the ESL muscle.

The study therefore demonstrated a distinct effect of the EXO in reducing the level of muscle activity in the two shoulder muscles, while the impact on the ESL muscle is negligible, as would reasonably be expected.

According to the recent review by Moeller (2022) [19], studies examining the effects of EXOs in occupational settings mostly focus on passive EXOs, primarily investigating muscle activity (shoulder, upper limb, and body) and secondarily exploring other kinematic, physiological, or usability parameters. According to the review, the Paexo device has been considered in three studies [28-30], which demonstrated (in laboratory assessment) its effectiveness in reducing shoulder muscle activity without negatively impacting trunk activity or compromising performance, eliciting favorable judgments from users.
Appreciable effects on the shoulder muscles are nevertheless reported with other types of passive EXOs. Several studies in the literature demonstrate a reduction in the muscle activity of the anterior and medial deltoid during lifting tasks [31-34], work with the arms at shoulder level [35,36], overhead work [35-43], or more specific tasks [35,42,44,45]. The evidence diminishes when analyzing the muscles of the back. Increased activity of the iliocostalis lumborum muscle has been observed with the Fortis EXO during overhead work [37] and with the WADE EXO in lifting and maintaining the arms at shoulder level [33]. The latissimus dorsi muscle appears to be relieved of activity when a passive EXO is used, as reported in studies using the Paexo EXO [30,35]. In our study, the actual reduction in shoulder muscle activity achieved through the use of the passive upper limb EXO aligns with the literature findings, as does the increase in, or lack of evidence for, variations in the right ESL muscle [46].

Correlation Analysis
The correlation analysis was conducted to understand whether there was a relationship between the extent of the benefits provided by the EXO in reducing muscular effort and individual variables. It is essential to highlight that this analysis is based on a limited number of measurements (eight subjects), which limits the robustness of the reported findings. Nevertheless, they provide a valuable initial indication, warranting further investigation.

Interestingly, there is no significant correlation between the effects of the EXO and the duration of device use (Table 6). This suggests that the benefits, or more generally the effects, of the EXO are independent of the frequency of use and are evident even in subjects using it for the first time. This does not exclude the possibility that longer familiarization periods with the device could further improve individual outcomes [19]. Conversely, a significant positive correlation exists between the recorded benefits of the EXO on the anterior deltoid muscle in Task B and age (refer to Figure 8), indicating a more pronounced effect in older subjects. While no significant correlation is observed for BMI, it is noteworthy that the corresponding Spearman indices tend to be predominantly negative, implying that the impact of the EXO is potentially smaller as the body mass index increases. This aspect should be analyzed more thoroughly to ascertain which individual parameters interact with the degree of effectiveness of the EXO.
A noteworthy consideration relates to the possible relationship between the effectiveness of the EXO in reducing muscular effort and the biomechanical load associated with the occupational task, as assessed by the OCRA index. The three work tasks analyzed in our study have different characteristics in terms of physical demands, but all involved repetitive work and overhead activity. The diverse features of the three tasks led us to hypothesize a more pronounced effect of the exoskeleton in the more demanding tasks. However, this hypothesis was not supported by the analysis, which instead revealed a reduction in muscle activity (anterior and medial deltoid) in all three tasks in the ASE condition compared with the condition without the exoskeleton (FREE). In other words, the analyzed data do not provide evidence of a more pronounced EXO effect for the heavier tasks (Tasks A and C) compared with the less biomechanically demanding task (Task B). This finding is significant, as it demonstrates the broad effectiveness of the exoskeleton across various types of repetitive and dynamic tasks characterized by overhead work.

Workers' Opinions
The level of satisfaction expressed by the workers with the device (Figure 6) provides important clues about the type of exoskeleton and the salient features that are positively perceived by workers in the textile sector and, along with the high usability score (Figure 7), indicates a favorable reception and more widespread use in the future. The perception of effectiveness and durability, together with the appreciation of the weight and safety of the device, are important prerequisites for valid use of an ASE in the analyzed production context and indicate that the type of ASE analyzed is suitable for the work gestures performed on textile machines. The lower levels of satisfaction expressed in relation to the comfort, size, and adjustability of the device should guide the development of ASE models with improved ease of use.

Considering the Paexo, the perceived ease of use was generally good, particularly in relation to the simple design of the device, as well as its effectiveness and versatility. The light weight of the design (1.9 kg) compared with other commercial ASEs (up to 5 kg), the absence of rigid structures that hinder trunk and upper limb movements, and its slim structure combined with adjustable straps ensure good stability across different types of work activities.

According to the most recent review of field studies [10], usability was moderate to high for all types of EXO-ASE evaluated. However, the overall acceptance of exoskeletons in the occupational context is a complex phenomenon related to technology-induced self-efficacy beliefs, which in turn are modulated by the ability to reduce effort and the attributed utility of the device [47].

Significance of ASE as Preventive Intervention for WMSD
The literature still lacks evidence of a reduction in WMSDs associated with the use of ASEs, although an actual reduction in muscle activity is considered beneficial and favorable in this regard. This gap should be filled by longitudinal or case-control studies, as currently only one author [48] reports evidence of a reduction in the need for medical care among workers using EXOs for long periods.
In general, published studies support the use of exoskeletons as a useful intervention for WMSD control but also highlight the need to optimize the match between the device, activity, and user to maximize beneficial effects and minimize undesired outcomes [9].

Beyond the prevention of WMSDs in all workers, the adoption of exoskeletons can help ensure the continued employment of older individuals or those who have suffered injuries: despite a reduction in their physical abilities, they can maintain satisfactory performance with the support of the device. This aspect is particularly relevant in the European and Italian work context and holds significant importance from a health, economic, and social perspective. However, as of now, no study has examined the effectiveness of exoskeletons in workers with musculoskeletal disorders in terms of actual work reintegration and the reduction in the risk of leaving employment or developing long-term disabilities [19].

Conclusions
This pilot study, carried out on a numerically small sample of workers in the wool textile industry under real working conditions, confirms the applicability in this context of measuring EMG signals with wearable sensors and demonstrates the effectiveness of the ASE in reducing shoulder muscle activity, regardless of the type of task performed. The type of ASE considered appears suitable for the work context and is judged positively by workers. These preliminary results may support further field studies aimed at obtaining more robust evidence about the effectiveness of ASEs.

In the literature, there is still no comprehensive understanding of the impact of exoskeleton use on workers' health, primarily due to the absence of long-term studies in real-world conditions [19], as well as methodological research limitations [46,49]. In general, study participants are young and novice subjects (i.e., not actual workers) and almost exclusively male. The sample sizes considered are limited, typically preventing studies from reaching the conventional threshold of 80% statistical power for detecting differences. Other limitations include the consideration of specific workload indicators only and of only the body areas directly supported by the device. The time periods analyzed in the studies are quite short (from a few seconds to 45 min for simulated tasks and, at most, a work shift for direct field observations), making it impossible to extrapolate valid long-term conclusions.

Our study took place in real working conditions and included both male and female subjects of different age groups. The analyzed activities represented phases of cyclic tasks actually performed by workers during their shifts, with all participants proficient in the technique. The short observation period we adopted (a few cycles of repetitive tasks) was chosen because of the pilot nature of the study, for organizational convenience (operations were conducted during actual production activity), and to avoid lengthy recordings given initial doubts about obtaining clean signals in a highly disturbed environment. Long-term effects resulting from the use of the ASE cannot be deduced from our study.
According to the review by Baldassarre et al. (2022) [10], some authors report an increase in the level of discomfort perceived by workers during exoskeleton use, especially in dynamic tasks. This effect is associated with friction, pressure, or thermal discomfort caused by the device under specific working conditions. Additionally, female workers report a higher level of discomfort regarding the fit of the device, owing to anthropometric features. In our study, we did not specifically record workers' subjective perception of effort in the ASE condition compared with FREE, as the literature suggests that this indicator can be subject to a placebo effect [42] and does not reflect actual bodily benefits, which would become evident only after a period of at least 6 months [50]. Furthermore, the reduction in perceived effort produced by an ASE appears to be greater after static tasks than after dynamic tasks [44]. However, by itself, it does not affect usability or the worker's willingness to use the device unless it is accompanied by the perception of effectiveness [51].

Our field pilot study focuses on the industrial textile sector, which had not yet been considered in the literature on occupational exoskeletons. Interventions aimed at preventing WMSDs and maintaining employment for older workers, or those with disabilities, in sectors with demands similar to the textile industry can benefit from the results obtained in our study, given the demonstrated effectiveness in reducing muscular load on the shoulder area.

Regarding future research on exoskeletons applied in occupational contexts, randomized controlled studies are recommended first of all [52], including prospective studies with numerically substantial samples [11], to provide robust evidence of efficacy for preventing WMSDs and to assess the health and safety levels associated with device use [53]. Various body regions should be considered (including those not directly supported by the exoskeleton) to exclude the onset of disorders or adverse events, and the studied samples should exhibit variability in terms of age, gender, and health status [46]. Potential unexpected effects of exoskeleton use, related to mobility, postural control, and safety aspects [54], should be monitored over a sufficiently long period. The adoption of standardized protocols [1,39] would facilitate comparisons between different models. In general, results from biomechanical analyses associated with the use of exoskeletons should contribute to advancing standards in occupational health and safety, promoting the development and implementation of specific tools for assessing the risks associated with the use of such devices in work settings. The detection and monitoring of clinical and subjective aspects should proceed in tandem, with the involvement of occupational physicians, to establish targeted and effective adoption programs for exoskeletons in productive contexts, including the identification of unsuitable workers or tasks not suited to their use [10].

Figure 2. Paexo Shoulder (Ottobock, Duderstadt, Germany), worn by a participant of the study in the rest position before carrying out the tests.
Figure 3. A participant performing the activities using the exoskeleton. (a) In Task A, the worker lifts a spool and fixes it to a pin on the higher part of the machinery; (b) in Task B, the worker takes a pair of threads and inserts them into the lower part of the machinery while operating a lever; and (c) in Task C, the worker places a dozen tubes on the top of the twisting machine.
Figure 4. Electrode placement on the anterior and medial deltoid muscles (a) and the ESL (b), following SENIAM guidelines. This standardized positioning facilitates reliable EMG signal acquisition for the assessment of dynamic upper limb and back muscle activities.
Figure 6. Satisfaction responses in relation to the delivery service (top) and the EXO (bottom). The abscissa shows the percentage of subjects declaring themselves slightly satisfied (in blue) and very satisfied (in orange), respectively.
Figure 7. For each worker interviewed (on the abscissa), the score obtained on the SUS usability questionnaire is shown (maximum score 100).
Figure 8. Distribution of age values and effect of the EXO on the anterior deltoid muscle in Task B, for which the correlation is significant.
Figure 9. Representative boxplots of ∆RMS values for each muscle, divided by task.
Table 1. Demographic characteristics of the study population.
Table 3. Descriptive statistics (mean ± std) and p-values from the ANOVA test of the electromyographic parameters for the muscles involved in Task A.
Table 4. Descriptive statistics (mean ± std) and p-values from the ANOVA test of the electromyographic parameters for the muscles involved in Task B. p-values indicating statistically significant differences are highlighted in bold.
Table 5. Descriptive statistics (mean ± std) and p-values from the ANOVA test of the electromyographic parameters for the muscles involved in Task C.
Table 6. Spearman correlation between the effects of the EXO on muscle activity (∆RMS) and the demographic variables.
Table 7. OCRA task classification values, relating only to the right limb, on which the electromyographic data were recorded.
Epidemiological, behavioural, and clinical factors associated with antimicrobial-resistant gonorrhoea: a review

Antimicrobial-resistant Neisseria gonorrhoeae is a global public health problem in the 21st century. N. gonorrhoeae has developed resistance to all classes of antibiotics used for empirical treatment, and clinical treatment failure caused by extensively resistant strains has been reported. Identifying specific factors associated with an increased risk of antimicrobial-resistant N. gonorrhoeae might help to develop strategies to improve antimicrobial stewardship. In this review, we describe the findings of 24 studies, published between 1989 and 2017, that examined epidemiological, behavioural, and clinical factors and their associations with resistance to a range of antimicrobial agents used to treat gonorrhoea. Antimicrobial-resistant N. gonorrhoeae is more common in older than younger adults and in men who have sex with men compared with heterosexual men and women. Antimicrobial-resistant N. gonorrhoeae is less common in some black minority and Aboriginal ethnic groups than in the majority white population in high-income countries. The factors associated with antimicrobial-resistant gonorrhoea are not necessarily those associated with a higher risk of gonorrhoea.

Introduction
Antimicrobial-resistant Neisseria gonorrhoeae (AMR-NG) is a global public health challenge 1. The World Health Organization (WHO) estimates that, in 2012, more than 78 million new infections with gonorrhoea occurred worldwide 2. Of these, more than 90% were in low- and middle-income countries. In high-income countries, including England 3, the USA 4 and Australia 5, N. gonorrhoeae is the second most commonly reported bacterial sexually transmitted infection (STI).

N. gonorrhoeae primarily infects the mucosal epithelium, causing urethritis in men, cervicitis in women, and rectal and pharyngeal infection in men who have sex with men (MSM) and in women 6. Untreated infection that spreads to the upper genital tract can cause epididymo-orchitis and pelvic inflammatory disease, ectopic pregnancy, and tubal infertility 6. Infection in pregnancy is associated with preterm birth and low birthweight and can cause neonatal conjunctivitis if transmitted during delivery. Rarely, N. gonorrhoeae can spread systemically, causing arthritis, endocarditis, and septicaemia. The inflammatory response to N. gonorrhoeae in the genital tract increases the infectivity of HIV. All of these complications will become more frequent if antimicrobial resistance renders gonorrhoea untreatable.

Gonorrhoea shares some epidemiological characteristics with other bacterial STIs 7. It is associated with higher numbers of sex partners 8 (which are more common in MSM than in heterosexual adults 9,10), younger age 3, and lower socioeconomic position 11, and, in high-income countries, it is associated with membership of some black and ethnic minority groups 11.

N. gonorrhoeae is a bacterium with an extensive capacity for genetic mutation and for plasmid exchange of resistance genes throughout its life cycle 1. This remarkable biological characteristic has helped the bacterium to survive and to evolve or acquire resistance to many different classes of antibiotics over the years 1. Unemo and Shafer have comprehensively reviewed antimicrobial treatments for gonorrhoea and the emergence of resistance up to 2014 1. Penicillin was first used to treat gonorrhoea in 1943.
Initially, chromosomally mediated resistance emerged, so higher and higher doses were needed to cure gonorrhoea. In 1976, the first plasmid-mediated penicillinase-producing strains were reported from South East Asia and West Africa 12,13. In the 1990s, quinolones, particularly the fluoroquinolone ciprofloxacin, replaced penicillin as the first-line treatment for gonorrhoea 14. Resistance was reported initially from countries in South East Asia and had spread internationally by the early to mid-2000s. Third-generation, extended-spectrum cephalosporins (ESCs) (mostly oral cefixime and injectable ceftriaxone) have been recommended for first-line use since the early 2000s. Resistance to ESCs was reported first in Japan 15, and strains with high-level resistance to ESCs spread to Europe 16-19. Currently, the WHO recommends dual therapy with ceftriaxone and azithromycin for the first-line treatment of gonorrhoea, with the intention of ensuring cure rates of greater than 95% of infections 20. Clinical treatment failure and high-level resistance to this regimen were reported in 2016 17. Resistance has also emerged to other drugs, such as tetracyclines, spectinomycin, and azithromycin, that have not been used widely as first-line treatments.

Antimicrobial resistance hampers strategies to control and prevent gonorrhoea 21. Understanding the factors that are associated with AMR-NG could help to identify groups at high risk of having resistant infections, provide more focused management, and assist antimicrobial stewardship. In this review, we describe the findings of studies that have examined associations between epidemiological, behavioural, and clinical factors and the presence of AMR-NG.

Search strategy
We searched Medline (Ovid, Wolters Kluwer) from 1946 until August 2017 without language restrictions, using combinations of keywords for the organism, AMR, and associated factors: Neisseria gonorrhoeae or gonorrhoea, drug resistance, risk factors, sexual behaviour, health services, or epidemiology. We selected studies that compared epidemiological, behavioural, or clinical factors in people with or without AMR-NG. We recorded information about study characteristics, study population, antimicrobials, and findings from each study in an evidence table (Appendix 1).

Characteristics of included studies
Of 129 articles identified, 24 publications were included 14,22-44. Appendix 1 summarises the main characteristics of each study. All included studies used a cross-sectional (16 studies) or case-control (eight studies) study design. Nine studies were nested in surveillance systems for antimicrobial resistance 25,28,30,32,37-39,44, and 14 reported a multivariable analysis 23-25,27,28,30,31,35,36,39,40,42-44.

The evidence that we found about factors associated with AMR-NG comes mainly from regions and countries that do not have the highest incidence of gonorrhoea (Table 1). Of the 24 included studies, 19 came from Europe 23,25,27-29,35,37,40,42-44 or the Americas 22,26,30,33,38,39,41, although the WHO European Region and the whole WHO Region of the Americas together account for only 20% of people with incident gonorrhoea worldwide 2.
These regions include countries with the best-established surveillance systems for STIs in general and systematic surveillance systems for antimicrobial resistance, such as the Gonococcal Resistance to Antimicrobials Surveillance Programme (GRASP) in England and Wales 23, the US Gonococcal Isolate Surveillance Program (GISP) 30, and the Australian Gonococcal Surveillance Programme (AGSP) 32. These systems can collect demographic and epidemiological data so that associations with AMR-NG can be assessed regularly. Our search did not find any studies about potential risk factors for AMR-NG from Africa, where the prevalence and incidence of gonorrhoea are high 2, or from the Latin America and Caribbean, South East Asia, or Eastern Mediterranean regions, where surveillance for STIs and AMR is also limited. Although AMR-NG strains with resistance to penicillin (penicillinase-producing), spectinomycin, fluoroquinolones, and ESCs were first reported from countries in the Western Pacific region, such as Japan, South Korea, and the Philippines 45, we found only three studies in the region that examined factors associated with AMR-NG: two in China 24,36 and one in the Philippines 31.

Figure 1 shows the distribution over time of studies that have examined potential risk factors for AMR-NG, according to antibiotic class. Broadly speaking, these follow the periods in which each antimicrobial class was a recommended treatment. The first studies, published in 1989, examined risk factors for resistance to penicillin and to tetracycline, which was beginning to be used to treat chlamydia infections and non-specific genital infections 22,33. The next, and largest, group of studies focused on the identification of factors potentially associated with resistance to fluoroquinolones 14,26,29,31,39-41, followed by macrolides 25,27,37,38 and ESCs 23-25,27,28,35-37,42.

Factors potentially associated with antimicrobial-resistant Neisseria gonorrhoeae
We describe epidemiological, behavioural, and clinical factors that have been examined in association with AMR-NG (summarised in Appendix 2). We describe as 'risk factors' those factors associated with an increased risk or odds of AMR-NG, based on the effect size and its 95% confidence interval (CI), where available. Our overall interpretation takes into account the size of each study and the type of statistical analysis. Where findings between studies are inconsistent, we give more weight to findings from larger studies with multivariable analyses that control for important potential confounding factors. For most factors examined, there were too few studies to determine whether associations differ between antimicrobials. In observational studies, confounding of observed associations by measured or unmeasured factors is likely.
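As a reminder of how the effect sizes cited below are obtained, the following sketch derives an odds ratio and its Wald 95% CI from a 2×2 table; the counts are purely hypothetical and do not come from any of the included studies.

```python
import numpy as np
from scipy.stats import norm

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """Odds ratio and Wald CI from a 2x2 table.

    a = exposed & resistant,    b = exposed & susceptible,
    c = unexposed & resistant,  d = unexposed & susceptible.
    """
    or_ = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
    z = norm.ppf(1 - alpha / 2)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

# Hypothetical counts: 30/100 exposed vs 15/100 unexposed with AMR-NG.
or_, lo, hi = odds_ratio_ci(30, 70, 15, 85)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```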
Epidemiological factors
Age. Younger age is a risk factor for gonorrhoea; the peak age groups for diagnosis are 20-24 years in both women and men in the USA 4 and 20-24 years in women and 25-35 years in men in England 3. Amongst MSM, the peak age at infection is somewhat older 3. In contrast, AMR-NG was more common in adults aged 25 years or older than in younger people in most studies that examined age as a risk factor for resistance to tetracyclines, fluoroquinolones, and ESCs (Appendix 2). This finding might have resulted from the inclusion of large numbers of MSM; in two large studies, age was no longer associated with decreased susceptibility to ESCs in multivariable analyses adjusted for the composition of the study population 23,42. In several studies, however, older age remained associated with AMR-NG in multivariable analyses, including ciprofloxacin resistance in women in the Netherlands 44; reduced susceptibility to ceftriaxone in heterosexual men and women, but not MSM, in England and Wales 28; ciprofloxacin resistance in Spain 35; ciprofloxacin and cefixime but not azithromycin resistance in a European Union surveillance network 25; and probable resistance to ceftriaxone but not to penicillin or tetracycline in China 24. Studies that found no association, or an association with younger age, were small or methodologically flawed 36.

Sex. Gonorrhoea surveillance reports show higher numbers of reported cases in men than in women 3-5, even after the high proportion of infections diagnosed in MSM is taken into account 3,4. The higher frequency of symptomatic infections in men than in women results in higher levels of attendance at healthcare settings 6. We found 11 studies that compared AMR-NG between heterosexual men and women 14,23-25,27-29,33,35,36,41. Three publications from two studies with multivariable analyses found AMR-NG more commonly in heterosexual men than in women 23,24,28; heterosexual men had about twice the odds of NG with reduced susceptibility to ceftriaxone compared with women.

Same-sex sexual partnerships in men. Gonorrhoea is more commonly reported in MSM than in men who have sex with women only or in women 46, and rates of reported gonorrhoea are increasing more rapidly in MSM than in men who have sex with women only or in women 46,47. Most studies that have examined this factor 14,23,35,44 also found that AMR-NG was more common in MSM than in men who have sex with women only, including most studies with multivariable analyses 23,35,42,44. In the Netherlands, cefotaxime resistance was more common in MSM than in men who have sex with women only (age-adjusted OR 2.9, 95% CI 1. …).

AMR-NG might also be common in MSM because the pharynx is thought to be a reservoir for strains that have acquired genes conferring resistance to ESCs from commensal Neisseria species (see the 'Anatomical site of infection' subsection of the 'Clinical factors' section) 1. MSM can have gonococcal infection in the pharynx and rectum, resulting from oral and anal sexual intercourse, as well as in the urethra 6,48. Pharyngeal and rectal gonorrhoea are usually asymptomatic and can remain untreated if these anatomical sites are not sampled 49. Anatomical site of infection is considered as a risk factor below.

Racial or ethnic group. Surveillance reports show that rates of gonorrhoea diagnoses are several times higher in some minorities, such as African American, black Caribbean, and indigenous Aboriginal ethnic groups, than in the majority white population in countries such as the USA 4, the UK 3, the Netherlands 50, Canada 26, and Australia 5. We found eight studies that examined racial or ethnic group as a risk factor 14,23,26-28,32,39,44. AMR-NG was not more common in black and Aboriginal ethnic groups.
Ciprofloxacin resistance was less common in people from black, Hispanic, and other ethnic groups than in whites in a multivariable analysis in the USA 39 and less common in people from Aboriginal groups in Canada in a univariable analysis 26. Decreased susceptibility was less common in people from ethnic minority groups in multivariable analyses in England and Wales 23,28. In Australia, surveillance data from the Northern Territory and Western Australia showed a much lower proportion of penicillinase-producing NG isolates in remote areas (2%), in which the population is almost entirely Aboriginal, than in urban areas (14-19%), where the population is mixed 32. However, in the USA, ciprofloxacin resistance was slightly more common in people from Asian and Pacific Island ethnic groups than in whites 14,39. Assortative sexual mixing patterns, in which people are more likely to have partners from their own ethnic group than from other ethnic groups 51, are likely to contribute to differential rates of gonorrhoea infection and perhaps also of AMR-NG.

Socioeconomic position. Whilst higher rates of reported gonorrhoea are strongly associated with lower socioeconomic position, possibly as a marker of poor education and awareness of STIs 11 and limited access to healthcare, we found only one study that had examined the association with AMR-NG. In that study, in China, higher income levels were associated with lower levels of plasmid-mediated tetracycline resistance (adjusted OR 0.34, 95% CI 0.14-0.18) but not with ceftriaxone or penicillin resistance 24.

Behavioural factors
Multiple sex partners. Gonorrhoea has a short duration of infectiousness, and its persistence in a population relies on transmission in groups with high rates of sexual partner change 8. The probability of acquiring AMR-NG, however, is not necessarily associated with higher numbers of sexual partners when other factors are taken into consideration. In several studies, a higher number of sexual partners was associated with AMR-NG in univariable analysis 22-24,27,28,35,38,40. Of the studies that conducted multivariable analyses 23,27,35, only one, in the Netherlands, found that the association persisted, with an attenuated OR 27.

Sex with partners abroad. Travel abroad has been reported in some studies as a risk factor for STIs 52,53, presumably because people take more risks when on holiday, such as having unprotected sex with casual partners 54. Since AMR-NG often arises first in countries in South East Asia and the Western Pacific, travellers, including sex tourists, who have unprotected sex in these regions are assumed to import AMR-NG into their home countries 45,55. We found 10 studies that investigated travel or sexual contact abroad as a risk factor for AMR-NG 14,23,27-30,35,38,39,43; four of them examined fluoroquinolone resistance in the late 1990s and early 2000s 14,30,39,43. Ciprofloxacin resistance was more common in those reporting travel abroad, or sex with a partner who had travelled abroad, in univariable analyses from Hawaii 14 and California 39 but not in multivariable analysis 39. A national study in the USA found higher levels of fluoroquinolone resistance in heterosexual men with a history of travel but lower levels in MSM 30. Another study found an association, in multivariable analysis, with sexual contact outside Switzerland 43. The variables in these studies do not specify exposures in particular places and might underestimate associations.
Supportive evidence about the international spread of AMR-NG comes from gene sequencing studies of some highly resistant N. gonorrhoeae clonal strains 18. More detailed studies of people with gonorrhoea and their sexual networks, with detailed phenotypic and genotypic characterisation, would contribute to identifying the origin and spread of resistance.

Exchanging sex for money. Commercial sex workers and their clients in some countries are at high risk of acquiring STIs, including gonorrhoea 1. We included seven studies that considered commercial sex and AMR-NG 14,22,31,33,38,39,44. One of these studies, conducted among female commercial sex workers in the Philippines from 1996 to 1997, found that, in multivariable analysis, high-level resistance to ciprofloxacin was associated with living in the capital, Manila, and with having recently started sex work 31. One study in the Netherlands found that, in multivariable analysis, female sex workers had a much higher risk of ciprofloxacin-resistant gonorrhoea than other women (adjusted OR 25.0, 95% CI 7.7-78.2) 44. Studies in the USA did not distinguish clearly between female or male sex workers or clients 14,22,38,39; exposure to commercial sex work was associated with AMR-NG in univariable analysis in only two studies 22,38.

Alcohol and drug use. Only four of the included studies 22,24,38,39 looked at these factors. One study in China found that alcohol use was associated with tetracycline resistance in multivariable analysis (adjusted OR 1.69, 95% CI 1.08-2.64) 24. In the USA, one study found that having had a sex partner who received drugs or money for sex was associated with azithromycin resistance (crude OR 34.0, 95% CI 2.3-1651) 38, but another study found a much weaker association with ciprofloxacin resistance in univariable analysis and no association in multivariable analysis 39. These factors warrant more detailed investigation.

Clinical factors
Anatomical site of infection. MSM and commercial sex workers can harbour N. gonorrhoeae in the pharynx 1,6. We found three studies 25,27,28 that considered the anatomical site of infection, all of which conducted multivariable analyses. In the Netherlands, ceftriaxone resistance was more common in the pharynx than in the urethra amongst MSM (adjusted OR 2.52, 95% CI 1.64-3.89) but not amongst heterosexual women and men 27, and in England and Wales, a slight decrease in susceptibility to ceftriaxone was more common in the pharynx in heterosexual women and men (adjusted OR 1.84, 95% CI 1.44-2.34) but not in MSM 23. In Euro-GASP, isolates from the pharynx were not more likely than genital isolates to show AMR-NG, but cefixime and ciprofloxacin resistance were reported to be less common in anorectal than in genital isolates 25.

Co-infection with HIV and other sexually transmitted infections. People infected with NG are at higher risk of acquiring HIV infection 56. Being co-infected with HIV was associated with resistance to ESCs or ciprofloxacin in univariable but not multivariable analysis in three studies in the Netherlands and in England and Wales 23,42,44. In another study in the Netherlands, MSM with HIV infection were less likely than HIV-negative MSM to have azithromycin resistance in multivariable analysis (adjusted OR 0.72, 95% CI 0.54-0.96) 27. Co-infection with Chlamydia trachomatis is also common in people with gonorrhoea.
In studies conducted by GRASP in England and Wales 23,28, people who were not co-infected with chlamydia were more likely to have AMR-NG in multivariable analyses. There is no definitive explanation for this finding.

Recent antibiotic use. Antimicrobial use exerts selection pressure for the emergence of resistance 1. Current or recent antimicrobial use was examined in five studies in the USA, but findings were inconsistent 14,22,33,38,39. Ciprofloxacin resistance was more common in female sex workers in the Philippines who were taking antimicrobials in univariable but not multivariable analysis 31. Studies that did not find associations with past antimicrobial use might have asked questions that were not specific enough about particular antimicrobials.

Other risk factors

Additional factors (such as gonorrhoea or STI history, lifetime sex partners, partnership type, more than one infected site, and year of isolation) that were reported in small numbers of studies are listed in Appendix 2 but are not described in detail here.

Discussion

In this review, AMR-NG was more common in older than in younger adults, in heterosexual men than in women, in MSM than in men who have sex with women only, and possibly in people of lower socioeconomic position. People from some black ethnic groups in the USA and Europe and from Aboriginal ethnic groups living in Canada and Australia were less likely to have AMR-NG than the white majority population. Very few studies of risk factors for AMR-NG have been done in sub-Saharan Africa, Latin America, or the parts of South East Asia and the Western Pacific where gonorrhoea is most common.

The main strength of this review is that we searched for studies worldwide, irrespective of language and year of publication, and extracted the same information from all studies. The main limitation is that it was not entirely systematic. Our search of Medline might have missed studies, particularly from low- and middle-income countries, non-English language journals, and grey literature. Therefore, the findings of the review are most applicable to factors associated with antimicrobial-resistant gonorrhoea in high-income countries in Europe, North America, and Australia. We did not follow a protocol and, although we selected factors of interest in advance, we did not report all study findings comprehensively. Nevertheless, our interpretation took into account studies that found no association with a potential risk factor, and we distinguished between associations found only in univariable analyses and those found consistently in multivariable analyses that control for potential confounding factors.

This review shows that some risk factors for AMR-NG are not necessarily those associated with a higher risk of gonorrhoea infection itself (Appendix 3). Of note, whilst the risk of gonorrhoea in heterosexual adults is highest amongst younger people with high numbers of sexual partners, AMR-NG appears to be more common in older adults and, after other factors were controlled for, high numbers of sexual partners were not consistently associated with AMR-NG. AMR-NG was also less likely amongst people from black minority and Aboriginal ethnic groups living in countries where the majority of the population is from white ethnic groups. These findings appeared to be consistent across several different antibiotic classes.
We cannot provide definitive explanations for these findings, but they offer some empirical support for the results of a mathematical modelling study, which found that a high treatment rate, rather than the rate of partner change, predicts the spread of AMR-NG 57. The higher prevalence of AMR-NG in MSM could result from a combination of factors, including a high risk of gonorrhoea infection at older ages than in heterosexuals 3, frequent oral sex resulting in pharyngeal infections 6, and high attendance rates at sexual health clinics 58.

We did not find that recent travel abroad, commonly reported as a risk factor for AMR-NG, was consistently associated with resistance. Some studies might not have found an association because a history of recent travel, as asked about in the US GISP, is too non-specific. In addition, associations might differ over time: they may be found when resistance to a particular class of antimicrobials, or a specific gonococcal clone, starts to spread, but not at a later time point. Evidence from gene sequencing studies, with supportive evidence from epidemiological studies, strongly suggests that antimicrobial-resistant gonococcal strains emerge in parts of South East Asia and are spread by international travellers 1. Researchers have found more consistent evidence of the role of travel for other organisms: a systematic review of cohort studies showed high levels of acquisition of multidrug-resistant Enterobacteriaceae in travellers returning from countries in southern Asia 59.

Conclusions and recommendations for future research

This review found a limited number of studies that investigated factors associated with AMR-NG, and few studies from low- and middle-income countries, where both gonorrhoea and antimicrobial resistance are most common. For this reason, we could not provide a comprehensive global picture of the factors that increase the risk of AMR-NG. The factors associated with antimicrobial-resistant gonorrhoea are not necessarily those associated with a higher risk of gonorrhoea. Future research should investigate in more detail the apparent associations with increased risk of AMR-NG in older age groups and amongst travellers, and with decreased risk of AMR-NG in black and Aboriginal groups living in high-income countries. Improvements in surveillance systems for antimicrobial resistance, including enhanced surveillance that collects information about key factors such as age, same-sex partnerships, and travel-associated sexual partnerships, or sentinel surveillance in specific groups, might allow earlier identification of emerging resistance and of risk factors, enabling more intensive follow-up and prevention interventions in groups at high risk of AMR-NG 21. Better knowledge about modifiable risk factors for AMR-NG could help to mitigate the spread of resistance to ESCs, the last recommended empirical treatment for gonorrhoea.
Simultaneous Decoding of Eccentricity and Direction Information for a Single-Flicker SSVEP BCI

The feasibility of a steady-state visual evoked potential (SSVEP) brain-computer interface (BCI) with a single flicker stimulus for multiple-target decoding has been demonstrated in a number of studies.

Introduction

The steady-state visual evoked potential (SSVEP), as one of the most widely used responses in electroencephalogram (EEG)-based brain-computer interfaces (BCIs), has received sustained attention [1-7]. When participants attend a periodic visual stimulus, SSVEPs are elicited at the stimulation frequency and its harmonics [8]. Correspondingly, by encoding different targets with distinct frequencies, BCI systems can be realized via real-time frequency recognition of the recorded SSVEPs [3,9]. To date, frequency-coded SSVEP BCIs have achieved significant progress, featuring a relatively large number of simultaneously decodable targets and high communication speed [5,6], and are thereby promising for real-life applications such as letter typing.

When flicker stimuli are presented at different spatial locations in the visual field, distinct SSVEP responses are elicited [10]. This phenomenon, known as retinotopic mapping [11,12], has gained increasing interest in recent BCI studies. Based on the retinotopic mapping of SSVEP, while pilot BCI studies have mainly focused on designing visual spatial patterns to increase the possible number of BCI targets [13] or to enhance the signal-to-noise ratio (SNR) of SSVEP [14], efforts have recently been devoted to decoding the spatial information embedded in SSVEP responses directly [15,16]. Unlike the traditional frequency-coded SSVEP BCI paradigm, in which SSVEP responses are modulated by targets with different frequencies [3,9], it is feasible to design a spatially-coded SSVEP BCI by encoding responses by targets at different spatial locations. Indeed, previous studies have demonstrated that overtly attending to targets at distinct spatial directions relative to a centrally displayed flicker stimulus can evoke separable SSVEP responses [15,16]. Moreover, the differences in responses are sufficient to support the decoding of directions at a single-trial level, achieving a dial [15] and a spatial navigation task [16] and suggesting the feasibility of a single-stimulus, multi-target SSVEP BCI. Compared with frequency-coded BCIs, in which multiple stimuli are required to encode multiple targets, this single-stimulus design can considerably simplify the stimulation setup and the user interface of BCIs [17,18]. In addition, given that the stimulus always appears in the peripheral visual field, this single-flicker SSVEP BCI paradigm is expected to reduce the visual burden [16], indicating its potential as a good candidate for practical applications.
However, previous spatially-coded SSVEP studies only utilized spatial directions to encode targets; the resulting nine- or four-command designs have limited the potential applications of spatially-coded BCIs compared with conventional frequency-coded SSVEP BCIs. For example, in a drone control task, previous designs are only sufficient to control the moving directions, whereas it would be possible to send more commands, such as speeding up, stopping, or climbing, if more command channels could be achieved. One way to extend the feasible application scenarios is to include visual eccentricity information to increase the number of targets. Indeed, SSVEP responses have been observed to decrease as the eccentricity of the stimulus from the fixation spot increases [19], providing neurophysiological evidence in support of eccentricity decoding in SSVEP responses. Joint decoding of eccentricity and direction information is expected to substantially increase the number of targets by making better use of the visual spatial information.

Nevertheless, the eccentricity information can contribute to extending the encoding dimension only if the spatial patterns remain separable even at a large eccentricity. Specifically, the weaker SSVEP responses at increasing eccentricities may at the same time reduce the accuracy of direction classification, thus influencing the BCI performance in a complex way. Although previous studies suggest relatively stable spatial patterns of visual motion-onset responses with increasing eccentricities [17,18], efforts are still needed to evaluate how visual eccentricity modulates SSVEP responses and whether this modulation can contribute to decoding visual spatial information at a single-trial level.

In the present study, the feasibility of a spatially-coded BCI encoding targets with both eccentricity and direction information simultaneously was evaluated. Eight directions (left, left-up, up, right-up, right, right-down, down, and left-down) and two eccentricities (2.5° and 5°) relative to one flicker stimulus were employed to encode 16 targets. During the experiment, participants were instructed to direct their overt attention to one of the targets while EEG was recorded. Then, SSVEP responses modulated by different visual directions and eccentricities were analyzed, and the 16-target classification performance was evaluated in an offline manner. Our results suggest the feasibility of the simultaneous decoding of visual eccentricity and direction information based on SSVEP.

Participants

Twelve participants (five females, aged from 23 to 28 years, mean 24.8 years) with normal or corrected-to-normal vision participated in the experiment. All participants gave informed consent before the experiments and received financial compensation for their participation. The study was approved by the local Ethics Committee at the Department of Psychology, Tsinghua University.

Visual Stimulation

The visual stimulation in the experiment is illustrated in the top panel of Figure 1. A monitor at a viewing distance of 50 cm was used to present the stimulation. A white disk (radius = 2.5°) was centrally displayed on the screen (indicated as the gray disk in the top panel of Fig. 1). During the experiment, the disk flickered at 12 Hz with a sampled sinusoidal stimulation method [20], forming a flicker stimulus to elicit SSVEPs.
The stimulus lasted 4000 ms in total. A small red square (0.25°x0.25°) appeared on the screen to indicate where the participants should direct their overt attention during the experiment. There were 16 possible targets arranged around the central disk at eight directions (left, left-up, up, right-up, right, right-down, down, and left-down) and two eccentricities (2.5° and 5°). Since a previous study observed a rapid drop of SSVEP responses when the stimulus was presented beyond 5° away from the central fixation spot [19], 2.5° and 5° were chosen conservatively to evaluate the feasibility of eccentricity decoding in the present study. Eccentricities larger than 5° will be explored in further studies.

Experimental Procedure

The experiment included ten blocks in total. The duration of the inter-block intervals was controlled by the participants themselves, with a lower limit of 30 seconds set in the experimental program. In each block, 16 trials corresponding to the 16 attention targets were presented in a random order. As demonstrated in the bottom panel of Figure 1, in each trial one red square was displayed for 1000 ms at the beginning to cue the to-be-attended target, followed by a 4000-ms flicker stimulus. The red square remained visible for the whole flickering duration for the participants to attend. The inter-trial interval varied from 1000 to 1500 ms, during which participants could blink or swallow. The Psychophysics Toolbox [20,21] based on MATLAB (The MathWorks, Natick, MA, USA) was employed to present the stimulation.

EEG Recordings

EEG was recorded continuously at a sampling rate of 1000 Hz with a SynAmps2 amplifier (Compumedics NeuroScan, USA). Sixty-four electrodes were recorded according to the international 10-20 system, with a reference at the vertex and a forehead ground at AFz. Electrode impedances were kept below 10 kOhm during the experiment. The experiment was carried out in an electromagnetically shielded room.

Data preprocessing

Continuous EEG data were first band-pass filtered to 1.5-80 Hz, and a 50-Hz notch filter was used to remove the line noise. Next, the EEG data were segmented into 4000-ms trials after the onset of the stimulus, resulting in 10 trials for each of the 16 attentional targets. Then, a set of 9 electrodes covering the parietal-occipital area (PO5/6/7/8, O1/2, Pz, POz, and Oz), where SSVEPs typically show maximal responses, was chosen for further analysis.

SNR evaluation

In order to quantify the SSVEP response strength when attending to targets at different directions and eccentricities, a newly proposed method [22], which evaluates the SSVEP SNR of multi-channel EEG data while considering multiple harmonics, was employed in the present study. Here, the stimulus frequency, as well as its second and third harmonics, was included in the SNR calculation and the follow-up BCI classification. First, for each subject, the segmented EEG data were averaged for each attentional target. Then, the SSVEP signal was defined as the projection of the averaged EEG data onto the subspace of the stimulus frequency and its harmonics, while noise was defined as the residual after the projection. The SNR, defined as the ratio between signal and noise, was calculated as the index of the responses for each attentional target with formula (1). Details of the mathematical derivation can be found in [22]. Here, T is the 9-channel averaged EEG data, and ϕ is the reference signal.
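To make the projection-based SNR concrete, here is a minimal sketch of the computation described above, assuming sine and cosine reference signals at the stimulus frequency and its second and third harmonics; variable names and shapes are illustrative rather than taken from [22].

import numpy as np

def ssvep_snr(T, fs, f0, n_harmonics=3):
    """SNR of multi-channel SSVEP: power of the projection of the averaged
    EEG onto the stimulus-frequency subspace divided by the residual power.

    T  : (n_channels, n_samples) trial-averaged EEG
    fs : sampling rate in Hz
    f0 : stimulation frequency in Hz
    """
    n = T.shape[1]
    t = np.arange(n) / fs
    # Reference signals phi: sines and cosines at f0 and its harmonics
    phi = np.vstack([fn(2 * np.pi * k * f0 * t)
                     for k in range(1, n_harmonics + 1)
                     for fn in (np.sin, np.cos)])        # (2*H, n_samples)
    # Least-squares fit of each channel by the harmonic references
    signal = (T @ np.linalg.pinv(phi)) @ phi             # projection onto subspace
    noise = T - signal                                   # residual after projection
    return np.sum(signal ** 2) / np.sum(noise ** 2)

# e.g. 9 parieto-occipital channels, 4 s at 1000 Hz, 12 Hz flicker:
# snr = ssvep_snr(avg_eeg, fs=1000, f0=12)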
Finally, a two-way repeated-measures analysis of variance (RMANOVA) with two within-subject factors, i.e., direction (left, left-up, up, right-up, right, right-down, down, and left-down) and eccentricity (2.5° and 5°), was conducted to determine their possible effects on the SNR of SSVEP. P values smaller than 0.05 after Greenhouse-Geisser correction were considered statistically significant. Statistical analyses were performed with SPSS (22.0.0, IBM, Armonk, New York, USA).

BCI Classification

In the offline performance evaluation, the single-trial 4000-ms EEG data were used for BCI classification without any artifact rejection. A canonical correlation analysis (CCA) based classification algorithm [23] was employed to capture the distinct SSVEP patterns, as reported in [15,16]. Note that all offline classifications were evaluated with a 10-fold cross-validation procedure. First, in order to evaluate how directions and eccentricities contribute to the classification performance, 8-direction classification at each eccentricity and 2-eccentricity classification in each direction were conducted.

In the training phase, the K-trial EEG data recorded while the participant was attending to target location c were concatenated as X_c; here K is 9 for each target, as 90% of the EEG data were used as the training set. Then, the reference signal Y was obtained by replicating ϕ (see formula (2)) K times. CCA was employed to find spatial filters Wx_c and Wy_c (c = 1, 2, ..., N) maximizing the canonical correlation ρ = [ρ_1 ... ρ_M] between X_c and the reference signal Y. Here, N is the target number: for the 8-direction classification N = 8, and for the 2-eccentricity classification N = 2. M is the number of canonical correlation coefficients and was set to 6, the same as reported in [15,16]. Then, for each trial in the training set, a 1x(N*M) feature vector was composed by calculating the canonical correlations for all N targets and concatenating them as [r_1 r_2 ... r_N], which was used to train a support vector machine (SVM) classifier using the LIBSVM toolbox [24].

In the testing phase, the EEG trial to be classified is filtered with Wx_c, and the correlation coefficients with the corresponding reference signals Wy_c ϕ are computed (c = 1, 2, ..., N). The concatenated correlation coefficients [r_1 r_2 ... r_N] constitute the feature vector of the testing trial, which is then used to recognize the target with the classifier. After decoding the directions and eccentricities separately, a 16-target classification, which decoded the visual eccentricity and direction information simultaneously, was conducted with the above-mentioned CCA method; here, N = 16.

Finally, in order to evaluate how the visual eccentricity information influences the joint classification of directions and eccentricities, three conditions were compared: individual filter, 2.5° filter, and 5° filter. The individual filter condition means that the spatial filters Wx_c and Wy_c (c = 1, 2, ..., 16) were trained with data from their respective eccentricities, corresponding to the results in Table 1. The 2.5° filter condition indicates that all classification accuracies were calculated using spatial filters trained with data at an eccentricity of 2.5°, even for targets at an eccentricity of 5°. The 5° filter condition is defined analogously.
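The following sketch outlines the training-phase feature extraction described above, with scikit-learn's CCA and SVC standing in for the MATLAB/LIBSVM implementation used in the paper; array shapes and function names are assumptions for illustration.

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

def reference(f0, fs, n_samples, n_harmonics=3):
    t = np.arange(n_samples) / fs
    return np.vstack([fn(2 * np.pi * k * f0 * t)
                      for k in range(1, n_harmonics + 1)
                      for fn in (np.sin, np.cos)])       # (2*H, n_samples)

def train_filters(trials_per_target, f0, fs, M=6):
    """trials_per_target: list of N arrays, each (K, C, n_samples).
    Fits one CCA model per target on the K concatenated training trials."""
    models = []
    for X_c in trials_per_target:
        K, C, n = X_c.shape
        X = X_c.transpose(1, 0, 2).reshape(C, K * n)     # concatenate K trials
        Y = np.tile(reference(f0, fs, n), (1, K))        # replicate phi K times
        models.append(CCA(n_components=M).fit(X.T, Y.T))
    return models

def features(trial, models, f0, fs):
    """1x(N*M) feature vector of canonical correlations for one (C, n) trial."""
    feats = []
    for cca in models:
        U, V = cca.transform(trial.T, reference(f0, fs, trial.shape[1]).T)
        feats.extend(np.corrcoef(U[:, m], V[:, m])[0, 1]
                     for m in range(U.shape[1]))
    return np.asarray(feats)

# A linear SVM is then trained on the per-trial feature vectors, e.g.:
# clf = SVC(kernel="linear").fit(train_features, train_labels)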
Results

As illustrated in Figure 2, a typical SSVEP response over occipital and parietal areas was found across conditions. When attending to targets at different directions and eccentricities, distinct SNR topographies of SSVEP were elicited, with a shift of the response over the parietal-occipital areas. Specifically, when participants attended to the target on the right side, the flicker stimulus appeared in their left visual field, leading to a right-dominant response, and the opposite relation held for the target on the left side, indicating a contralateral response. In addition, the SSVEP spatial patterns remained similar with increasing eccentricity of the flicker stimulus. The 8-direction classification accuracies were 75.5±14.9% and 59.4±15.0% at 2.5° and 5°, respectively; classification accuracy was thus reduced for targets at the larger eccentricity. As shown in Figure 5, the 2-eccentricity classification achieved accuracies of 89.6±15.0%, 91.7±10.7%, 89.6±13.3%, 84.2±11.0%, 91.7±13.0%, 93.8±9.38%, 87.9±16.0%, and 90.4±9.00% for left, left-up, up, right-up, right, right-down, down, and left-down, respectively. There was no significant difference between any pair of directions after paired t-tests with Bonferroni correction.

The results so far demonstrated the feasibility of decoding directions and eccentricities separately. The 16-target classification results, which decoded directions and eccentricities at the same time, are summarized in Table 1. When using the 4-s data, the mean accuracy across participants was 66.8±16.4%, well above the chance level for the 16-target classification problem (i.e., 6.25%). Note that individual differences were found in the classification accuracies, which ranged from 38.8% to 90.0%. Accuracies decreased for shorter data lengths of 2 s and 3 s, but even the 2-s data still provided accuracies well above chance level.

This study also took a closer look at how the visual eccentricity information contributed to this spatial-coding paradigm. First, when attending to targets at increased eccentricities, the reduced SNRs and decreased 8-direction classification accuracies indicated a weaker response at the larger eccentricity. Furthermore, this decrease in SSVEP responses could itself be a contributing feature for eccentricity decoding, supported by the 2-eccentricity classification accuracies ranging from 84.2% to 93.8% across the 8 directions. The 8-direction classification at 5° still achieved an accuracy of 59.4±15.0%, much higher than chance level. More importantly, compared with classifications using spatial filters trained at the corresponding eccentricities, the 16-target classification accuracies, though significantly decreased, remained comparable when using spatial filters trained on data at an eccentricity of 5°.
Figure 2. Topographies of the SNR of SSVEP from a representative participant (sub 2). The inner circle represents the eccentricity of 2.5° and the outer circle the eccentricity of 5°. All SNRs were normalized into z-values, so that positive and negative values indicate SNRs above and below the mean level across electrodes, respectively, in z units.

Figure 5. Boxplot of the 2-eccentricity classification accuracy at each of the eight directions. The black dashed line indicates the chance level of classification.

Figure 6. Boxplot of accuracies as influenced by spatial filters. "Individual filter" means the spatial filters were trained with data from their respective eccentricities. The "2.5° filter" label indicates that all classification accuracies were calculated using spatial filters trained with data at an eccentricity of 2.5°; the "5° filter" label is defined analogously. The black dashed line indicates the chance level of classification.

Figure 7. Confusion matrix for the 16-target classifications. L, LU, U, RU, R, RD, D, and LD are short for left, left-up, up, right-up, right, right-down, down, and left-down. Rows show true labels and columns show predicted labels. The "2.5° filter" label means the confusion matrix was calculated using spatial filters trained with data at an eccentricity of 2.5°; the "5° filter" label is defined analogously. "Individual filter" means the spatial filters were trained with data from their respective eccentricities.

Figure 8. Classification accuracies as a function of data length. Error bars indicate standard error. The black dashed line indicates the chance level of classification.

These classification accuracies provide evidence in support of weaker yet stable spatial patterns across eccentricities. Taken together, our results suggest that the decreased SSVEP responses and the relatively stable spatial patterns provide the neural basis for the joint decoding of visual eccentricity and direction information, supporting the feasibility of visual eccentricity information as an encoding dimension in spatially-coded BCIs. It should also be noted that the feasibility of transferring spatial filters across eccentricities indicates the potential to reduce training time.

Table 1. Summary of the 16-target classification accuracy when using 4-s SSVEP data.
Accuracies obtained using the different spatial filters are shown in Figure 6: 66.8±15.7%, 62.3±15.5%, and 61.0±16.4% for the individual filter, 2.5° filter, and 5° filter conditions, respectively, showing a decreasing trend across the three conditions. Paired t-tests with Bonferroni correction were used for the comparisons. No significant difference in accuracy was found between the individual filter condition and the 2.5° filter condition (t(11) = 2.20, p = 0.956). Although the accuracies obtained in both of these conditions were significantly higher than those from the 5° filter condition (individual filter > 5° filter, t(11) = 2.20, p < 0.001; 2.5° filter > 5° filter, t(11) = 2.20, p < 0.001), it should be noted that the absolute accuracies are comparable.
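For readers who want to reproduce this style of comparison, the sketch below runs paired t-tests with Bonferroni correction over the three filter conditions; the per-participant accuracies are randomly generated stand-ins matching the reported means and standard deviations, not the actual data.

import numpy as np
from scipy import stats

# Hypothetical per-participant accuracies (n = 12) for the three conditions
rng = np.random.default_rng(0)
individual = rng.normal(66.8, 15.7, 12)
filt_25 = rng.normal(62.3, 15.5, 12)
filt_50 = rng.normal(61.0, 16.4, 12)

pairs = {"individual vs 2.5 deg": (individual, filt_25),
         "individual vs 5 deg": (individual, filt_50),
         "2.5 deg vs 5 deg": (filt_25, filt_50)}
n_tests = len(pairs)
for name, (a, b) in pairs.items():
    t, p = stats.ttest_rel(a, b)                 # paired t-test, df = 11
    p_bonf = min(p * n_tests, 1.0)               # Bonferroni correction
    print(f"{name}: t(11) = {t:.2f}, corrected p = {p_bonf:.3f}")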
Evolution of structural and magnetic parameters of nickel nanotubes under irradiation with Fe7+ ions

This work is devoted to the behavior of nickel nanotubes under swift heavy ion irradiation. High-energy irradiation initiates damage processes inside nanostructures and can cause the appearance of new phases with interesting properties. To understand the basic principles of the evolution of structural and magnetic parameters of nanostructures under the influence of high-energy processes, a detailed study of nickel nanotubes irradiated with various fluences of Fe7+ ions was carried out.

Introduction

Ion irradiation is an attractive method that allows both determining the limits of applicability of nanostructures under extreme conditions and investigating the changes in their structure that occur during the interaction of nanomaterials with swift heavy ions (SHI). High-energy irradiation can not only worsen the physical properties of materials but also makes it possible to obtain nanostructures with novel properties [1-3]. Thus, one of the important problems of radiation modification is the controlled formation of defects in the crystal structure in order to improve the functional properties of the material, for example, increasing the strength characteristics or enhancing the magnetic parameters of nanostructures [4,5]. The acquired properties depend directly on the degree of radiation damage, which is related to the irradiation conditions (mainly energy and fluence) and the type of incident ions [6,7]. Depending on the irradiation energy, dynamic processes associated with the deformation of the atomic structure of nanomaterials can be activated, as well as the formation of metastable phases, which can lead to partial amorphization, structural deformation, and implantation. The irradiation fluence makes it possible to evaluate the nature of the interaction of incident ions with matter, the mechanisms of defect formation, and phase transformations [8].

Today, great attention is paid to experimental investigations of ion irradiation of nanowires and nanotubes (for example, [9-12]) with different types of irradiation at both low (<1 MeV/nucleon) [13,14] and high energies (>1 MeV/nucleon) [15-18]. These papers report changes in the structure, composition, and conductivity of irradiated nanostructures, but the correlation between structural changes and magnetic properties has not been well established. To close this gap, in this work the morphological and structural features of nanostructures under irradiation are considered, on the example of nickel nanotubes irradiated with Fe7+ ions with fluences up to 5 x 10^11 cm-2, and the correlation between the structural changes and the magnetic characteristics is established.

Materials and methods

The technique of template synthesis was used, with an electrolyte of NiSO4*6H2O (100 g/l), H3BO3 (45 g/l), and C6H8O6 (1.5 g/l) at a potential difference of 1.75 V and a temperature of 25 °C [19,20]. The templates were porous PET films with a pore density of 4.0 x 10^7 cm-2, a thickness of 12 microns, and pore diameters of 380 ± 20 nm. Irradiation of the nanotubes contained in the PET templates was carried out at the DC-60 heavy ion accelerator of the Astana branch of the Institute of Nuclear Physics. Fe7+ ions with an energy of 1.5 MeV/nucleon and fluences ranging from 1 x 10^9 to 5 x 10^11 cm-2 were used as bombarding beams.
The initial and irradiated nanotubes were studied by X-ray diffraction (XRD) analysis and scanning electron microscopy, as described in [21-23]. The magnetic characteristics of the samples were studied on a universal measuring system (an automated vibrating sample magnetometer, "Liquid Helium Free High Field Measurement System", Cryogenic Ltd) in magnetic fields of ±2 T at a temperature of 300 K.

Results and discussion

Arrays of nickel nanotubes were synthesized in the pores of the PET templates with a length of 11.7 ± 0.2 um and diameters of 390 ± 20 nm. The general view of a nickel nanotube array after removal from the polymer template is shown in Figure 1a. The synthesized arrays of nickel nanotubes in the templates were irradiated with Fe7+ ions at different fluences. Figure 2 shows the dynamics of the changes in the morphology of the nanostructures as a result of irradiation. As can be seen from the presented data, an increase in the irradiation fluence produces no visible structural changes and no formation of cracks or amorphous regions of the kind observed under light ion irradiation, which indicates the stability of the nickel structures to irradiation with Fe7+ ions. An increase in the irradiation fluence does lead to a change in the surface morphology, as well as to the formation of small spherical outgrowths on the surface of the nanotubes, which can be caused by the migration of defects to the grain boundaries and, accordingly, to the surface of the nanotubes.

Figure 3 shows the changes in the X-ray diffraction patterns of the studied samples as a result of irradiation. According to X-ray phase analysis, the initial samples are polycrystalline Ni structures with a face-centered cubic crystal lattice, space group Fm-3m (225). According to the XRD analysis, irradiation with Fe7+ ions leads to the appearance of low-intensity peaks in the diffraction patterns that correspond to the phase of an FeNi solid solution. The appearance of new FeNi peaks indicates the implantation of Fe ions into interstices or lattice sites. An increase in the irradiation fluence leads to an increase in the intensities of the peaks characteristic of FeNi, which indicates an increase in the content of the impurity phase in the structure. This increase may be due to the similar chemical properties and ionic radii of the Fe and Ni atoms, which leads to the substitution of Fe atoms for Ni atoms in the lattice, followed by the formation of stable compounds of the substitutional solid solution phase. The appearance of the impurity phase leads to distortion and deformation of the structure, as evidenced by an increase in the asymmetric distortion of the diffraction peaks and their shift towards smaller angles. The shift of the diffraction peaks indicates a change in interplanar spacing as a result of the introduction of iron ions into the crystal lattice with the subsequent substitution of nickel atoms and the formation of impurity inclusions. The formation of impurity inclusions leads to a sharp decrease in the intensity of the (220) peak, which indicates a reorientation of crystallites as a result of the introduction of Fe ions. The formation of the impurity phase, as well as the reorientation of crystallites, is caused by the large energy losses of the Fe7+ ions on the nuclei.
This leads to the formation of a large number of primary knocked-out atoms from the lattice sites, which are replaced by Fe ions with the subsequent formation of a new phase, leading to an increase in the crystal lattice parameter. The lattice parameter, dislocation density, and crystallinity degree were calculated by the method described in [24]. In turn, the changes in distortions and deformations in the structure alter not only the structural parameters but also the degree of perfection of the crystal structure and the density of dislocation defects (Table 1). The increase in the dislocation density indicates a deformation of the crystal structure and a subsequent increase in the concentration of disordered regions in the crystal lattice.

Table 1. Lattice parameter, dislocation density, and crystallinity degree of the Ni nanotubes versus irradiation fluence.

Fluence (cm-2)   Lattice parameter (A)   Dislocation density   Crystallinity degree (%)
10^9             3.5060 ± 0.0015         2.61                  89
10^10            3.5081 ± 0.0015         4.41                  87
5x10^10          3.5097 ± 0.0012         4.55                  86
10^11            3.5106 ± 0.0025         7.06                  84
5x10^11          3.5298 ± 0.0022         7.08                  82

To determine the effect of irradiation with Fe ions on the main magnetic characteristics of the Ni nanotubes, the dependences of the magnetization on the magnetic field, M(H), were measured for field directions parallel and perpendicular to the nanotube axis (Figure 4). From the hysteresis loops, the main magnetic characteristics (the coercivity H_c and the squareness ratio of the hysteresis loop M_R/M_S) were determined; they are presented in Table 2. The hysteresis loops have a form typical of soft ferromagnetic materials. Some discrepancy in the shape of the hysteresis loops for the different directions of the magnetic field relative to the nanotube axis indicates the presence of magnetic anisotropy. For example, for the pristine samples, the coercivity for the field oriented parallel to the nanotube axis (H_C|| lies within 50 Oe) is lower than that for the perpendicular direction of the field (H_C(perp) = 81 Oe).

Irradiation of the nickel nanotubes with Fe ions leads to a slight change in both the coercivity and the squareness ratio of the hysteresis loop (Table 2). The coercivity increases up to a fluence of 10^10 cm-2 and decreases beyond this fluence for both directions of the applied field (perpendicular and parallel). The change in the squareness ratio has a rather unexpected character: M_R/M_S for the magnetic field parallel to the nanotube axis decreases, whereas for the perpendicular field it increases. This different behavior of M_R/M_S with increasing irradiation fluence for the two field directions is most likely due not only to structural changes but also to the nature of the defect distribution inside the nanotubes. As we showed in [25,26], the passage of ions through a nanotube promotes stretching of the crystallites along the nanotube with a simultaneous flow of defects to the nanotube surface. Two stable magnetic states are usually realized in nanotubes: the magnetic moments are aligned along the nanotube axis or curl along the nanotube walls. For samples (b)-(f), the number of defects in the direction along the nanotube axis decreases (through the drawing-out of the crystallites), while the number of defects near the nanotube surface increases owing to this defect flow. These structural changes cause a decrease in M_R/M_S for the magnetic field parallel to the nanotube axis and an increase in M_R/M_S for the perpendicularly applied field.
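To make the extraction of the coercivity and squareness ratio from the M(H) loops concrete, here is a minimal sketch operating on one branch of a hysteresis loop; the tanh-shaped synthetic branch is a stand-in for the measured data, and the resulting H_c value is illustrative only.

import numpy as np

def loop_parameters(H, M):
    """Coercivity Hc and squareness Mr/Ms from one branch of an M(H) loop.

    H, M : arrays for the descending branch, H running from +Hmax to -Hmax.
    """
    Ms = np.max(np.abs(M))                       # saturation magnetization
    Mr = np.interp(0.0, H[::-1], M[::-1])        # remanence: M at H = 0
    Hc = np.interp(0.0, M[::-1], H[::-1])        # coercivity: H where M = 0
    return abs(Hc), Mr / Ms

# Synthetic soft-ferromagnet branch mimicking Hc of a few tens of Oe:
H = np.linspace(2e4, -2e4, 2001)                 # +/- 2 T expressed in Oe
M = np.tanh((H + 60.0) / 300.0)                  # shifted branch, Hc ~ 60 Oe
print(loop_parameters(H, M))                     # ~ (60.0, 0.20)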
Conclusion

The structural and morphological parameters and the magnetic characteristics of Ni nanotubes with diameters of 390 ± 20 nm, irradiated with Fe7+ ions at an energy of 1.5 MeV/nucleon with fluences up to 5 x 10^11 cm-2, were studied. The changes in the main crystallographic characteristics after irradiation with Fe ions are due to the appearance in the structure of defects (point defects, dislocations, and average stress), amorphous zones, and the formation of a new phase, FeNi. The dynamics of the main magnetic characteristics of the Ni nanotubes were determined and analyzed from the standpoint of the structural changes. It was shown that the change in magnetic properties is connected not only with structural changes such as defect formation, amorphization of the structure, and formation of the FeNi phase, but also with the nature of the defect distribution inside the nanotubes.
A 60 GHz Millimeter-wave Antenna Array for 3D Antenna-in-Package Applications

This paper presents a 60 GHz millimeter-wave (mm-wave) antenna array using a standard printed circuit board (PCB) for 3D antenna-in-package (AiP) implementation. The array consists of 4 microstrip patch elements, differentially fed with an open-stub matching feed network to enable 3D integration. The 1x4 finite antenna array with ball grid array (BGA) and silicon (Si) interposer operates from 58.46 to 62.14 GHz with 3.6 GHz instantaneous bandwidth and low mutual coupling of less than -25 dB, and achieves a realized gain of about 10.51 dBi. The array is capable of scanning down to ±45° and provides low cross-polarization levels of -40 dB. The fabricated multilayer 1x4 array consists of two substrates and one bondply layer carrying the antennas, a via-to-open-stub matching network, and a differential to single-ended corporate feed network for the measurement. A prototype with a differential to single-ended corporate feed network was fabricated and tested, showing a gain of about 10.02 dBi at the operating frequency with >=90% radiation efficiency. Such gain and efficiency make the presented design a leading candidate for 3D AiP applications.

I. INTRODUCTION

The need for fast data processing and high data rates is increasing exponentially and exceeds the limits of currently available technologies. The radio frequency (RF) spectrum below 6 GHz is highly congested, with limited resources and degradation in the quality of services [1,2]. To overcome this problem, radio applications are moving towards the millimeter-wave (mm-wave) spectrum for wider bandwidth and high-speed communications. In particular, mm-wave bands are attractive for mobile communications, WiGig applications, short-range and long-range satellite communications, and future autonomous and vehicular communications [3-5]. However, mm-wave systems suffer from high penetration loss, high path loss, and attenuation due to rain and severe weather conditions, and are susceptible to fabrication errors because of their small feature sizes. The large path loss can be overcome by utilizing high-gain antenna arrays and by performing beamforming.

Notably, the most widely implemented mm-wave antennas are dipole, patch, grid, or loop antennas. These antennas are simple to design and small in size, which makes them ideal candidates for system-on-chip (SoC)/system-in-package (SiP) integration [6-8]. On-chip antennas suffer from high losses and surface-mode excitation due to the proximity between the antennas and the Si substrate. Further, the high relative permittivity and low resistivity of Si substrates suppress the gain and efficiency of the antennas [6]. The high cost and low yield of SoC drive designers towards a system-in-package (SiP) approach, where antennas and other integrated circuit (IC) components are integrated together in the same package. However, SiP requires the use of bond wires, which are lossy and lead to path loss at such high frequencies. Recently, a low-loss 3D integration method to interconnect heterogeneously stacked ICs within a SiP was presented in [9,10]. Instead of using lossy bond wires, Through-Silicon Via (TSV) structures were employed to enable 3D vertically stacked ICs. This resulted in significant size reduction and higher efficiency compared to the traditional 2D SiP implementation.

FIGURE 1. Low loss 3D SiP phased array radio.
Concurrently, the potential of this novel 3D SiP method cannot be realized without a low-profile, highly efficient antenna that can be vertically integrated on the package. The implementation of mm-wave antennas has so far been challenging due to high losses, expensive fabrication processes, and inaccuracies at such high frequencies. Mm-wave antenna arrays fabricated using the low-temperature co-fired ceramic (LTCC) process have been widely studied [11-13]. A simpler fabrication method is the printed circuit board (PCB) process. In [14], substrate-integrated waveguide (SIW) slot antennas with a multi-layer circular patch array printed on a discontinuous dielectric substrate were designed using PCB. However, the requirement for an additional fabrication process and the high risk of structure deformation present significant limitations. An organic package with a multi-layer phased antenna array designed using air-cavity technology was presented in [15]. However, the introduction of air cavities leads to fabrication challenges and increases the risk of delamination during bonding or soldering at high temperatures. In [16], a PCB-based aperture-coupled phased array with an additional reflector to improve the front-to-back ratio to >10 dB was designed. The reflector requires an additional substrate layer below the feed network, and hence the package suffers from increased thickness. Further, the antenna design requires special vias (viz. blind vias), which increases the design complexity. We note that PCB fabrication imposes stringent design rules, such as trace widths >5 mil, conductor spacing >5 mil, and copper-to-edge clearance >5 mil.

With this in mind, we introduce a simple mm-wave PCB-based patch antenna array with low loss and high efficiency to interconnect heterogeneously integrated stacked circuits in a 3D SiP, as depicted in Fig. 1. The array is designed to operate at 60 GHz using 4 patch elements. In [10], an initial design of the array without a ball grid array (BGA) was presented with simulations only. In this paper, we extend the work in [10] to a standard multilayer PCB array design that includes the BGAs and Si interposer; for measurement purposes alone, we fabricated an array prototype with a corporate feeding network for testing and characterization using a single end-launch connector. The paper is organized as follows. In Section II, we present the design and simulation results of the mm-wave antenna array stack-up. The fabricated prototype and measured results with a differential to single-ended corporate feed network are then presented in Section III.

II. DESIGN OF A 60 GHZ MULTILAYER ANTENNA ARRAY

A. GEOMETRY OF THE ARRAY STACK-UP

In this section, we present the design of a finite mm-wave patch antenna array operating at 60 GHz using a simple PCB process. Fig. 2 shows the antenna element of the infinite array for the 3D antenna-in-package (AiP), which consists of the antenna array, ball grid array (BGA), and Si interposer. In this paper, we only show the design and implementation of the antenna array. The choice of PCB implementation avoids the losses associated with the Si substrate. The overall dimensions of the stack-up are 9.6 mm x 2.8 mm x 0.568 mm (L x W x h). The multilayer phased antenna array comprises 4 patch antenna elements designed on Isola Tachyon substrate with dielectric constant er = 3.02, thickness 0.13 mm, and dielectric loss tangent tand = 0.0021.
The patch elements, with L_patch = 1.28 mm and W_patch = 2 mm, are differentially fed from the Si interposer using through-hole vias, a feed network, and BGAs. Further, no balun is required, since the antenna elements are excited in the differential mode from the Si interposer [9,10]. Fig. 3 shows the antenna element of the designed infinite antenna array. A 50 Ohm impedance-matching feed network is designed on a separate Isola Tachyon substrate with thickness 0.13 mm and serves as a transition between the Si interposer and the through-hole vias feeding the antenna aperture. A prepreg layer of thickness 0.038 mm, with the same dielectric constant as the core material, is used for bonding the two core substrates. The feeding network consists of a 50 Ohm transmission line and compact open stubs on both sides. The multi-layer design adds capacitance to the feeding network; therefore, an open stub that acts as an inductance is added on both sides of the transmission line. The optimized length and width of the stubs are L_stub = 0.42 mm and W_stub = 0.13 mm. We note that, in this analysis, BGAs of 0.1 mm diameter are used as the integrating component between the multilayer antenna array and the Si interposer.

B. SIMULATED RESULTS

An infinite-array simulation with the designed antenna elements was carried out in Ansys HFSS to optimize the array operation at 60 GHz. Fig. 4 illustrates that increasing the length of the open stubs increases the reactance of the antenna's input impedance. Fig. 5 depicts the simulated active S11 < -10 dB, showing good matching to 50 Ohm across 57.97-62.18 GHz. The bandwidth of the antenna can be further improved by increasing the thickness of the substrate and choosing a lower dielectric constant material. The realized gain of the designed antenna element is compared to a similar single-layer traditional patch antenna element without BGA and Si interposer, as illustrated in Fig. 6. The theoretical gain of the antenna element was estimated from the antenna's effective aperture via G = 4*pi*A_e/lambda^2 [17], where A_e is the effective aperture of the antenna and lambda is the operating wavelength. Notably, the addition of the BGA and Si interposer to the antenna element accounts for only a 0.23 dBi loss.

A 1x4 array with 4 patch antenna elements was designed as shown in Fig. 7. The spacing between the patches is maintained at 0.48*lambda. The dimensions of the open stubs are optimized for 50 Ohm impedance matching. The optimized 1x4 finite array was simulated; Fig. 8 shows that the array operates from 58.46 to 62.14 GHz with low mutual coupling of <-25 dB between adjacent elements. Notably, the designed differentially fed array provides good isolation by maintaining a center-to-center element spacing of 0.48*lambda, without using any additional isolation-improvement technique. Further, the scanning performance of the 1x4 finite antenna array was analyzed by including progressive phase shifts between the elements. The phase shifts were calculated using (2), delta_phi = k*d*sin(theta), where k = 2*pi/lambda, lambda is the operating wavelength, d is the element spacing, and theta is the scan angle. Fig. 9 shows that the antenna array can scan down to ±45° in the E plane (YZ plane) with a scan loss of about 2.29 dBi. As expected, the realized gain reduces by a cos(theta) factor. Notably, the designed array achieves a total gain of 10.51 dBi at boresight in both the E plane (YZ plane) and the H plane (XZ plane), as displayed in Fig. 10.
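As a numerical illustration of equation (2) and the aperture-gain estimate used above, the sketch below computes the progressive phase shift for a 45° scan, the ideal cos(theta) scan loss, and the theoretical gain obtained when the full stack-up footprint is taken as the aperture; using the footprint as the effective aperture is an assumption for illustration only.

import numpy as np

C = 3e8                        # speed of light, m/s
f = 60e9                       # operating frequency, Hz
lam = C / f                    # wavelength, ~5 mm
d = 0.48 * lam                 # element spacing used in the array

def progressive_phase(theta_deg):
    """Inter-element phase shift (deg) steering the beam to theta, eq. (2)."""
    k = 2 * np.pi / lam
    return np.degrees(k * d * np.sin(np.radians(theta_deg)))

def aperture_gain_dbi(A_e):
    """Theoretical gain from an effective aperture A_e: G = 4*pi*A_e/lam^2."""
    return 10 * np.log10(4 * np.pi * A_e / lam ** 2)

print(progressive_phase(45))                 # ~122 deg per element
print(aperture_gain_dbi(9.6e-3 * 2.8e-3))    # ~11.3 dBi for the footprint
print(10 * np.log10(np.cos(np.radians(45))))  # ideal scan loss, ~ -1.5 dB

With the 81% aperture efficiency reported in Section III, the ~11.3 dBi footprint bound reduces to roughly 10.4 dBi, consistent with the simulated boresight gain of 10.51 dBi.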
Since the antenna elements were designed using infinite-array boundary conditions, the design can be easily extended to a larger number of elements for higher gain. The designed array, with high gain and ±45° beam scanning, therefore has the capacity to compensate for path loss. Fig. 10 also demonstrates that the antenna array provides a sidelobe level of 14 dB. The designed via-fed array provides a front-to-back ratio of >17 dB at 60 GHz, whereas aperture-coupled antennas require an additional reflector to improve the front-to-back ratio [16]. Additionally, the simulated antenna array provides low cross-polarization levels of -40 dB in the E plane and -50 dB in the H plane, with >=90% radiation efficiency at the operating frequency, as shown in Fig. 11. Indeed, the differential feed from the Si interposer provides polarization purity and radiation symmetry.

III. DESIGN VALIDATION OF THE 1x4 ARRAY WITH CORPORATE FEED NETWORK

A 60 GHz multilayer 1x4 antenna array was fabricated and measured to validate the designed 3D AiP approach. For the measurement, we developed a differential to single-ended corporate feed network to excite all 4 patches at boresight, as shown in Fig. 12(a). The designed feed network provides 0° and 180° phase outputs to excite the differential patches, along with optimized open stubs (L_stub = 0.35 mm, W_stub = 0.24 mm) for 50 Ohm impedance matching (see Fig. 12(b)). Fig. 12(c) shows that the patch elements and feed network are designed on separate Isola Tachyon substrates with thickness 0.25 mm, dielectric constant er = 3.02, and dielectric loss tangent tand = 0.0021. A prepreg material with dielectric constant er = 3.02 and thickness 0.038 mm (1.5 mil) is used as the bonding material between these two substrates. A ground plane with 35 um of copper was created on the top layer for the ground connection of a 1.85 mm connector. The connector's ground is connected to the antenna's ground plane by through-hole vias with a diameter of 0.2 mm and a pitch of 1 mm on both sides. The fabricated multilayer antenna array was measured using a 1.85 mm end-launch connector. The top and bottom views of the fabricated prototype are shown in Fig. 13(a) and (b), respectively. The antenna array was connected to a 1 mm measuring cable using a 1.85 mm to 1 mm adapter.

Fig. 14 shows the simulated and measured S11 of the array with the corporate feed network. Notably, the simulation shows that the array with the differential feed network operates from 57.6 to 63 GHz, whereas the array without the feed network operates from 58.46 to 62.14 GHz (3.6 GHz impedance bandwidth). The measured S11 shows operation from 59.3 to 65 GHz. Discrepancies between the simulated and measured results are due to fabrication errors and the presence of the 1.85 mm connector and adapter used for the measurement. Nevertheless, the designed antenna array with the differential feed network provides a bandwidth of more than 5 GHz. Fig. 15 shows the pattern measurement setup using a mm-wave anechoic chamber. The normalized simulated and measured radiation patterns of the fabricated array in the E and H planes are shown in Fig. 16 and Fig. 17, respectively. Simulations show that the realized gain with the differential feed network is about 10.47 dBi in the E plane and 10.41 dBi in the H plane; the realized gain without the feed network is 10.51 dBi in both planes. The measured gain with the differential feed network in the E and H planes is 9.34 dBi and 10.02 dBi, respectively.
Even here, the ~1 dB discrepancy between simulation and measurement is due to the use of the 1.85 mm connector and adapter. The aperture efficiency of the antenna array can be estimated from the gain via the effective area A_e = G*lambda^2/(4*pi), where G is the gain of the antenna array and A_e is its effective area [22]; the aperture efficiency is the ratio of A_e to the physical aperture area. The aperture efficiency of the antenna array without the feed network in Fig. 7 is 81% at 60 GHz. Fig. 18 shows that the designed 1x4 antenna array operates with >=90% radiation efficiency at the operating frequency. The implemented array provides low loss and high efficiency, which makes it suitable for mm-wave 3D AiP applications. Table I compares different 60 GHz millimeter-wave in-package antennas and their fabrication processes. It clearly indicates that the designed antenna array provides high efficiency, low mutual coupling, good gain, a low profile, and a simple structure. Further, it does not require any special vias, such as blind or buried vias, thus reducing the design complexity.

IV. CONCLUSION

This paper presented a 60 GHz mm-wave antenna array for 3D AiP applications. The designed antenna array consists of differentially fed patch antenna elements with a via-to-open-stub matching feed network. The designed antenna array with BGA and Si interposer provides a gain of 10.51 dBi with >=90% radiation efficiency. The fabricated array with a differential to single-ended corporate feed network operates from 59.3 to 65 GHz with a measured gain of 10.02 dBi. Notably, the antenna array can be fabricated independently and combined using traditional BGA packaging technology, making it an ideal candidate for antenna-in-package integration.
On polynomial congruences

We deal with functions which fulfil the condition Δ_h^{n+1} ϕ(x) ∈ Z for all x, h taken from some linear space V. We derive necessary and sufficient conditions for such a function to be decent in the following sense: there exist functions f : V → R, g : V → Z such that ϕ = f + g and Δ_h^{n+1} f(x) = 0 for all x, h ∈ V.

Introduction

Let V be a linear space over Q, R or C and n ∈ N (we assume that 0 ∈ N). The symbol ≡ stands for congruence modulo Z (so a ≡ b ⟺ a − b ∈ Z for a, b ∈ R), the symbol [x] denotes the integer part of a real number x, and {x} denotes the fractional part of x (so x = [x] + {x}, {x} ∈ [0, 1)). Following e.g. [10], we define the difference operator and polynomial functions:

Definition 1.1. A function f : V → R is called a polynomial function of degree n iff

Δ_h^{n+1} f(x) = 0 for all x, h ∈ V. (1.1)

The aim of this paper is to examine functions ϕ : V → R fulfilling a less restrictive condition than (1.1), namely

Δ_h^{n+1} ϕ(x) ∈ Z for all x, h ∈ V. (1.2)

We call condition (1.2) the polynomial congruence of degree n. This study is inspired by several works (e.g. [1-4]) in which the so-called Cauchy congruence (or Cauchy equation modulo Z), i.e.

ϕ(x + y) − ϕ(x) − ϕ(y) ∈ Z, (1.3)

is considered. These works discuss the problem of decency, in the sense of Baker, of the solutions of (1.3) (see e.g. [1]; a solution ϕ of (1.3) is called decent iff there exist an additive function a : V → R and a function g : V → Z such that ϕ = a + g). In many cases the Cauchy congruence can easily be transformed to the congruence (1.2) with n = 1. To be more precise, if ϕ fulfills (1.3), then Δ_h^2 ϕ(x) ∈ Z for all x, h ∈ V. Almost conversely, if Δ_h^2 ϕ(x) ∈ Z for x, h ∈ V, then the function φ̃ = ϕ − ϕ(0) fulfills φ̃(x + y) − φ̃(x) − φ̃(y) ∈ Z. Indeed, observe first that φ̃(0) = 0; the claim then follows by a direct computation.

Obviously, if ϕ = f + g, where f : V → R is a polynomial function of degree n and g : V → Z, then ϕ solves the congruence (1.2). In analogy to Baker [1], we call such functions ϕ decent solutions of (1.2).
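As a numerical illustration of Definition 1.1 and congruence (1.2), the following sketch evaluates the iterated difference operator with exact rational arithmetic; the particular polynomial p and integer-valued function g are arbitrary choices made for the example.

from math import comb
from fractions import Fraction

def delta(f, h, order):
    """Iterated difference operator:
    (Delta_h^n f)(x) = sum_{k=0}^{n} (-1)^(n-k) * C(n,k) * f(x + k*h)."""
    return lambda x: sum((-1) ** (order - k) * comb(order, k) * f(x + k * h)
                         for k in range(order + 1))

# A polynomial function of degree n is annihilated by Delta_h^{n+1}:
p = lambda x: 3 * x ** 2 - x + Fraction(5, 7)           # degree 2
print(delta(p, Fraction(1, 3), 3)(Fraction(2, 5)))      # 0, as in (1.1)

# For a decent solution phi = p + g of (1.2), Delta_h^{n+1} phi(x) lands
# in Z: the polynomial part vanishes and g takes only integer values.
g = lambda x: (x.numerator // x.denominator) ** 2       # integer-valued on Q
phi = lambda x: p(x) + g(x)
print(delta(phi, Fraction(1, 3), 3)(Fraction(2, 5)))    # an integer (-2)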
When dealing with polynomial functions an inductive approach always comes to mind. In our situation one could expect that a solution of the congruence Δ^{n+1}_h ϕ(x) ∈ Z is decent iff for every h ∈ V the function Δ_h ϕ is a decent solution of the polynomial congruence of degree n − 1. However, this is not the case, as the following remark shows: Remark 1.3. There exists a function ϕ such that Δ³_h ϕ(x) ∈ Z for all x, h ∈ R, Δ_h ϕ is a decent solution of the polynomial congruence of degree 1 for every h ∈ V, but ϕ is not a decent solution of the polynomial congruence of degree 2. Proof. Let α : R → R be a function fulfilling α(x + y) − α(x) − α(y) = m(x, y) ∈ Z for all x, y ∈ R, which cannot be expressed as a sum of an additive function and an integer-valued function (the existence of such a function is proved in [8], [13]). Then α fulfills the congruence Δ²_h α(x) ∈ Z, x, h ∈ R (as shown above). Define a suitable function ϕ in terms of α; then Δ_h ϕ is a decent solution of the polynomial congruence of degree 1 (for every fixed h ∈ V). Suppose that the function ϕ is a decent solution of the polynomial congruence of degree 2. Then from Theorem 2.2, which is proved in the second part of this paper, it follows that for every v ∈ R there exist constants, the coefficients of a polynomial p_v of degree at most 2, such that ϕ(ξv) ≡ p_v(ξ) for ξ ∈ Q. Then α(ξv) ≡ b_v ξ for ξ ∈ Q and Theorem 1.2 implies that α is a decent solution of Cauchy's congruence, which contradicts our choice of the function α. We make use of the following, easy to check, properties of (decent) solutions of the congruence (1.2): Remark 1.4. Let ϕ : V → R be a solution of (1.2). Then: (i) for every c ∈ V the function x ↦ ϕ(c + x) solves (1.2), and it is decent iff ϕ is decent; (ii) for every m ∈ R the function ϕ + m solves (1.2), and it is decent iff ϕ is decent. Proof. Ad (i) The first part is a consequence of the equality Δ^{n+1}_h ϕ(c + x) = (Δ^{n+1}_h ϕ)(c + x). Observe that if ϕ is of the form ϕ = f + g, with f : V → R being a polynomial function of degree n and g : V → Z, then ϕ(c + ·) = f(c + ·) + g(c + ·), and f(c + ·) is again a polynomial function of degree n. Ad (ii) The first part follows from the identity Δ^{n+1}_h (ϕ + m)(x) = Δ^{n+1}_h ϕ(x), which proves the first part. We have ϕ + m = (f + m) + g, which means that ϕ + m can also be split into a polynomial and an integer-valued part. We can also notice that ϕ fulfills the congruence (1.2) if and only if the induced function ϕ̄ : V → R/Z satisfies Δ^{n+1}_h ϕ̄(x) = 0̄ for all x, h ∈ V, where 0̄ means the neutral element of the quotient group (R/Z, +). We recall a well-known result (see e.g. [14], Theorem 9.1, p. 70) describing solutions of the Fréchet equation in a wide class of spaces. It will be useful for us in our further considerations (Theorem 2.2) and, moreover, it will clarify why we cannot use it for the group R/Z (the group R/Z is not divisible by n! for n > 1). For the simplicity of the statement we assume that a 0-additive function is an arbitrary function whose domain is the linear space {0} (see e.g. [7]). Theorem 1.5 ([14]). Let G be an abelian group divisible by n!. A function f : V → G satisfies the Fréchet equation Δ^{n+1}_h f(x) = 0 for all x, h ∈ V if and only if f(x) = Σ_{i=0}^{n} F_i(x, . . . , x), where each F_i is the diagonalization of an i-additive, symmetric function. Main result We start with the result which corresponds to Theorem 2.1 from [2]. In the proof we make use of Theorem 1.5 and the following, very obvious remark: Remark 2.1. If p is a polynomial with real coefficients which takes only integer values for rational arguments, then p is constantly equal to p(0). Our first theorem reads as follows: Theorem 2.2. A solution ϕ : V → R of the polynomial congruence (1.2) is decent if and only if for every v ∈ V there exists a polynomial p_v of degree smaller than n + 1 with real coefficients so that ϕ(ξv) ≡ p_v(ξ) for all ξ ∈ Q. Proof. Firstly, assume that ϕ is a decent solution of the polynomial congruence of degree n. Then there exist functions f : V → R and g : V → Z such that ϕ = f + g and Δ^{n+1}_h f(x) = 0 for all x, h ∈ V. By Theorem 1.5, f(x) = Σ_{i=0}^{n} F_i(x, . . . , x) for an i-additive and symmetric F_i, i = 0, . . . , n, whence f(ξv) = Σ_{i=0}^{n} F_i(v, . . . , v) ξ^i for ξ ∈ Q, so that p_v(ξ) = Σ_{i=0}^{n} F_i(v, . . . , v) ξ^i is the required polynomial, since ϕ(ξv) ≡ f(ξv) = p_v(ξ). Conversely, assume that for every v ∈ V such a polynomial p_v exists. At the beginning, let us consider the case ϕ(0v) = 0 for v ∈ V. Then we can choose the polynomials p_v ∈ R_n[X] in such a way that p_v(0) = 0. From the above congruence it follows that ϕ is decent. For an arbitrary function ϕ, consider φ̃ = ϕ − ϕ(0). Applying the already proved part of the theorem to the function φ̃, we obtain that φ̃ is decent. From Remark 1.4 (ii) it follows that this is equivalent to the decency of the function ϕ. Considering (i) of Remark 1.4 we can rewrite Theorem 2.2 in the following manner: Theorem 2.3. Let ϕ : V → R be a solution of (1.2). Then ϕ is a decent solution of the polynomial congruence of degree n if and only if for any vectors v, w ∈ V there exists a polynomial p_{v,w} of degree smaller than n + 1 with real coefficients so that ϕ(v + ξw) ≡ p_{v,w}(ξ) for all ξ ∈ Q. In our main theorem we apply the following result: Theorem 2.4 (Ger [6]). Let X and Y be two Q-linear spaces and let D be a nonempty Q-convex subset of X. If algint_Q D ≠ ∅, then for every function … Now we present our main result, which provides necessary and sufficient conditions for a function ϕ fulfilling Δ^{n+1}_h ϕ(x) ∈ Z for all x, h ∈ V to be a decent solution of this congruence.
Let ϕ : V → R be a solution of the polynomial congruence of degree n. Then the following conditions are equivalent: (i) ϕ is a decent solution of the polynomial congruence of degree n; (ii) for every vector v ∈ V there exists a polynomial p_v of degree smaller than n + 1 with real coefficients so that ϕ(ξv) ≡ p_v(ξ) for all ξ ∈ Q; (iii) for every vector v ∈ V there exist ε > 0 and a polynomial p_v of degree smaller than n + 1 with real coefficients so that ϕ(ξv) ≡ p_v(ξ) for all ξ ∈ Q ∩ (0, ε); (iv) for every vector v ∈ V there exist ε > 0 and a polynomial p_v of degree smaller than n + 1 with real coefficients so that φ̃(ξv) … Proof. The equivalence (i) ⟺ (ii) has already been proved. The implication (ii) ⟹ (iii) is obvious. Now we show that (iii) ⟹ (ii). To this aim, denote Ω = {ξ ∈ Q : ϕ(ξv) ≡ p_v(ξ)}. From our assumption it follows that Q ∩ (0, ε) ⊆ Ω. … Here E_{n+1} denotes the set of all natural even numbers smaller than or equal to n + 1 and O_{n+1} denotes the set of all natural odd numbers smaller than or equal to n + 1. From our assumptions it follows that there exist functions m : E → Z and q : E → (−α, α) such that ϕ|_E = m + q. Since X is a locally convex linear topological space and int H(E) ≠ ∅, there exists an open and convex set U such that ∅ ≠ U ⊆ H(E). Fix x ∈ U and choose h ∈ X such that x + kh, x − kh ∈ E for k = 1, 2, . . . , n + 1. Then ϕ(x) differs from an integer by less than 1/2^{n+1} for x ∈ U. Thus there exist functions m̃ : U → Z and q̃ : U → (−1/2^{n+1}, 1/2^{n+1}) such that ϕ|_U = m̃ + q̃. Now we fix x ∈ U and choose h ∈ X such that x + h, . . . , x + (n + 1)h ∈ U. Then we have Δ^{n+1}_h ϕ(x) = Δ^{n+1}_h m̃(x) + Δ^{n+1}_h q̃(x) and Δ^{n+1}_h q̃(x) = 0 for x ∈ U and h ∈ X such that x + h, . . . , x + (n + 1)h ∈ U. Theorem 2.4 applied to the function q̃, the space X and the set U implies that there exists a polynomial function F : X → R of degree n such that F|_U = q̃. Therefore F is bounded from both sides on U, so it is continuous (Theorem 3.7). Obviously, G is a continuous polynomial function of degree n. Denote Ω = {x ∈ X : ψ(x) ≡ G(x)}. We know that U − c ⊆ Ω and U − c is a convex neighbourhood of 0. We show that if W is a convex neighbourhood of 0, then W ⊆ Ω implies that (1 + 1/n)W ⊆ Ω. Choose an arbitrary x ∈ W. From the convexity of W and 0 ∈ W it follows that (1/n)x, . . . , ((n − 1)/n)x ∈ W. Thus … Theorem 3.9. Let X be a linear space and let ϕ : X → R be a solution of the polynomial congruence of degree n. Assume that one of the following two hypotheses is valid: 1. X = R^m, for some positive integer m, and ϕ is Lebesgue measurable; 2. X is a real Fréchet space and ϕ is a Baire measurable function. Then ϕ is a decent solution of the polynomial congruence of degree n. Moreover, ϕ = f + g with f being a continuous polynomial function of degree n and g being an integer-valued and Lebesgue (resp. Baire) measurable function. If k₀ = 0, then the previous theorem together with Remark 3.5 in case (1) and Remark 3.6 in case (2) implies the decency of ϕ and the continuity of its polynomial part in a decomposition of ϕ into a polynomial function and an integer-valued function. If k₀ ∈ {1, . . . , 2^{n+2}(2^{n+1} − 1) − 2}, then consider the function φ̃ … Of course, the function φ̃ is a solution of the polynomial congruence of degree n, and … Therefore, from Remark 3.5 in case (1) and Remark 3.6 in case (2) and the previous theorem, it follows that φ̃ is a decent solution of the polynomial congruence and the polynomial part of its decomposition is continuous; but then ϕ is also a decent solution of the polynomial congruence of degree n with continuous polynomial part in its decomposition. We proved that ϕ = f + g, where f is a continuous polynomial function and g is an integer-valued function. Since f is continuous, it is Lebesgue measurable in case (1) and Baire measurable in case (2). Therefore, g = ϕ − f is Lebesgue measurable in case (1) and Baire measurable in case (2), too.
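The boundedness step used in the proof above can be spelled out as a routine estimate; the display below is a reconstruction from the surrounding text, so the original typesetting may have differed. Since Δ^{n+1}_h q̃(x) is both an integer and small in absolute value, it must vanish:

```latex
\Delta^{n+1}_{h}\tilde q(x)
   = \Delta^{n+1}_{h}\varphi(x) - \Delta^{n+1}_{h}\tilde m(x) \in \mathbb{Z},
\qquad
\bigl|\Delta^{n+1}_{h}\tilde q(x)\bigr|
   \le \sum_{i=0}^{n+1}\binom{n+1}{i}\,\sup_{U}|\tilde q|
   < 2^{\,n+1}\cdot\frac{1}{2^{\,n+1}} = 1,
```

hence Δ^{n+1}_h q̃(x) = 0 whenever x, x + h, . . . , x + (n + 1)h ∈ U.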
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
3,138.8
2016-06-30T00:00:00.000
[ "Mathematics" ]
Constrained Connectivity in Bounded X-Width Multi-Interface Networks As technology advances and the spread of wireless devices grows, the establishment of interconnection networks is becoming crucial. Most user activities concern retrieving and sharing information from anywhere. In heterogeneous networks, devices can communicate by means of multiple interfaces. The choice of the most suitable interfaces to activate (switch on) at each device results in the establishment of different connections. A connection is established when the devices at its endpoints activate at least one common interface. Each interface is assumed to consume a specific percentage of energy for its activation; this is referred to as the cost of an interface. Due to energy consumption issues, and the fact that most devices are battery powered, special effort must be devoted to suitable solutions that prolong the network lifetime. In this paper, we consider the so-called p-Coverage problem, where each device can activate at most p of its available interfaces in order to establish all the desired connections of a given network of devices. As the problem has been shown to be NP-hard even for p = 2 and unitary interface costs, algorithm design has focused on particular topologies where the problem is optimally solvable. Following this trend, we first show that the problem is polynomially solvable for graphs (modeling the underlying network) of bounded treewidth by means of Courcelle's theorem. Then, we provide two optimal polynomial time algorithms to solve the problem in two subclasses of graphs with bounded treewidth, namely graphs of bounded pathwidth and graphs of bounded carvingwidth. The two solutions are obtained by means of dynamic programming techniques. Introduction In the last decade, multi-interface wireless networks have drawn the attention of practitioners and the research community. They are a widely used communication infrastructure which can support a plethora of different, important and popular applications such as file transfers. Multi-interface wireless networks are composed of heterogeneous devices that have different computational capabilities, are battery powered and can be equipped with different communication technologies such as Bluetooth, WiFi, 4G, 5G and GPRS. Mobile phones, laptops, smartwatches and tablets are but a few of the possible heterogeneous devices that can compose a multi-interface wireless network. They can activate one or more interfaces according to the required communication bandwidth, the cost (generally expressed as a percentage of the energy consumption) of maintaining an interface as active, and the neighborhood. Activating an interface incurs an energy cost, which must be carefully considered when establishing the desired connections, since devices are battery powered. Suitable solutions should be devised to prolong the network lifetime. In this paper, we aim at finding the most energy-efficient connections by activating a subset of node interfaces. A fixed positive integer p indicates the maximum number of interfaces a single device can activate. We describe a multi-interface wireless network by using a graph G = (V, E), where V is the device set and E is the set of connections. E is defined by considering parameters such as the distance between devices and the interfaces they share. Each v ∈ V is associated with an interface set W(v).
The set ∪_{v∈V} W(v) defines all the available network interfaces; k denotes its cardinality. The endpoints of an edge that share at least one active interface can establish a connection. A node u consumes energy c(α) to keep the interface α active, which provides a maximum communication bandwidth b(α). Figure 1 shows a multi-interface wireless network where various devices such as mobile phones, smartphones, smartwatches, tablets and laptops can perform point-to-point communication by using different interfaces and protocols, i.e., IrDA, Bluetooth, Wi-Fi, GSM, 4G. It is worth noticing that each connection can be established by at least one interface; some devices are not directly connected although they share some interfaces. This can be a consequence of different factors such as obstacles, distances and the protocols used for communication. A full instance specification could be provided (if necessary) by defining the cost of activating an interface on a specific device together with the interface bandwidth. The bandwidth can be assumed equal among all devices, while the energy spent to activate a specific interface may differ between devices. We simplify the model by having a cost that refers to the percentage of battery consumed by each device. Effectively, the cost of a specific interface can be considered the same for each device across the whole network. Nevertheless, different assumptions may lead to completely different problems that point out specific peculiarities of multi-interface networks. In this paper, we study a variant of the Coverage problem [1], which asks for the cheapest way to establish all connections defined by an input graph G. The interfaces used for establishing a connection and bandwidth requirements are not considered; the problem only aims at ensuring that for each edge of G there is a common active interface at its endpoints. The objective is the minimization of the overall network activation cost. We add to the Coverage problem a further constraint [2-5]: a device cannot activate more than p interfaces. This constraint is used to keep under control the energy spent by single devices. In other words, instead of finding the solution that minimizes the overall cost due to the activation of the interfaces along the whole network, or the one that minimizes the maximum cost spent by a single node, the aim becomes to minimize the overall cost subject to the constraint p. This problem has been proved to be NP-hard even for simple instances such as p = 2 and unitary interface costs. For this reason, algorithm design efforts have focused on particular topologies where the problem can be solved optimally. We follow this trend by solving the problem in polynomial time for graphs (modeling the underlying network) of bounded pathwidth and for graphs of bounded carvingwidth, which represent two main graph classes frequently studied for hard problems such as p-Coverage. Actually, we first show that the problem is polynomially solvable for graphs of bounded treewidth. Even though both graphs of bounded pathwidth and graphs of bounded carvingwidth represent two (incomparable) subclasses of graphs of bounded treewidth, the motivation for exploring such classes is that specific algorithms allow for better performance.
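As a concrete illustration of this model, the sketch below encodes a small instance with its interface sets W(v), per-interface costs and desired connections, and checks the feasibility and total cost of a candidate activation. Device names, interfaces and cost values are assumptions for illustration only.

```python
W = {                                   # available interfaces per device
    "phone":  {"bluetooth", "wifi", "4g"},
    "laptop": {"bluetooth", "wifi"},
    "watch":  {"bluetooth"},
}
cost = {"bluetooth": 0.2, "wifi": 0.5, "4g": 0.8}      # assumed activation costs
edges = [("phone", "laptop"), ("phone", "watch")]      # desired connections

def is_feasible(active: dict) -> bool:
    """Every node activates only interfaces it owns, and every desired
    connection has a common active interface at its endpoints."""
    return (all(active[v] <= W[v] for v in W) and
            all(active[u] & active[v] for u, v in edges))

def total_cost(active: dict) -> float:
    return sum(cost[i] for s in active.values() for i in s)

active = {v: {"bluetooth"} for v in W}  # one shared interface covers both edges
print(is_feasible(active), total_cost(active))   # True, cost 0.6
```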
Related Work In the last decade, multi-interface wireless networks have drawn the attention of the research community. The benefits of taking advantage of multiple interfaces at each device are usually the main focus. In this context, many standard network optimization problems have been reconsidered [6,7], with a focus on routing issues [8] and network connectivity [9]. Combinatorial problems on multi-interface wireless networks have been studied in [10] and in [1,11], which investigate the Coverage problem. The constrained Coverage problem, referred to as p-Coverage, has been introduced in [2] and further investigated in [3]. The authors of [12,13] study the problem of finding the cheapest way to ensure network connectivity. More precisely, they aim at activating for each node a subset of its available interfaces which ensures a path between every pair of nodes in G. These paths should minimize the overall cost of all interfaces that have been activated. This corresponds to a generalization of the Minimum Spanning Tree problem, as the set of connections established must form a spanning subgraph of G, not necessarily a tree. In fact, as costs are not on the edges but on the node interfaces, a node can use the same interface to establish several connections, thus saving energy. This property of multi-interface networks highlights the advantage as well as the higher complexity of the studied problems. In [11] the authors also study connectivity but with a different objective function, i.e., the minimization of the maximum cost paid at a single node. This is a widely studied objective function, as it is more suitable for distributed settings. The authors of [13] study the Cheapest Path problem. The goal is to activate available interfaces at some nodes to guarantee a path between two specified nodes, of minimum cost in terms of activated interfaces. This problem is the generalization of the Shortest Path problem between two nodes in standard networks. The cheapest path is one of the few problems reconsidered in the context of multi-interface networks that maintains its computational complexity: in fact, it can be optimally solved in polynomial time. The authors of [14] study the Maximum Matching problem. As with its classical version, the problem looks for a maximum subset of connections that can be established at the same time without sharing any common node. Each node must appear at most once in the solution, since the solution must be a set of disjoint edges. The Maximum Matching problem becomes difficult in the context of multi-interface networks: even for a problem instance given by an input graph G without the specification of costs and bandwidths, a solution is hard to find. In fact, two edges in the solution that are established by means of the same interface cannot be directly connected by another edge; such a connection would invalidate the solution, since both its endpoints share the same active interface. The authors of [15,16] address bandwidth constraints by investigating flow problems in multi-interface networks. Each interface is associated with one additional parameter that defines the bandwidth the interface can deal with. The Maximum Flow problem and the Minimum Cost Flow problem aim at ensuring a connection between two given nodes by considering bandwidth constraints on the provided interfaces. The Maximum Flow problem finds the maximum bandwidth between two selected nodes. Actually, the maximum achievable value can be obtained by standard techniques.
More specifically, by considering all the network interfaces as active, all the allowed connections are established and the problem coincides with that on standard networks, where bandwidth capacities are associated with edges and not with interfaces. However, when one wants to find a solution that guarantees the maximum flow but at minimum cost, then a suitable activation of the available interfaces must be found. Hence, this problem is a generalization of the Maximum Flow problem in standard networks. The Minimum Cost Flow problem aims at ensuring the following two goals: (i) a communication sub-network between two given nodes which has minimum energy consumption; (ii) a minimum amount B of communication bandwidth. In practice, the problem aims at finding the minimum cost set of interfaces to activate in the input network so that two specified nodes s and t are guaranteed to exchange data with at least bandwidth B. Clearly, the solution might result in a complex graph with source s and tail t composed of nodes with active interfaces. This problem is a generalization of the Minimum Cost Flow problem in standard networks. The theoretical results on multi-interface wireless networks can be applied in various settings, such as military applications and tactical networks provided with new software technologies [17,18] that employ resource-constrained mobile devices. The main resource that must be preserved is the battery power of each device; thus a suitable usage of the interfaces can be crucial for improving the network lifetime. Our Results In this paper, we are interested in the p-Coverage problem, in which each node of the network can activate at most p interfaces. In what follows, the p-Coverage problem will be denoted simply as CMI(p), in accordance with the name given to the original formalization of the Coverage problem on multi-interface networks, where no restriction on the number of interfaces each device may activate was introduced; see [1]. Actually, we consider what was referred to as the unbounded case, in which there is instead no bound on the number k of interfaces available over the whole network. In the new notation, that problem becomes CMI(∞), i.e., p = ∞. Moreover, following the trend of previous work [2,3] on the same subject (and similarly to the technique used in [19]), we focus on CMI(2). The contribution of this paper is summarized and compared with previous results in Table 1.
Table 1. Complexity of the CMI(2) problem. Parameters n, k, ∆ and h are the number of nodes, the number of interfaces, the maximum node degree, and the X-width of the input instance of CMI(2), respectively.
Graph Class | Costs | Complexity of CMI(2) | Reference
Graphs with ∆ ≥ 4 | unitary | NP-complete (feasibility) | [2]
… | … | … | …
In particular, we investigate CMI(2) on the class of graphs with bounded treewidth, showing that it is indeed polynomially solvable. Then, to obtain specific performance, we consider two well-known (but incomparable) subclasses of graphs with bounded treewidth, namely graphs admitting bounded pathwidth or bounded carvingwidth. While the formal definitions of treewidth, pathwidth and carvingwidth will be provided later, here we give just a first intuition of the kinds of graphs we are approaching. A tree decomposition of a graph G is a tree of subsets of vertices of G such that the endpoints of each edge of G appear together in at least one subset and all subsets containing a same vertex of G constitute a connected subtree.
The treewidth is one less than the size of the largest subset in a minimum-width tree decomposition. A path decomposition of a graph G is a sequence (a path) of subsets of vertices of G such that the endpoints of each edge of G appear in at least one subset and each vertex of G appears in a contiguous subsequence of the subsets. The pathwidth is one less than the size of the largest subset in a minimum-width path decomposition. A carving of a graph G is a tree T whose internal vertices all have degree 3 and whose leaves correspond to the vertices of G. The width of a carving T is the maximum size of an edge-cut in G that is induced by an edge of T. The carvingwidth of G is the minimum width of a carving of G. While graphs admitting bounded pathwidth or bounded carvingwidth also admit bounded treewidth, there is no relation between the two classes of graphs admitting bounded pathwidth and those of bounded carvingwidth. Hence, investigating resolution algorithms for CMI(2) restricted to one of the two subclasses is a challenging issue. In fact, since in general the problem has been shown to be NP-hard even for graphs of maximum degree ∆ ≥ 4 with interfaces of unitary cost, it is worth investigating when the problem becomes tractable. As shown in Table 1, many graph classes have already been investigated, including paths, trees, rings, complete graphs, complete bipartite graphs, and series-parallel graphs. Hence our work is a continuation of this investigation, involving graphs of bounded treewidth, pathwidth or carvingwidth. In particular, while for graphs of bounded treewidth (and hence also for graphs of bounded pathwidth or carvingwidth) we prove that CMI(2) is solvable in polynomial time, in the specific cases of graphs with bounded pathwidth and graphs with bounded carvingwidth we also provide specific polynomial time algorithms that solve CMI(2). Outline In the next section we provide all necessary definitions and notation to formalize the CMI(p) problem. In Section 3 we provide our first result, holding for graphs of bounded treewidth. In Section 4, we focus on CMI(2) in the case that the input graph admits a bounded pathwidth, and provide an optimal resolution algorithm for this specific case. In Section 5, we investigate the optimal resolution of CMI(2) when the input graph admits a bounded carvingwidth. The obtained solutions work in polynomial time. Finally, Section 6 provides some useful remarks and discussions of interesting directions for future work. Notation and Definitions Given a graph G = (V, E), let V be the set of nodes and E be the set of edges. Moreover, let n = |V| and m = |E|. Unless otherwise specified, G is assumed to be simple (without multiple edges or self-loops), undirected and connected. For each node v ∈ V, we denote by deg(v) the degree of v, and let ∆ = max_{v∈V} deg(v). A global assignment of the interfaces to the nodes in V is given in terms of an interface assignment function W : V → 2^{{1,...,k}}, which specifies the set of interfaces available at each node. For all nodes in V, the activation of a specific interface is assumed to cost the same; the rationale is that each device may be thought to spend the same percentage of its energy. Hence, the costs associated with the interfaces are defined by a function c : {1, . . . , k} → R+, and the cost of activating interface i is referred to as c_i. The considered CMI(p) optimization problem is formulated as follows. Input: a graph G = (V, E), an interface assignment function W, a cost function c, and an integer p ≥ 1. Solution: an activation function W_A : V → 2^{{1,...,k}} with W_A(v) ⊆ W(v) and |W_A(v)| ≤ p for every v ∈ V, such that W_A(u) ∩ W_A(v) ≠ ∅ for each edge {u, v} ∈ E, if one exists; otherwise, a negative answer. Goal: minimize the total cost of the interfaces that are activated, i.e., c(W_A) = Σ_{v∈V} Σ_{i∈W_A(v)} c_i.
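For very small instances the formulation just given can be solved by brute force. The sketch below (an illustration of the problem definition, exponential in |V| and in no way one of the algorithms proposed in this paper) enumerates every assignment of at most p interfaces per node and keeps the cheapest feasible one.

```python
from itertools import combinations, product

def solve_cmi_bruteforce(W, cost, edges, p):
    """Return (min_cost, activation) for CMI(p), or None if infeasible."""
    nodes = list(W)
    def choices(v):                      # all non-empty subsets of size <= p
        return [frozenset(c) for r in range(1, p + 1)
                for c in combinations(sorted(W[v]), r)]
    best = None
    for assignment in product(*(choices(v) for v in nodes)):
        act = dict(zip(nodes, assignment))
        if all(act[u] & act[v] for u, v in edges):       # edge coverage
            c = sum(cost[i] for s in assignment for i in s)
            if best is None or c < best[0]:
                best = (c, act)
    return best

W = {"a": {1, 2}, "b": {2, 3}, "c": {1, 3}}
cost = {1: 1.0, 2: 1.0, 3: 1.0}
print(solve_cmi_bruteforce(W, cost, [("a", "b"), ("b", "c"), ("a", "c")], p=2))
```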
In general, the cost function c spans over R+. When c(i) = 1 for every i = 1, . . . , k, the particular case of unitary costs is considered. Clearly, k ≥ 2 is always assumed, as the case k = 1 admits the obvious and unique solution where all the nodes activate the only available interface. Please note that CMI(p) generalizes the original CMI(∞) problem (see [1,7]) by additionally requiring that each node activate at most p interfaces. Surprisingly, the basic case p = 2 turns out to be more difficult, in general, than CMI(∞). In fact, in [2] the following theorem has been proved: Theorem 1 ([2]). Finding a feasible solution for CMI(2) is NP-complete for graphs with ∆ ≥ 4, even in the unitary cost case. Please note that the feasibility of CMI(∞) is easily settled by the definition of the problem: by activating all interfaces at each node, a feasible solution for CMI(∞) is obtained. However, there are special graph classes that turn out to be much more affordable for CMI(2). Concerning trees and complete graphs, for instance, the CMI(∞) problem has been investigated in [2]. For trees CMI(∞) turns out to be APX-hard, whereas for complete graphs it is not approximable within O(log k). In both topologies, instead, CMI(2) is optimally solvable in polynomial time. In the next section, we show that CMI(2) is polynomially solvable in graphs of bounded treewidth, whereas the subsequent sections explore in more detail CMI(2) in two different and incomparable graph classes, i.e., graphs with bounded pathwidth and graphs with bounded carvingwidth. Graphs with Bounded Treewidth In this section, we show that CMI(2) is polynomially solvable in graphs of bounded treewidth. We start by formally defining the treewidth of a graph. A tree decomposition of a graph G is a way of representing G as a tree-like structure. Definition 2 ([20]). A tree decomposition of a graph G = (V, E) is a pair ({X_i | i ∈ I}, T = (I, F)), with {X_i | i ∈ I} a collection of subsets of V, called bags, and T = (I, F) a tree, such that (i) ∪_{i∈I} X_i = V; (ii) for every edge {u, v} ∈ E there is an i ∈ I with u, v ∈ X_i; and (iii) for every v ∈ V, the set {i ∈ I : v ∈ X_i} induces a subtree of T. The width of a tree decomposition ({X_i | i ∈ I}, T = (I, F)) equals max_{i∈I} |X_i| − 1. The treewidth of a graph G is the minimum width of a tree decomposition of G. The treewidth is said to be bounded if it is limited by a constant h. In other words, for any fixed constant h, if G has treewidth at most h then G is said to be of bounded treewidth. Before providing the result on the solvability of CMI(2) in graphs of bounded treewidth, we need to recall the well-known Courcelle's theorem [21]: Theorem 2 (Courcelle [21]). Given a graph G and a property φ on G expressed in monadic second-order logic, checking whether G satisfies φ is Fixed Parameter Tractable (FPT) with respect to the treewidth of G. Courcelle's theorem basically provides a powerful means to understand whether the problem of verifying a property φ on a graph G is FPT with respect to the treewidth of G. This is actually what we obtain by applying Courcelle's theorem to CMI(2). Indeed, the next theorem shows that CMI(2) can be formulated in monadic second-order logic. Theorem 3. Let I be an instance of CMI(2) such that the input graph G admits a tree decomposition of width h. Then CMI(2) is FPT with respect to h.
Proof. The proof is based on Courcelle's theorem, which states that any graph property definable in monadic second-order logic can be tested in time f(h) · poly(|I|), where f is some computable function and |I| is the size of the input. Thus, to complete the proof, we need to express CMI(2) in monadic second-order logic. From Theorem 3, it follows that CMI(2) is solvable in polynomial time for any graph with bounded treewidth, since in such a case h is a constant and consequently f(h) is constant as well. However, the underlying resolution algorithm might be practically heavy, hence requiring further investigation of specific cases. In particular, in the next sections we consider two different and incomparable graph classes, namely graphs with bounded pathwidth and graphs with bounded carvingwidth. Please note that both classes are subclasses of graphs with bounded treewidth but, as we are going to see, they allow different computational times. Graphs with Bounded Pathwidth In this section, we first formally define the pathwidth of a graph. Subsequently, we present our new resolution algorithm that optimally solves CMI(2) in polynomial time on graphs with bounded pathwidth. Definition 3 ([20]). A path decomposition of a graph G = (V, E) is a sequence P = (X_1, . . . , X_r) of subsets of V (that is, X_i ⊆ V for each i ∈ {1, . . . , r}), called bags, such that (i) ∪_{i=1}^{r} X_i = V; (ii) for every edge {u, v} ∈ E there is an i with u, v ∈ X_i; and (iii) for every v ∈ V, the bags containing v form a contiguous subsequence of P. The width of a path decomposition P is the maximum number of vertices contained in any bag of P minus one, i.e., max_{i∈{1,...,r}} |X_i| − 1. The pathwidth of a graph G is the minimum width over all possible path decompositions of G. For any fixed constant h, if G has pathwidth at most h then G is said to be of bounded pathwidth. In the following, we will call nodes the elements of P and vertices the elements of V, to avoid confusion. An important property of path decompositions [20], which we exploit in our dynamic programming algorithm, is the pathwidth separator property. It states that for every three nodes X_i, X_j and X_k such that X_j is between X_i and X_k, each path in G that connects a vertex in X_i \ X_j with a vertex in X_k \ X_j contains a vertex in X_j. This means that node X_j separates the vertices in X_i \ X_j from the ones in X_k \ X_j. Our algorithm works on a particular type of path decomposition called nice, in which consecutive bags differ by the insertion (an introduce node) or the removal (a forget node) of exactly one vertex. A nice path decomposition can always be obtained from a path decomposition in linear time, maintaining the same width. It is easy to check that the number of nodes in a nice path decomposition is at most twice the number of vertices in V, because property (iii) in Definition 3 says that every vertex v ∈ V belongs to a consecutive set of bags. A path decomposition of the graph in Figure 2 is depicted in Figure 3, while a nice path decomposition of the same graph can be found in Figure 4. Both decompositions have width equal to 2. Solving CMI(2) on Graphs with Bounded Pathwidth This section describes a polynomial time optimal algorithm for CMI(2) on graphs with bounded pathwidth, which uses the dynamic programming technique. The algorithm exploits the nice path decomposition introduced above, and in particular the pathwidth separator property. Given an instance of CMI(2), it is possible in linear time [22,23] to find a path decomposition for G = (V, E) of width h. Then, the algorithm computes a nice path decomposition P = (X_1, . . . , X_r) with the same width h, which can be done again in linear time [20].
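The conversion from an ordinary path decomposition to a nice one only interleaves forget and introduce steps between consecutive bags. A minimal sketch, assuming the input bags already form a valid path decomposition, is shown below; the resulting event sequence is the form consumed by the dynamic program sketched in the next subsection.

```python
def nice_events(bags):
    """Turn a path decomposition (list of bags) into the introduce/forget
    event sequence of an equivalent nice path decomposition."""
    events, current = [], set()
    for bag in list(bags) + [set()]:        # trailing empty bag forgets everyone
        for v in sorted(current - set(bag)):
            events.append(("forget", v))
            current.remove(v)
        for v in sorted(set(bag) - current):
            events.append(("intro", v))
            current.add(v)
    return events

# Width-2 example with bags {a,b,c} and {b,c,d}:
print(nice_events([{"a", "b", "c"}, {"b", "c", "d"}]))
```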
Denote by G(X_i) the subgraph induced by the vertices in ∪_{j=1}^{i} X_j. Let f(X_i, A) be the minimum value of CMI(2) on G(X_i), where A is a collection of |X_i| subsets A(u) of the available interfaces W(u), with u ∈ X_i, which satisfies the following constraint. • The interfaces active at the vertex u are those in A(u), with |A(u)| ≤ 2, for every u ∈ X_i. By exploiting the pathwidth separator property, the core of the algorithm computes at each node X_i of P the values f(X_i, A), for every possible collection A. The dynamic programming algorithm starts at X_1 and ends at X_r. Clearly, the constrained version of CMI(2) is not always solvable; in this case, we set f(X_i, A) = +∞. Since in X_1 there is only one vertex u, in G(X_1) there are no edges and A = {A(u)}, the following condition holds: f(X_1, A) = Σ_{i∈A(u)} c_i, for every A(u) ⊆ W(u) with |A(u)| ≤ 2. Substantially, for X_1 we activate all the possible subsets of at most two interfaces available for u. Actually, since in G(X_1) there is only one vertex, these partial solutions are needed only to build an optimal solution for G, if G contains more than one vertex. In any introduce node X_{i+1} = X_i ∪ {v}, the value f(X_{i+1}, A), for a specific collection A of active interface sets, is computed by solving the following constrained minimization problem, which uses the values f(X_i, B) already computed in the previous node X_i: f(X_{i+1}, A) = min_B { f(X_i, B) + Σ_{i∈A(v)} c_i : A(u) = B(u) for every u ∈ X_i, and A(v) ∩ A(u) ≠ ∅ for every u ∈ N(v) ∩ X_{i+1} }, (1) with N(v) being the set of vertices neighboring v. The second constraint assures that the vertex v can communicate with all its adjacent vertices u in G(X_{i+1}) by sharing at least one common active interface. The first constraint simply states that the new solution A equals B for all the vertices except the new one, v. The objective function sums the already computed optimum value f(X_i, B) with the cost of the interfaces A(v) activated at the new vertex v. In any forget node X_{i+1} = X_i \ {v}, the value f(X_{i+1}, A), for a specific collection of interface subsets A(u), with u ∈ X_{i+1} and |A(u)| ≤ 2, is computed by solving the following constrained minimization problem: f(X_{i+1}, A) = min_{B(v)} { f(X_i, A ∪ {B(v)}) : A(u) ∩ B(v) ≠ ∅ for every u ∈ X_i adjacent to v }. (2) In fact, the value f(X_{i+1}, A) is essentially the minimum value of f(X_i, A ∪ B(v)) over every possible subset of active interfaces B(v) compatible with every A(u), which means A(u) ∩ B(v) ≠ ∅ for every u adjacent to v. At the end of the algorithm, the optimum of CMI(2) is the minimum value f(X_r, A), over every possible collection of interface subsets A(u) ⊆ W(u), with |A(u)| ≤ 2, for the unique vertex u ∈ X_r. We conclude this section by computing the complexity of the algorithm. At each introduce node, we solve at most (k + (k choose 2))^{h+1} problems (1), one for every possible collection A. In fact, every possible subset of active interfaces A(u) is such that |A(u)| ≤ 2, and every |X_{i+1}| ≤ h + 1. Moreover, for every possible collection A, since in each subproblem there are at most h + 1 constraints, its resolution requires time proportional to h + 1. Analogously, at any forget node, we solve at most (k + (k choose 2))^{h+1} problems (2), given by all the subsets of active interfaces A(u), with u ∈ X_{i+1}, and all the possible subsets of active interfaces B(v) for the forgotten vertex v. In conclusion, since there are at most n introduce nodes and n forget nodes, the time complexity of the dynamic programming algorithm is O(n(k + (k choose 2))^{h+1}). We can then state the following: Theorem 4. Given an instance of CMI(2) with a graph G and a path decomposition P of width h for G, CMI(2) is solvable in O(n(k + (k choose 2))^{h+1}) time.
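The recursion above can be condensed into a few lines. The following sketch is an illustrative rendering of recurrences (1) and (2), assuming a valid nice path decomposition given as the event sequence from the previous sketch (so that every edge is checked when its later endpoint is introduced); it is not tuned for performance but traverses the same state space.

```python
from itertools import combinations

def candidate_sets(avail):
    """All non-empty interface subsets of size at most two (the p = 2 case)."""
    return ([frozenset([i]) for i in avail] +
            [frozenset(c) for c in combinations(sorted(avail), 2)])

def cmi2_path_dp(adj, W, cost, events):
    """adj: vertex -> neighbours; W: vertex -> available interfaces;
    cost: interface -> activation cost; events: nice-decomposition events.
    A DP state fixes the active interface set of every vertex in the bag."""
    table = {frozenset(): 0.0}                       # empty bag, zero cost
    for kind, v in events:
        new = {}
        if kind == "intro":                          # recurrence (1)
            for state, val in table.items():
                bag = dict(state)
                for S in candidate_sets(W[v]):
                    # v must share an interface with each neighbour in the bag
                    if all(S & bag[u] for u in adj[v] if u in bag):
                        key = state | {(v, S)}
                        c = val + sum(cost[i] for i in S)
                        if c < new.get(key, float("inf")):
                            new[key] = c
        else:                                        # "forget": recurrence (2)
            for state, val in table.items():
                key = frozenset((u, S) for u, S in state if u != v)
                if val < new.get(key, float("inf")):
                    new[key] = val
        table = new
    return min(table.values()) if table else None    # None means infeasible
```

On the small triangle instance used in the brute-force sketch, running this with events = nice_events([{"a", "b", "c"}]) returns the same optimum as the exhaustive search.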
Graphs with Bounded Carvingwidth We start by formally defining the carvingwidth [24] of a graph. Subsequently, we present our new resolution algorithm that optimally solves CMI(2) on graphs with bounded carvingwidth in polynomial time. Definition of Carvingwidth We briefly recall some basic definitions before describing the algorithm. Given a graph G = (V, E), let {V_1, V_2} be a partition of V, and (V_1, V_2) ⊆ E be the subset of edges with one endpoint in V_1 and the other in V_2. Clearly, (V_1, V_2) is an edge-cut of G. Let T be a sub-cubic tree where each leaf corresponds to one vertex in V and all the internal nodes have degree three (two children). For this particular tree, it is possible to define a specific edge weight in the following way. For every edge e ∈ E(T), let T_1 and T_2 be the two subtrees obtained by removing e from T. Then, let V_1 and V_2 be the sets of vertices in V corresponding to the leaves in T_1 and in T_2. We set the weight w(e) of e to |(V_1, V_2)|, i.e., the number of edges between V_1 and V_2. The tree T is called a carving of G, and (T, w) is called a carving decomposition of G. The width of (T, w) is the maximum weight w(e) over all e ∈ E(T). The carvingwidth of G, denoted by cw(G), is the minimum width over all carving decompositions of G. For any fixed constant h, if G has carvingwidth at most h then G is said to be of bounded carvingwidth. We define cw(G) = 0 if |V| = 1. An example of a carving decomposition for the graph in Figure 5 is shown in Figure 6, where the red (additional) edges among the leaves correspond to the edges of the decomposed graph. Each (black) edge of the tree is associated with an integer that is the weight given by the decomposition. The width of this particular carving decomposition is 5, which is the maximum of the weights, and corresponds to the cut ({1, 2, 5, 6, 7}, {3, 4}). Solving CMI(2) on Graphs with Bounded Carvingwidth This section describes a polynomial time optimal algorithm for CMI(2) on graphs with bounded carvingwidth, which uses the dynamic programming technique. Given an instance of CMI(2), it is possible in linear time [23,25] to find a carving decomposition (T, w) for G = (V, E) of width h. Denote by root the root of T. We now describe a dynamic programming algorithm to find an optimal solution of CMI(2) for G, which exploits the structure and the properties of the carving decomposition (T, w). As already done for the pathwidth, we will call nodes the vertices of T, to avoid ambiguities between the vertices of the graph and those of T. Denote by T(i) the subtree induced by the node i of T and all its descendants. Denote also by V_i the leaves of T(i), by V_i^− the leaves of T in V \ V_i, and by G(i) the subgraph of G induced by the vertices in V_i. In the core of the dynamic programming algorithm, we compute the optimal value of a constrained version of CMI(2) restricted to the subgraph G(i). The constraint is given by the interfaces active on each node in V_i that is connected with some other node in V_i^−. In particular, we compute f(i, A), which is the optimum value of CMI(2) on G(i), where A is a collection of interface subsets defined as A = { A(u) ⊆ W(u) : |A(u)| ≤ 2, u ∈ V_i and (u, v) ∈ (V_i, V_i^−) for some v ∈ V_i^− }, (3) with the following additional constraint. • The interfaces active on each vertex u such that (u, v) ∈ (V_i, V_i^−) are the ones in A(u) ∈ A. Please note that the carvingwidth of G is h, so the number of subsets in A is at most h.
If i is a leaf corresponding to a vertex u ∈ V, then the following condition holds: f(i, A) = Σ_{j∈A(u)} c_j. In fact, f(i, A) is essentially the cost of the active interfaces on the unique vertex u belonging to the subgraph G(i). If i is an internal node of T with two children j and l, then we compute f(i, A) by solving the following minimization problem, where the number of subsets in A is at most |(V_i, V_i^−)|, as defined in Equation (3). Moreover, there can be edges between G(j) and G(l); if this is the case, the interfaces active on the endpoints of these edges must be compatible. The following constrained minimization problem solves f(i, A) for a particular collection A by using the values f(j, B) and f(l, C) already computed, with B and C being the collections defined according to Equation (3): f(i, A) = min_{B,C} { f(j, B) + f(l, C) : A(u) = B(u) for every u ∈ V_j with u an endpoint of an edge in (V_i, V_i^−), A(u) = C(u) for every u ∈ V_l with u an endpoint of an edge in (V_i, V_i^−), and B(u) ∩ C(v) ≠ ∅ for every edge (u, v) with u ∈ V_j and v ∈ V_l }. (4) In fact, by definition of carving decomposition, we have that V_i = V_j ∪ V_l, V_j ∩ V_l = ∅, and that the edges in (V_i, V_i^−) are partitioned between those with an endpoint in V_j and those with an endpoint in V_l. This means that the collection A is bipartitioned into two sub-collections, one belonging to B and the other belonging to C. The former refers to the vertices in V_j that are endpoints of some edges in (V_i, V_i^−); the latter, analogously, refers to the vertices in V_l that are endpoints of some edges in (V_i, V_i^−). This bipartition leads to the first two constraints in Equation (4), while the third constraint guarantees the compatibility between the pair of subsets assigned to the endpoints u and v of every edge (u, v) between G(j) and G(l). Notice that A = ∅ for f(root, A), because V_root^− is empty, so there are no edges in (V_root, V_root^−). This means that the last problem of the algorithm is the following: f(root, ∅) = min_{B,C} { f(j, B) + f(l, C) : B(u) ∩ C(v) ≠ ∅ for every edge (u, v) with u ∈ V_j and v ∈ V_l }, where only the constraint that guarantees the compatibility of the two solutions B and C is needed. Clearly, the optimum of CMI(2) in G is the value of f(root, A), which is unique, because A is empty. The computational time needed to compute f(i, A) for a leaf of T is k + (k choose 2), i.e., O(k²), because |A| = 1 and we essentially try every subset of at most two interfaces for the unique vertex in V_i. When i is an internal node of T with two children j and l, we can just combine every collection B with every collection C in order to build A, and compute f(i, A). Given two collections B and C, the check of feasibility according to Equation (4) costs O(1), since there are at most O(1) edges between G(j) and G(l) that require activation (h is a constant). Moreover, as the cardinalities of B and of C are at most h, and every set B(u) ∈ B and every set C(u) ∈ C contains at most two interfaces, the time complexity to compute f(i, A) for an internal node is O(k^{4h}). Notice that each internal node has exactly two children, and in a perfect binary tree there are 2n − 1 nodes, so T has at most 2n − 1 nodes. Concluding, the dynamic programming algorithm to solve CMI(2) for a graph with carvingwidth h needs O((2n − 1)k^{4h}) time. We can then state the following: Theorem 5. Given an instance of CMI(2) with a graph G and a carving decomposition (T, w) of width h for G, CMI(2) is solvable in O(nk^{4h}) time. Conclusions In the context of multi-interface networks, we have investigated a constrained variant of the Coverage problem. Given a graph G = (V, E), the aim is to find the cheapest way to establish all the connections defined by E by activating at each node v ∈ V a suitable subset of the interfaces available at v. A further constraint, provided as a positive integer p, has been considered with respect to the original model. Parameter p specifies how many interfaces a single node can activate at most.
The aim of the modification has been to balance the energy consumption among all the nodes of the network, hence prolonging the lifespan of the single nodes. As CMI(p) has been proven to be much more difficult in general than the basic case CMI(∞), we keep investigating whether the problem can be efficiently solved when restricted to specific graph classes. In particular, we have considered graphs with bounded treewidth in general, and graphs with bounded pathwidth or bounded carvingwidth in particular. In these two subcases we could design optimal polynomial time algorithms to solve CMI(2). All the results can be easily extended to CMI(p) for any constant p ≥ 1. As a main open question on CMI(p), it would be interesting to investigate general graphs in the case where the input instance is guaranteed to admit a solution, i.e., where the complexity of the problem cannot rely on feasibility issues. Moreover, since the completeness proof for the underlying decision problem holds for graphs with ∆ ≥ 4, while the problem has been solved for ∆ ≤ 2 (that is, paths and rings [2]), it remains to show what happens for sub-cubic graphs, i.e., ∆ ≤ 3. Other directions require investigating whether CMI(p) can be solved on specific graph classes by means of different techniques. Please note that the solutions proposed for the case p = 2 can be easily extended to any p > 2; however, it is worth investigating whether CMI(p) can be solved on specific graph classes in a smarter way. Finally, a generalization of the problem that could cover more realistic cases is to introduce multi-objective functions, similar to [26], where global cost constraints are also taken into account. What can be done, for instance, if the underlying graph exhibits specific properties like small-world structure [27], planarity [28], or a bounded chromatic number [29,30]? How would the problem be affected if the cost function determining the energy spent to activate an interface were related to the bandwidth or to the physical distance between connected devices (see [31])? All such possible research directions confirm the general and interdisciplinary nature of the multi-interface network model.
9,109.8
2020-01-26T00:00:00.000
[ "Computer Science" ]
Exocytosis, dependent on Ca2+ release from Ca2+ stores, is regulated by Ca2+ microdomains The relationship between the cellular Ca2+ signal and secretory vesicle fusion (exocytosis) is a key determinant of the regulation of the kinetics and magnitude of the secretory response. Here, we have investigated secretion in cells where the exocytic response is controlled by Ca2+ release from intracellular Ca2+ stores. Using live-cell two-photon microscopy that simultaneously records Ca2+ signals and exocytic responses, we provide evidence that secretion is controlled by changes in Ca2+ concentration ([Ca2+]) in relatively large-volume microdomains. Our evidence includes: (1) long latencies (>2 seconds) between the rise in [Ca2+] and exocytosis, (2) observation of exocytosis all along the lumen and not clustered around Ca2+ release hot spots, (3) high-affinity (Kd = 1.75 μM) Ca2+ dependence of exocytosis, (4) significant reduction in exocytosis in the presence of cytosolic EGTA, (5) spatial exclusion of secretory granules from the cell membrane by the endoplasmic reticulum, and (6) inability of local Ca2+ responses to trigger exocytosis. These results strongly indicate that the control of exocytosis, triggered by Ca2+ release from stores, is through the regulation of cytosolic [Ca2+] within a microdomain. Introduction Ca2+-dependent exocytosis is an essential and widespread process (Sudhof, 2004). An increase in cytosolic Ca2+ concentration ([Ca2+]) triggers secretory vesicle fusion with the plasma membrane, leading to the release of vesicle cargoes, such as neurotransmitters and proteins, for example hormones and enzymes. In excitable cells, Ca2+ entry through voltage-gated Ca2+ channels (Rizzuto and Pozzan, 2006) is the major route to elevate cytosolic [Ca2+] and trigger exocytosis. In some excitable cells, Ca2+ channels and exocytic sites are closely apposed; they are positioned within volumes of nanometre dimensions that are called nanodomains (Adler et al., 1991; Bucurenciu et al., 2008; Stanley, 1993), enabling fast, efficient regulation of the secretory response (Stanley, 1993). In other excitable cells, clusters of Ca2+ channels provide a localized 'cloud' of Ca2+, triggering exocytosis across microdomains (Beaumont et al., 2005; Borst and Sakmann, 1996; Chow et al., 1994). In the latter case, the secretory response is slower, but precise control of [Ca2+] within the microdomain is used to fine-tune the secretory output (Chow et al., 1994). The role of Ca2+ release from intracellular stores in triggering exocytosis is less well understood. In cells with voltage-gated Ca2+ channels, Ca2+ release can modulate exocytic responses (Dyachok and Gylfe, 2004; ZhuGe et al., 2006), but in many cell types Ca2+ release is the exclusive source of the increase in cytosolic [Ca2+] (Matthews et al., 1973; Tse et al., 1997). How the sites of Ca2+ release from stores are related to the sites of exocytosis and control secretion in these cells is not known. A good example of secretion regulated by Ca2+ release from Ca2+ stores is found in exocrine acinar cells. Here, exocytosis of enzyme-containing granules (Chen et al., 2005; Nemoto et al., 2001) is dependent on Ca2+ release through inositol trisphosphate receptors (InsP3Rs) on the endoplasmic reticulum (ER) Ca2+ store (Futatsugi et al., 2005; Ito et al., 1997). This Ca2+ response has complex characteristics in space, time and amplitude (Fogarty et al., 2000a; Kasai and Augustine, 1990; Kasai et al., 1993; Thorn et al., 1993).
Through the use of Ca2+ buffers (Kidd et al., 1999) and high-speed imaging (Fogarty et al., 2000b; Kidd et al., 1999) it has been shown that there is one hot spot of Ca2+ release that can act alone, giving a local response, or act to initiate larger, global Ca2+ signals (Fogarty et al., 2000b; Shin et al., 2001). This hot spot is likely to represent a site of enrichment of the Ca2+ release apparatus, possibly with more-sensitive isoforms of the InsP3R (Futatsugi et al., 2005; Lee et al., 1997; Nathanson et al., 1994; Park et al., 2008) or with a greater density of InsP3Rs. How these complex Ca2+ signals are employed to regulate the exocytic response is not known. Here, we test the hypothesis that control of exocytosis in acinar cells is through local Ca2+ release that targets high [Ca2+] to nearby sites of exocytosis within nanodomains. We employ high-speed two-photon microscopy to simultaneously measure cytosolic Ca2+ (with Fura-2 and Fura-FF) and exocytosis (with extracellular aqueous dyes) in response to the endogenous agonist cholecystokinin (CCK) and the photoliberation of Ca2+ from o-nitrophenyl (NP) tagged EGTA (NP-EGTA). Our results show that events of exocytosis are not clustered around hot spots of Ca2+ release, and we conclude that Ca2+ release from Ca2+ stores regulates exocytosis through the control of a microdomain. In cells that do not require a rapid secretory response we speculate that fine-tuning of Ca2+ levels within the microdomain gives precise control of secretory output. Results Global Ca2+ signals are the typical response of the pancreatic acinar cell to agonists (Kasai and Augustine, 1990). Fig. 1 shows an example of such a response recorded in freshly isolated tissue fragments loaded with Fura-2 and stimulated with a physiological concentration of 20 pM CCK. The image sequence in Fig. 1A shows the Ca2+ response in four cells on the edge of the tissue fragment (composed of 10-50 cells); the relative time point of each image is indicated with roman numerals (see Fig. 1B, upper graph). The Fura-2 fluorescence signal was converted into ratios and plotted in pseudocolor in Fig. 1A (upper images). To observe exocytic responses the tissue was bathed in sulforhodamine B (SRB), a fluorescent probe that surrounds the cells and diffuses into the lumens between the cells (Fig. 1A, lower images, colored red) (Nemoto et al., 2001; Thul and Falcke, 2004). Upon granule fusion SRB enters the granules, which is seen as the sudden appearance of small spherical objects (~0.8 μm diameter) at the apical pole of the cells (Fig. 1A, lower sequence). The average Fura-2 response in each cell is plotted in Fig. 1B (upper graph), and the time courses of the exocytic responses, measured as normalized SRB changes within regions of interest (ROIs) centered on each exocytic granule, are shown in Fig. 1B (lower graph). These CCK-induced global Ca2+ responses occur asynchronously (Yule et al., 1996); they originate in the apical region, spreading across the cell to the basal pole. Fig. 1C shows the image sequence and a graph of average Fura-2 ratio against time of the Ca2+ response for the lower left cell of Fig. 1A. The graph plots average changes in three ROIs spread across the cell and shows the apical-to-basal spread of the Ca2+ wave. We calculated the velocity of the Ca2+ wave to be 10.5 ± 0.77 μm/second (mean ± s.e.m., n = 17), which is comparable to previously published data (Larina and Thorn, 2005).
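The wave velocity follows from the ROI separation and the delay between half-maximal rises in the apical and basal regions; an illustrative calculation with assumed numbers (chosen only to land near the reported 10.5 μm/second) is:

```python
distance_um = 12.0                             # apical-to-basal ROI separation (assumed)
t_half_apical_s, t_half_basal_s = 0.42, 1.56   # half-maximal rise times (assumed)

velocity = distance_um / (t_half_basal_s - t_half_apical_s)
print(f"Ca2+ wave velocity ~ {velocity:.1f} um/s")   # ~10.5 um/s
```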
Response to an exogenous Ca2+ signal: spatial organization of exocytosis To initially characterize the stimulus-secretion relationship in the absence of the spatial complexities of the agonist-evoked response, we induced a cytosolic increase of [Ca2+] by uncaging Ca2+ from the photolabile Ca2+ buffer NP-EGTA. This method elevates Ca2+ uniformly across the cell. The image sequences in Fig. 2 (and in supplementary material Movie S1) show three cells at the edge of a pancreatic fragment loaded with NP-EGTA (AM ester) and Fura-2. The upper panel in Fig. 2A shows the ratiometric pseudocolor Fura-2 response to a 100-ms UV flash. The lower panel in Fig. 2A shows the induced exocytic response recorded by the entry of SRB into individual granules. The graph of the Fura-2 ratio over time shows a large, rapid rise in [Ca2+] after the UV flash that triggers exocytic activity. Fig. 1C. Within a single cell, the Ca2+ response is seen as a wave spreading from apical to basal regions. The enlarged images show a time sequence of images (0.2-second intervals) from the lower left-hand cell shown in A. The Ca2+ wave initiates in the apical region (red circle) and then spreads to the basal region (green circle). Scale bar: 10 μm. The graph shows average ratiometric fluorescence changes in each ROI (red, yellow, green) plotted against time; the Ca2+ signal rises first in the apical region. We then used these Ca2+ responses induced by the uncaging process to map sites of exocytosis and to determine whether they are clustered along the lumen. Cells were loaded with NP-EGTA and stimulated with a 100-ms UV flash. We then measured granule-to-granule distances along the lumen from one granule to all other exocytic granules in the same cell. A frequency histogram showed no evidence of clustering at short granule-to-granule distances (Fig. 3A). However, granule-to-granule distances would be affected by the length of the lumen in each cell. Therefore, for each cell a scatter plot of granule-to-granule distances was plotted against the lumen length (Fig. 3B). The predicted line, if the granule-to-granule separation were random, shows a close approximation to our data, consistent with a lack of preferential sites of exocytosis along the lumen (Fig. 3C). These data therefore indicate that all regions along the lumen are equally capable of exocytosis. We applied the same clustering analysis to the exocytic response to 20 pM CCK, with similar frequency distributions of granule-to-granule distances (supplementary material Fig. S1A). Since it is known that compound exocytosis (granule-to-granule fusion) is prevalent in this cell type, we extended this analysis to identify the location of primary granules (those fusing directly with the plasma membrane) and secondary granules (those fusing with primary granules). Supplementary material Fig. S1B shows that the frequency plot of granule-to-granule distances is very similar for primary and secondary granules. Response to an exogenous Ca2+ signal: exocytosis has a Ca2+ Kd of 1.75 μM We also used the responses induced by uncaging Ca2+ to determine the Ca2+ dependence of exocytosis. Here, we varied the duration of the UV flash over a range from 5 ms to 200 ms and, for each flash duration, measured the maximal response of Fura-FF (a low-affinity Fura-2 derivative with a measured in vivo Keff of 1.84 μM for Ca2+, see Materials and Methods) (n = 309 cells).
The duration of each flash was then calibrated as a Ca2+ change and plotted against the number of exocytic events per cell (measured by the entry of extracellular SRB dye into fused vesicles). The graph shows a sigmoid relationship with an estimated Kd of 1.75 μM for the Ca2+ dependence of exocytosis (Fig. 4). This is similar to the Kd of 2 μM Ca2+ found for enzyme release in these cells (Knight and Koh, 1984). Our Kd value is also comparable to that of endocrine cells, such as chromaffin cells, where the calculated Kd is 1.6 μM (Augustine and Neher, 1992). In all further experiments, we employed a 100-ms UV flash (3.4 μM Ca2+) to induce maximal exocytic responses. In summary, experiments where Ca2+ was uncaged from NP-EGTA show that, in principle, exocytosis can occur all along the lumen and that the exocytic process is relatively sensitive to cytosolic Ca2+. We next set out to determine the spatial relationship between the agonist-evoked Ca2+ signal and the triggered exocytic responses. Fig. 3. The granule-to-granule distance was measured using one reference exocytic granule in each cell and determining the distances to all other exocytic granules within the same cell. (A) The frequency distribution has a broad spread of granule-to-granule distances. (B) Same data as in A, but each granule-to-granule distance is plotted against the lumen length in each cell. As expected, lumen length is limiting and within each cell there is a wide distribution of granule-to-granule distances. (C) Mean granule-to-granule distances (mean ± s.d., n = 24 cells) for each cell plotted against lumen length in that cell. Also shown is the predicted line if the exocytic granules were evenly separated along the lumen. Agonist-evoked initiation sites of Ca2+ signals are distant from sites of exocytosis: morphometric analysis We used first derivatives and region mapping of the Ca2+ response to determine the precise point of origin of the Ca2+ responses to CCK (Fig. 5, n = 22 cells). These initiation hot spots of the Ca2+ signal probably correspond to the local enrichment of InsP3Rs in regions below the apical plasma membrane, as previously described by Kidd and colleagues (Kidd et al., 1999). Here, we measured changes of Fura-2 intensity in 3×3 μm ROIs, which robustly identified the origin of the Ca2+ responses. In most cases, a single discrete origin was identified (Fig. 5C,D), with the Ca2+ rise occurring in advance of surrounding regions (Fig. 5C). In a small number of cells (<10%) the Ca2+ signal appeared simultaneously across a broader region. We interpret these latter responses as indicating that the Ca2+ origin lay outside the plane of the two-photon cross-section and did not include these experiments in further analysis. The measured spatial relationship between the site of Ca2+-response initiation and the sites of granule fusion (Fig. 6) did not show any events of granule fusion at the origin of the Ca2+ signal. The numbers of exocytic events increased to a maximum at a distance of 3 μm from the Ca2+ signal origin and then decreased at further distances. These data, therefore, do not support the idea that Ca2+ nanodomains control exocytosis. Instead, they indicate the importance of larger-volume microdomains and even suggest a distinct separation of sites of Ca2+ release from sites of exocytosis.
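A Kd of this kind is typically extracted by fitting a Hill-type sigmoid to the events-versus-[Ca2+] data. The sketch below illustrates that fitting step on synthetic numbers standing in for the Fig. 4 data set; the routine, not the values, is the point.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(ca_uM, vmax, kd, n):
    """Sigmoid (Hill) dependence of exocytic events on [Ca2+]."""
    return vmax * ca_uM**n / (kd**n + ca_uM**n)

ca = np.array([0.3, 0.6, 1.0, 1.5, 2.0, 2.5, 3.4])        # uncaged [Ca2+], uM
events = np.array([0.4, 1.1, 2.6, 4.0, 5.2, 5.9, 6.4])    # events/cell (synthetic)

popt, _ = curve_fit(hill, ca, events, p0=[6.5, 1.75, 2.0])
print(f"Vmax = {popt[0]:.2f} events/cell, Kd = {popt[1]:.2f} uM, n = {popt[2]:.2f}")
```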
In summary, experiments in which Ca2+ was uncaged from NP-EGTA show that, in principle, exocytosis can occur all along the lumen and that the exocytic process is relatively sensitive to cytosolic Ca2+. We next set out to determine the spatial relationship between the agonist-evoked Ca2+ signal and the triggered exocytic responses.

Agonist-evoked initiation sites of Ca2+ signals are distant from sites of exocytosis - morphometric analysis

We used first derivatives and region mapping of the Ca2+ response to determine the precise point of origin of the Ca2+ responses to CCK (Fig. 5, n = 22 cells). These initiation hot spots of the Ca2+ signal probably correspond to the local enrichment of InsP3Rs in regions below the apical plasma membrane, as previously described by Kidd and colleagues (Kidd et al., 1999). Here, we measured changes of Fura-2 intensity in 3×3 μm ROIs, which robustly identified the origin of the Ca2+ responses. In most cases a single discrete origin was identified (Fig. 5C,D), with the Ca2+ rise occurring in advance of surrounding regions (Fig. 5C). In a small number of cells (<10%) the Ca2+ signal appeared simultaneously across a broader region. We interpret these latter responses as indicating that the Ca2+ origin lay outside the plane of the two-photon cross-section and did not include these experiments in further analysis. The measured spatial relationship between the site of Ca2+-response initiation and the sites of granule fusion (Fig. 6) showed no events of granule fusion at the origin of the Ca2+ signal. The number of exocytic events increased to a maximum at a distance of 3 μm from the Ca2+ signal origin and then decreased at further distances. These data, therefore, do not support the idea that Ca2+ nanodomains control exocytosis. Instead, they indicate the importance of larger-volume microdomains and even suggest a distinct separation of sites of Ca2+ release from sites of exocytosis.

Ca2+ initiation sites are distant from sites of exocytosis - block of exocytosis by EGTA

EGTA, a Ca2+ chelator with a slow on-rate for binding Ca2+, is often used as an indicator of the spatial extent of a Ca2+ response (Beaumont et al., 2005; Borst and Sakmann, 1996; Bucurenciu et al., 2008; Chow et al., 1994; Stanley, 1993). Since EGTA is unable to act as an effective Ca2+ buffer within nanodomains, a Ca2+ target that is close (<200 nm) to a Ca2+ source will be unaffected by the presence of EGTA (Bucurenciu et al., 2008; Thul and Falcke, 2004). By contrast, EGTA is an effective buffer when the Ca2+ source is further away from the Ca2+ target (<1 μm). We loaded the cells with EGTA-AM and recorded the Ca2+ and the exocytic responses (Fig. 7). The Fura-2 responses decreased with increased duration of EGTA-AM loading, as expected because EGTA competes with Fura-2 for cytosolic Ca2+. The exocytic response was significantly reduced within 30 minutes of EGTA-AM loading (Fig. 7). These data support the idea that exocytosis is regulated by Ca2+ microdomains.

Crowding limits access of secretory granules to the apical plasma membrane

To further investigate the ultrastructural relationship of organelles in the apical region of acinar cells, we next used thin-section transmission electron microscopy. The ER surrounds secretory granules in the apical region (Bolender, 1974). We now established that these ER projections extend right up to the apical plasma membrane. Fig. 8 shows examples of typical electron micrographs of the region surrounding the lumen of an acinar endpiece. In all of our electron microscopy sections we found evidence of rough ER lying immediately under the apical plasma membrane, sterically blocking granule access to the apical cell membrane. We measured the distances between the apical plasma membrane and the centre point of the first layer of secretory granules under the membrane. If the granules were tightly packed against the cell membrane, then the distance to the centre point of each granule should equal the granule radius. We measured the mean granule diameter as 748.6 ± 11.1 nm (mean ± s.e.m., n = 230); with tight packing, a normal distribution centered on 374 nm (the radius) is therefore expected, with a further peak at 1122 nm (reflecting a second layer of granules). Instead, the frequency-distance plot shows a first peak at 500 nm, indicating that granules are further away from the plasma membrane than predicted (Fig. 8B), and no further peak at greater distances. We therefore conclude that the granules are relatively loosely packed and are separated from each other and from the cell membrane by structures such as the ER.

Local Ca2+ responses fail to trigger exocytosis

A characteristic of the physiological response to CCK is that the global Ca2+ responses (the focus of this study) are interspersed with fast local Ca2+ responses that remain within the apical region (Kasai et al., 1993; Thorn et al., 1993). These local responses are thought to represent Ca2+ release from Ca2+ hot spots that fails to propagate across the cell (Thorn et al., 1993). Cytosolic Ca2+ levels within the nanodomain of the hot-spot region are transiently expected to be high (>50 μM) (Thul and Falcke, 2004) and sufficient, in principle, to elicit exocytosis within this region. In six independent experiments we compared local and global cytosolic Ca2+ responses in 20 cells stimulated with 10-12 pM CCK. A total of 66 Ca2+ events were observed, 25 of which were local.
All global Ca2+ signals induced exocytosis. By contrast, no local Ca2+ signals induced exocytosis (Fig. 9). We therefore conclude that local Ca2+ responses are not sufficient to drive exocytosis, which further supports the notion that nanodomains are not important in the regulation of secretion and that sites of Ca2+ release are further away from sites of exocytosis.

Fig. 7. Cytosolic EGTA blocks exocytosis. Cells loaded with EGTA-AM were stimulated with 50 pM CCK. Both the Ca2+ response recorded by using Fura-2 and the exocytic response recorded by using extracellular SRB decreased as a function of the EGTA-AM loading time. Comparisons between Fura-2 AM loading and Fura-2 loaded through a patch pipette (Thorn et al., 1993) indicate that, at 30 minutes, intracellular EGTA concentrations reach >200 μM. Data were obtained from 3-6 independent experiments per set of data (**P<0.01 compared with controls, Student's t-test).

Fig. 8. In all sections the sub-apical region also contained other organelles, such as rough ER, which lay between granules and the apical plasma membrane. (B) Frequency histogram of the distances between the apical membrane and granule centres; it is not consistent with tight packing of granules against the cell membrane. Instead, the data suggest that intercalated organelles, such as the ER, spatially separate granules from each other and from the cell membrane.

Discussion

Our study describes the spatial relationship between sites of Ca2+ release and sites of exocytic fusion in a cell type where secretion is dependent on Ca2+ release from Ca2+ stores. On the basis of previous work with cells dependent on Ca2+ entry to trigger exocytosis, we expected a close apposition between hot spots of Ca2+ release and sites of granule fusion. However, our findings indicate no sites of clustered exocytosis. Instead, we show that exocytosis occurs all along the luminal membrane and is in fact excluded within a region of ~3 μm around the hot spot of Ca2+ release, suggesting that the ER, required for Ca2+ release, locally obstructs granule access to the plasma membrane. Combined with the observations that EGTA profoundly inhibits exocytosis and that local Ca2+ responses do not trigger exocytosis, we conclude that secretion dependent on Ca2+ release from Ca2+ stores is controlled by cytosolic Ca2+ microdomains. We suggest that this microdomain regulation sacrifices speed of secretory control for precision; small adjustments in [Ca2+] fine-tune the secretory output.

Ca2+-release-dependent triggering of exocytosis - a model

The evidence presented here supports the idea that Ca2+ release controls an apical cytosolic microdomain of Ca2+ that in turn triggers exocytosis. Within this microdomain the Ca2+ concentration reflects the activity of many ion channels and will be regulated both by the ensemble control of the Ca2+ release channels and by the mechanisms of Ca2+ clearance. In turn, this tight regulation of the Ca2+ concentration leads to a precise control of secretion. By contrast, nanodomain control is found in some neurons; the very local delivery of Ca2+ through ion channels tightly couples the Ca2+ stimulus to secretory output with very short latencies. The limitation of this mechanism is that the stochastic opening and closing of the ion channels produces rapid and extreme local changes in Ca2+ in the cytosolic nanodomain beneath the Ca2+ channel.
Active and passive mechanisms of Ca2+ clearance in this nanodomain mean that closure of a Ca2+ channel leads to a rapid drop in [Ca2+]. Thus, the delivery of sufficient Ca2+ to trigger exocytosis depends on the probabilistic opening of the Ca2+ channel, and this can lead to failures to drive exocytosis (Stanley, 1993). Dependence on nanodomains is thus necessary to drive fast processes (e.g. neuromuscular control of muscle contraction) but leads to unpredictability in controlling the number of exocytic events. So how is the microdomain of Ca2+ regulated in the acinar cells? We have previously measured how much Ca2+ is released during responses in acinar cells and have shown that local Ca2+ responses are comparable to (slightly larger than) the puffs seen in oocytes (Fogarty et al., 2000b; Xun et al., 1998). Modeling of cytosolic free [Ca2+] suggests that, in the immediate vicinity of a puff site, cytosolic Ca2+ can rise to >50 μM but drops off dramatically further away from the puff site, where, at distances of 1 μm, the cytosolic Ca2+ is predicted to be <1 μM (Thul and Falcke, 2004). With our estimate of a Kd of 1.75 μM Ca2+ for exocytosis, a secretory response driven solely by Ca2+ release from the hot spot would need to have the exocytic sites within 1 μm of the Ca2+ hot spot. Since we show here that no clustering of exocytosis is evident around hot spots of Ca2+ release, that local Ca2+ responses cannot induce exocytosis and that EGTA can effectively block exocytosis, a more realistic model is that the explosive Ca2+ release from InsP3Rs within the hot spot recruits further InsP3Rs along the ER (Fogarty et al., 2000a). The summed Ca2+ release from these InsP3Rs thus collectively contributes to microdomain Ca2+ levels that, in turn, control exocytosis. We also suggest that the regulation of [Ca2+] within a microdomain is the key to integrating convergent cell stimuli in the control of cell secretion. In this scheme, the regulation of InsP3Rs via the CCK and acetylcholine cell-surface receptors is the major mechanism of control (Kasai and Augustine, 1990; Nathanson et al., 1992). This will be modulated by triggering Ca2+ release from ryanodine receptors (Nathanson et al., 1992), through other signaling pathways such as the secretin-dependent cAMP regulation of InsP3Rs (Giovannucci et al., 2000), and through the regulation of mechanisms of Ca2+ clearance (Camello et al., 1996). Divergent cell stimuli are thus integrated by converging on the regulation of [Ca2+] within the microdomain, which then precisely controls the secretory output of the cell.
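The steep fall-off of [Ca2+] away from a release site, and the way a slow buffer such as EGTA shortens it further, can be illustrated with the standard linearized steady-state solution for a point Ca2+ source in the presence of a mobile buffer, C(r) proportional to (1/r)·exp(-r/lambda), where lambda is the buffer length constant. This is a textbook approximation in the spirit of the modeling cited above, not the Thul and Falcke calculation itself, and the parameter values below are illustrative assumptions.

```python
import numpy as np

D_CA = 220.0   # free Ca2+ diffusion coefficient, um^2/s (assumed)

def length_constant(k_on_per_M_s, buffer_M):
    """Buffer length constant lambda (um): sqrt(D / (k_on * [B]))."""
    return np.sqrt(D_CA / (k_on_per_M_s * buffer_M))

def relative_ca(r_um, lam_um):
    """Steady-state [Ca2+] around a point source, relative to the
    value at r = 0.1 um: C(r) ~ (1/r) * exp(-r / lambda)."""
    ref = (1.0 / 0.1) * np.exp(-0.1 / lam_um)
    return (1.0 / r_um) * np.exp(-r_um / lam_um) / ref

# EGTA-like slow buffer: k_on ~ 1e7 /M/s at 200 uM gives lambda of
# roughly 0.3 um, so a target <200 nm from the source is largely
# spared while the residual signal at 1 um is strongly attenuated.
lam = length_constant(1.0e7, 200e-6)
for r in (0.1, 0.3, 1.0, 3.0):
    print(f"r = {r:.1f} um -> relative [Ca2+] = {relative_ca(r, lam):.4f}")
```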
Mechanisms of regional targeting of the Ca2+ release system and the exocytic machinery

Despite our conclusion that the hot spots of Ca2+ release are only loosely coupled with sites of exocytosis, it nevertheless should be recognized that both the Ca2+ release system and the exocytic machinery are precisely localized within the cell by mechanisms that are poorly understood. It is well established that InsP3Rs are enriched beneath the apical plasma membrane of polarized epithelia (Lee et al., 1997; Nathanson et al., 1994; Yule et al., 1997); as shown by immunohistochemistry, the receptors colocalize with apical markers such as the F-actin apical web (Waterman-Storer and Salmon, 1998) and tight-junction markers such as ZO-1 (Larina et al., 2007; Turvey et al., 2005). The mechanisms of this localization are not well understood, but disruptions of both the microtubular system (Colosetti et al., 2003) and the F-actin network perturb the generation of Ca2+ signals (Turvey et al., 2005). It has been proposed that the microtubular system acts to position the ER (Fogarty et al., 2000c) and that F-actin is part of a complex that specifically anchors InsP3Rs (Foskett et al., 2007). Functional work has shown that the whole of the ER within these cells forms a single continuous network (Park et al., 2000). Electron microscopy of the sub-plasmalemmal region under the apical pole has shown that it is enriched in secretory granules with interspersed ER elements (Bolender, 1974) (Fig. 8). Although there is no evidence for close association (docking) of granules at the cell membrane, it is clear that there must be mechanisms that move the granules to the apical region and retain them there. Again, these mechanisms probably depend on the cytoskeleton, a suggestion supported by earlier reports that movement of granules depends on kinesin (Marlowe et al., 1998) and myosin 1 (Poucell-Hatton et al., 1997), and by recent evidence that, in situ, granules are tethered in the apical region (Abu-Hamdah et al., 2006). Most recently, a proteomic analysis has shown that myosin Vc is present on zymogen granules (Chen et al., 2006). In terms of the molecular components of exocytosis in non-excitable cells, our knowledge lags behind that of excitable cells. So, whereas soluble NSF attachment protein receptor (SNARE) proteins on the cell membrane and granule membrane have been identified (Cosen-Binker et al., 2008; Gaisano et al., 1996; Hansen et al., 1999), it is still unclear which SNARE proteins are actually involved in exocytosis of zymogen granules at the apical plasma membrane.

Comparison with other measures of secretion

The two-photon method used here has been proven to reliably identify exocytosis of zymogen granules (Larina et al., 2007; Nemoto et al., 2001; Thorn et al., 2004). Cell capacitance measurements used as a read-out for exocytosis, by contrast, detect changes that might not be directly associated with fusion of a zymogen granule. The expectation that fusion of a zymogen granule should lead to rapid, large-step increases in capacitance is borne out in parotid cells (Chen et al., 2005) but has rarely been observed in pancreatic acinar cells, in which slower increases are usually seen (Ito et al., 1997; Maruyama and Petersen, 1994). Ito et al. kinetically separated these capacitance signals and suggested that a fast component possibly arises from processes other than the secretion of amylase (i.e., fusion of zymogen granules) (Ito et al., 1997). The mechanism underlying this fast component is unknown, but it might explain why capacitance changes have been seen with local Ca2+ responses (Maruyama and Petersen, 1994); yet we find no evidence that exocytosis of zymogen granules is induced by local Ca2+ spikes (Fig. 9).

Conclusions

Ca2+ release from Ca2+ stores is either the exclusive regulator of, or a component in, the regulation of exocytosis in many different cell types. Our work described here shows that exocytosis is precisely controlled by regulating Ca2+ release in a microdomain. We further show that, in response to physiological stimuli, exocytosis is only driven by global Ca2+ signals.

Materials and Methods

Cell preparation

Mice were humanely killed according to local animal ethics procedures.
Isolated mouse pancreatic tissue was prepared by a collagenase digestion method in a normal NaCl-rich extracellular solution (Thorn et al., 1993), modified to reduce the time in collagenase and to limit mechanical trituration. The resultant preparation was composed of pancreatic lobules and fragments (50-100 cells). In the indicated experiments, pancreatic fragments were loaded with 2 μM Fura-2 (or Fura-2FF) acetoxymethyl ester (AM) for 30 minutes at 30°C. Fragments were then washed, plated onto poly-L-lysine-coated glass coverslips and used within 3 hours of isolation from the animal. In experiments using NP-EGTA, the NP-EGTA-AM (1 μM) was loaded together with Fura-2-AM.

Live-cell two-photon imaging

We used a custom-made, video-rate, two-photon microscope employing a Ti:Sapphire laser (Coherent) with a 60× oil-immersion objective (NA 1.42, Olympus), providing a lateral resolution (full width at half maximum) of 0.26 μm and a z-resolution of 1.3 μm (Thorn et al., 2004). We imaged exocytic events using sulforhodamine B (SRB, 20 μg/ml, Sigma) as a membrane-impermeant fluorescent extracellular marker, excited by femtosecond laser pulses at 800 nm, with fluorescence emission detected at 550-700 nm. Fura-2 was excited at the same wavelength with fluorescence emission detected at 450-550 nm. The Fura-2 signal was analysed using the following formula: fluorescence ratio = (resting fluorescence - signal fluorescence) ÷ resting fluorescence, where the resting fluorescence is taken from an average of images before stimulation. Images, with a resultant capture rate of six frames per second (resolution of 10 pixels/μm, average of five video frames), were analysed with the Metamorph program (Molecular Devices Corporation). Kinetics of exocytic events were measured as changes in SRB fluorescence from ROIs (0.78 μm², 100 pixels) centered over granules. Traces were rejected if extensive movement was observed. All data are shown as the mean ± s.e.m.

Photoliberation of Ca2+ from NP-EGTA

An epifluorescent mercury light source provided high-intensity ultraviolet (UV) light to uncage Ca2+ from o-nitrophenyl (NP)-EGTA in a ~30-μm diameter field at the image plane. The duration of exposure to UV light was limited by a computer-controlled shutter (Prior) and was varied between 5 and 200 ms. Fura-FF was calibrated in vivo by loading the cells with 2 μM Fura-FF-AM (for 30 minutes) and then permeabilizing them with 500 nM ionomycin in the presence of a range of extracellular solutions of different [Ca2+]. For [Ca2+] of less than 1 μM we used the MAXC chelator program (Patton et al., 2004) to calculate the relative concentrations of Ca2+ and EGTA. For higher [Ca2+] we added Ca2+ directly to the medium. Fura-FF fluorescence was measured after equilibration and the fluorescence-Ca2+ curve was fitted using GraphPad Prism, giving a Keff of 1.84 μM. In converting the Fura-FF changes induced by flash photolysis of NP-EGTA, we assumed the resting fluorescence to be the same as Fmin. Fmax was then expressed as a fraction of Fmin based on the maximum change of fluorescence induced by ionomycin in the calibration experiments, which gave Fmax = 0.55·Fmin. In this way we used the equation [Ca2+] = Keff × (Fmin - F) ÷ (F - 0.55·Fmin) to calculate the [Ca2+] reached by each UV flash, where Fmin is the fluorescence at minimum [Ca2+], Fmax is the fluorescence at maximum [Ca2+], and Keff is the effective dissociation constant of Fura-FF.
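As a worked illustration of that single-wavelength calibration, the sketch below converts a Fura-FF fluorescence value into [Ca2+] using the relation above, with Fmax taken as 0.55·Fmin as in the calibration experiments; the example numbers are invented for demonstration.

```python
K_EFF_UM = 1.84  # in vivo Keff of Fura-FF for Ca2+, in uM

def ca_from_fura_ff(f, f_min, k_eff_um=K_EFF_UM):
    """[Ca2+] (uM) from a single-wavelength Fura-FF signal.

    Assumes resting fluorescence equals F_min and that
    F_max = 0.55 * F_min (fluorescence falls as Ca2+ rises).
    """
    f_max = 0.55 * f_min
    if not f_max < f <= f_min:
        raise ValueError("signal outside calibrated range")
    return k_eff_um * (f_min - f) / (f - f_max)

# Example: a flash that drops the signal to 70% of rest.
print(ca_from_fura_ff(f=0.70, f_min=1.0))  # ~3.7 uM (illustrative)
```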
Image analysis

All morphometric analysis was performed using the Metamorph imaging suite. Individual cells were readily identified by the outline apparent in the extracellular SRB stain. However, in some examples cells lay on top of one another, and we used the Fura-2 signal to aid the identification of single cells; here, the asynchronous Ca2+ response in each individual cell (as described in Fig. 1) supported unambiguous identification of single cells. When measuring distances between granules (Figs 3, 4, 6), we identified the approximate centre of all fused granules labeled with SRB and measured granule-to-granule distances parallel to the length of the lumen. The two-photon z-thickness of 1.3 μm approximates the diameter of the lumens between the cells. Experimentally, we focused through each tissue fragment and selected image planes to optimise the length of lumen observed (because this is where exocytosis exclusively occurs). Most images, therefore, have lumens much longer than the diameter of individual granules, making it simple to measure inter-granule distances parallel to the lumen. By contrast, where the lumen is complex, errors are possible in our estimates of inter-granule distance. However, this did not bias our data analyses because errors above and below the estimate are equally probable. In calculating the expected granule-to-granule distances (Fig. 3), we assumed that we had triggered the maximal exocytic response and a granule diameter of 1 μm, which gives the simple linear relationship y = 0.4x, where y is the granule-to-granule distance and x is the length of the lumen. In our estimates of the focal point of origin of a Ca2+ signal, our experimental approach was, again, to select image planes with long lumens. Knowing that InsP3Rs are located along the lumen (Lee et al., 1997; Nathanson et al., 1994; Yule et al., 1997), we were likely to image the origin of the Ca2+ signal in most recordings. By using the maximal increase of the Fura-2 signal in ROIs along the lumen, we believe that, in most cases, we were able to identify a single point of origin of a Ca2+ signal. In the few instances where the rate of increase of the Fura-2 signal appeared diffuse along the lumen (indicating a Ca2+-signal origin outside the plane of focus), records were rejected.

We acknowledge the personnel in the Centre for Microscopy and Microanalysis of the University of Queensland for their help in teaching us electron microscopy. This research was supported by grants from the Australian Research Council (DP0771481) and the National Health and Medical Research Council (456049).
Series Supply of Cryogenic Venturi Flowmeters for the ITER Project

In the framework of the ITER project, the CEA-SBT has been contracted to supply 277 venturi tube flowmeters to measure the distribution of helium in the superconducting magnets of the ITER tokamak. Six sizes of venturi tube have been designed so as to span a measurable helium flowrate range from 0.1 g/s to 400 g/s. They operate, in nominal conditions, either at 4 K or at 300 K, and in a nuclear and magnetic environment. Due to the cryogenic conditions and the large number of venturi tubes to be supplied, an individual calibration of each venturi tube would be too expensive and time consuming. Studies have been performed to produce a design which will offer high repeatability in manufacture, reduce the geometrical uncertainties and improve the final helium flowrate measurement accuracy. On the instrumentation side, technologies for differential and absolute pressure transducers able to operate in applied magnetic fields need to be identified and validated. The complete helium mass flow measurement chain will be qualified in four test benches:
- A helium loop at room temperature to ensure the qualification of a statistically relevant number of venturi tubes operating at 300 K.
- A supercritical helium loop for the qualification of venturi tubes operating at cryogenic temperature (a modification of the HELIOS test bench).
- A dedicated vacuum vessel to check the helium leak tightness of all the venturi tubes.
- A magnetic test bench to qualify different technologies of pressure transducer in applied magnetic fields up to 100 mT.

Introduction

Since June 2014, CEA-SBT has been contracted to supply 277 venturi tube flowmeters to measure the distribution of helium in the superconducting magnets of the ITER tokamak. Six sizes have been designed to cover the full range (table 1). Furthermore, the largest size is supplied in two configurations: a two-pressure-tap configuration (the classical configuration to measure helium mass flowrate) and a three-pressure-tap configuration, to measure flowrate and to be used for secondary quench detection (measurement of back flow). In addition to the supply, different tests will be performed in order to guarantee the correct operation of the entire acquisition chain:
- The qualification of 16 venturi tubes at room temperature; this test is a qualification of a statistically relevant number of "warm" venturi tubes.
- The qualification of 9 venturi tubes in cryogenic conditions.
- The leak check of all the venturi tubes.
- The identification and test of 8 pressure transducers (4 absolute and 4 differential) in magnetic fields of 100 mT.
Due to the number of venturis to be supplied, the required measurement accuracy and the specific conditions of use, a dedicated production process has been developed. This comprises the detailed design of the venturis, the development of specific test benches and the organization of the manufacturing activities. The first part of this article explains the drivers for the design, reports the measurement accuracy estimations and presents the specific process developed for the manufacturing control of these venturi tubes. The second part details the four test benches manufactured to perform all the qualification tests. The test bench design took into account criteria such as repeatability of manufacturing, reduction of cost and manpower needs.
Sizing of venturi flowmeters

The venturi flowmeters for ITER have been designed following the standard NF EN ISO 5167-4 [1], even though this standard is not strictly applicable in our case:
- The diameter of the upstream pipe (D) is smaller than 50 mm.
- The Reynolds number in the upstream pipe is less than 2×10^5 for flowmeters operating at room temperature.
- The Reynolds number in the upstream pipe is greater than 2×10^6 for flowmeters operating at around 4 K with supercritical helium.
The general design of the helium stream is given in figure 1. Nevertheless, the CEA/SBT experience is that such a design can give good results in cryogenic conditions. A key point is that the flowrate coefficient, which takes into account the fluid compressibility and the pressure losses in the venturi, must be measured in order to achieve high-accuracy flowrate measurements. The mass flow rate in a venturi can be derived from the Bernoulli relation, adapted for venturi flowmeters (1):

$\dot{m} = \alpha \,\frac{\pi d^{2}}{4}\sqrt{\frac{2\rho\,(P - p)}{1 - (d/D)^{4}}}$

where $\dot{m}$ is the mass flow rate, $\alpha$ the flowrate coefficient, $D$ and $d$ the upstream and neck diameters respectively, $P$ and $T$ the pressure and temperature in the upstream pipe, $\rho$ the density of the fluid and $p$ the pressure at the neck. All sizes of venturi flowmeter have been designed to have the same differential pressure ($\Delta p = P - p = 200$), so as to use only one kind of differential pressure transducer.

Measurement accuracy

As for all measuring tools, the results obtained include some uncertainties. For the venturi flowmeters installed at ITER, two sources can be identified: uncertainties that come from parameters measured during the operation of the ITER machine ($P$, $T$, $\Delta p$) and those measured during the manufacturing and tests of the flowmeters ($\alpha$, $D$, $d$). D and d will be measured for each flowmeter in order to minimize the geometrical error, and $\alpha$ will be statistically estimated at the CEA/SBT laboratory in two dedicated test benches in order to minimize the cost of this part of the work. To estimate the flowrate coefficient, all the physical values have to be measured, and this includes the mass flow rate. CEA/SBT has proposed to use a Coriolis flowmeter, as CERN has demonstrated the capability of this technology to operate at cryogenic temperature [2]. It turns out that some uncertainties appear twice (on $P$, $T$, $\Delta p$): the first time during the measurement in the lab and the second time during operation in the ITER machine. To distinguish the two cases, an index m is added for the measurements performed in the laboratory. By differentiation of formula (1), the relation which links the uncertainties can be calculated. The flowrate coefficient will be measured on several venturi flowmeters (of a given size) and will then be applied to all venturis of that size. This results in the geometrical uncertainties being taken into account twice. If a quadratic error is calculated, the individual contributions combine in quadrature (formula (4)). During ITER magnet operation, cold flowmeters will operate at temperatures between 4.2 K and 6 K and at pressures between 4 bar and 10 bar. In these conditions, the flowmeters will work near the helium critical point (2.27 bar, 5.2 K), where there is a strong dependence of the helium density on the temperature. The consequence is a significant variation of the uncertainty on the mass flow measurement depending on the operating point.
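To make the propagation concrete, the sketch below evaluates the venturi relation and a quadrature (root-sum-square) error estimate for helium mass flow. It is a schematic reconstruction using standard first-order propagation, not the CEA-SBT calculation, and all input values are illustrative.

```python
import numpy as np

def mdot(alpha, d, D, rho, dp):
    """Venturi mass flow: alpha * (pi d^2 / 4) * sqrt(2 rho dp / (1 - (d/D)^4))."""
    beta = d / D
    return alpha * np.pi * d**2 / 4.0 * np.sqrt(2.0 * rho * dp / (1.0 - beta**4))

def quadrature_uncertainty(params, rel_errs):
    """First-order root-sum-square propagation by numerical perturbation."""
    base = mdot(**params)
    total = 0.0
    for name, rel in rel_errs.items():
        bumped = dict(params)
        bumped[name] = params[name] * (1.0 + rel)
        total += ((mdot(**bumped) - base) / base) ** 2
    return base, np.sqrt(total)

# Illustrative supercritical-helium operating point (not ITER data):
# alpha dimensionless, diameters in m, density in kg/m^3, dp in Pa.
params = dict(alpha=0.98, d=5e-3, D=12e-3, rho=130.0, dp=2.0e4)
# Assumed relative errors: 2 um on 5 mm diameters, 0.2% FS pressure, etc.
rel_errs = dict(alpha=0.02, d=4e-4, D=4e-4, rho=0.03, dp=0.002)
m, rel_u = quadrature_uncertainty(params, rel_errs)
print(f"mdot = {m*1e3:.1f} g/s, relative uncertainty ~ {100*rel_u:.1f} %")
```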
Figure 2 shows the uncertainty obtained in the estimation of the density of the helium depending on the pressure and the temperature. This result is obtained with an assumed uncertainty of 2.5% for the temperature measurement and 0.2% (full scale) for the pressure measurement. An absolute error of 2 µm is assumed for the geometrical measurements. The measurement error on the mass flow rate as measured by the Coriolis flowmeter is estimated to be 2%. Figure 3 shows the estimated error on the mass flow rate measured by a DN25 venturi flowmeter once all the uncertainties are taken into account. A significant variation is observed (up to a factor of 5) depending on the operating point of the venturi tube.

Manufacturing organization

Figure 3 shows the manufacturing workflow which has been implemented in order to simplify the required tests and to ensure the quality of all the venturi tubes. The pressure transducer test is independent of the venturi tube manufacturing, so it does not appear in this scheme. A metrology step has been implemented immediately after the venturi has been machined, but before any welding operations take place. This measurement of the geometry of the gas stream is made on all venturis and results in a reduction of the geometrical uncertainties. Moreover, it allows CEA-SBT to perform a statistical estimation of the flowrate coefficient α. After the metrology step, the venturi undergoes welding operations before being subjected to a thermal shock and then a helium leak test under pressure. The venturis are prepared in one of two intermediate configurations, see figures 4a and 4b. In the "closed" configuration (figure 4a), three of the four open ports are welded closed with a plug. A VCR fitting is welded to the remaining port (the upstream pressure tap). It is through this port that the venturi is pressurised with helium gas up to the test pressure of 43 bar. The venturi is placed inside a dedicated vacuum vessel (figure 5a) for the leak test. The "connected" configuration (figure 4b) is adapted to the mass flowrate qualification of the venturi. Dedicated fittings are welded onto all four open ports to facilitate connection to the warm or cold helium flowrate measurement benches (figures 5b and 5d). In order to perform a leak test of a venturi in the "connected" configuration, all the openings (except the upstream pressure tap) are closed using plugs and metal seals. In the final step, the venturis are returned to the manufacturer for removal of all fittings by spark machining before delivery to ITER (figure 4c).

Leak test bench (figure 5a)

This test bench has two purposes: the first is to measure the leak tightness of the venturi tube by connecting the vacuum chamber of the bench to a helium spectrometer and injecting helium inside the venturi; the second is a test under pressure, which is required for pressure equipment. For these tests, only one connection (the upstream pressure tap) on the venturi is used, and this is the case for all sizes of venturi tube. Five venturi tubes are tested during each run to reduce the manpower required and the vacuum pumping time. In the distribution piping, the number of pressure transducers has been reduced to only one thanks to a system of valves. Each time a valve is opened (to connect one venturi under test), the increase of volume entails a decrease of pressure in the line; the recovery to the initial pressure demonstrates that the venturi under test has indeed been pressurized with helium. The pressure in the venturi as well as the measured leak rate is recorded.
If a leak is detected, the venturi tube which has failed can be identified and isolated from the other venturis under test.

Magnetic bench (figure 5b)

The aim of this test is to identify pressure transducers which are able to measure pressure accurately when subjected to a magnetic field of 100 mT, as the pressure transducers in the ITER machine will experience such a field. The behaviour of 8 sensors will be characterised in a constant magnetic field applied in three directions. Four different technologies of pressure transducer will be tested: piezoelectric, capacitive, resistive and optical. These tests will be performed in a superconducting coil available at CNRS-CRETA (France). A dedicated test bench has been designed to react the magnetic forces and to guarantee the position of the pressure transducers. During the tests, the pressure transducers will be mounted in a cubic stainless steel frame. The frame is inserted in a cubic support which allows orientation in three directions in the applied magnetic field. This cubic support has two functions: to maintain the sensor mechanically and to connect the sensor to the gas piping. This piping is also connected to a valve panel with two pressure transducers, located far from the magnetic field, which are used as reference sensors.

Warm flowrate measurement bench (figure 5c)

The warm (300 K) flowrate measurement bench will be used to qualify sixteen "warm" venturi tubes, one at a time. The principle is to determine the flowrate coefficient α by using a Coriolis flowmeter as a reference sensor. This estimation can be made with helium gas (supplied from the warm station of the 400 W @ 1.8 K cryorefrigerator) or with nitrogen gas. As advocated in [1], the absolute pressure and temperature are measured at the upstream pressure tap of the venturi tube. The differential pressure is measured between the two pressure taps. In order to estimate the pressure drop across the whole venturi tube, a differential pressure transducer is also installed between the upstream and downstream ports of the venturi.

Cold flowrate measurement bench (figure 5d)

This bench is a modification of our supercritical and cryogenic loop, called "HELIOS" (HElium Loop for hIgh lOaded Smoothing). It will be used to qualify the performance of the three sizes of cryogenic venturi. Three examples of each venturi size will be tested in series (figure 6). The principle of this bench is similar to that of the warm bench:
- The flowrate coefficient is estimated by comparison with a reference flowmeter at cryogenic temperatures.
- The pressure losses in the venturi tube are also measured.
The reference measurement of the mass flowrate will be performed in two ways: by thermal estimation and/or by Coriolis flowmeter (as described in [2]). For the largest mass flow rates, only the comparison with the Coriolis flowmeter can be performed, due to the limited power of our refrigerator. Each series of venturis will be tested in stable conditions. A set of switching valves is used to reduce the number of pressure sensors. As shown by the measurement accuracy calculations, the thermometry needs to be particularly well implemented in order to obtain good overall flowrate measurement accuracy. Dedicated thermometric supports (with copper 'fingers' inserted into the fluid stream) will be used. A sketch of the thermal flow estimate is given below.
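For the thermal estimation mentioned above, mass flow can be recovered from an energy balance over a calibrated heater: mdot = Q / (h_out - h_in). The sketch below shows the arithmetic only; helium_enthalpy is a hypothetical stand-in for a real property library, and all numbers are illustrative.

```python
def helium_enthalpy(t_k, p_pa):
    """Stand-in for a real helium property lookup (e.g. a CoolProp- or
    HEPAK-style call). The linear form below is a made-up placeholder
    using an assumed cp of ~5 kJ/(kg K), only good enough to run."""
    return 5.0e3 * t_k  # J/kg

def mass_flow_from_heat_balance(q_heater_w, t_in_k, t_out_k, p_pa):
    """mdot = Q / (h_out - h_in) for a heater of known power."""
    dh = helium_enthalpy(t_out_k, p_pa) - helium_enthalpy(t_in_k, p_pa)
    return q_heater_w / dh  # kg/s

# Example: a 50 W heater raising the stream from 4.5 K to 5.5 K at 5 bar.
mdot = mass_flow_from_heat_balance(50.0, 4.5, 5.5, 5e5)
print(f"estimated flow ~ {mdot*1e3:.0f} g/s")
```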
Conclusion

At the date of the conference, the project is still in the pre-production phase. The warm flowrate bench and the leak test bench have already been manufactured. The magnetic test bench is currently being manufactured, and the technical specifications for the modification of the cold flowrate test bench are being finalised. The supply contract for the manufacture of the venturi tubes is due to be signed imminently with a subcontractor, and we are confident in our manufacturing process. In addition to the manufacturing know-how, the critical points of the project are the measurement of temperature and mass flowrate in cryogenic conditions. For the former, CEA-SBT will use its skills to perform the measurement with good accuracy (CEA-SBT is also in charge of the supply of 2200 thermometric chains for the ITER magnets). For the latter, we have already used a Coriolis flowmeter [2] in previous applications with good results. Furthermore, two new Coriolis flowmeters have been procured and benchmarked against older Coriolis flowmeters; the initial analysis indicates very good correlation between these measurements.
An Efficient Super-Resolution Network Based on Aggregated Residual Transformations

In this paper, we propose an efficient multibranch residual network for single image super-resolution. Based on the idea of aggregated transformations, the split-transform-merge strategy is exploited to implement the multibranch architecture in an easy, extensible way. By this means, both the number of parameters and the time complexity are significantly reduced. In addition, to ensure the high performance of super-resolution reconstruction, the residual block is modified and simplified with reference to the enhanced deep super-resolution network (EDSR) model. Moreover, our developed method possesses advantages of flexibility and extendibility, which are helpful for establishing a specific network according to practical demands. Experimental results on both the Diverse 2K (DIV2K) and other standard datasets show that the proposed method can achieve a good performance in comparison with EDSR under the same number of convolution layers.

Introduction

In recent years, single image super-resolution (SISR) has attracted a lot of attention from researchers in the field of computer vision. SISR aims to reconstruct a high-resolution image I_HR from a single low-resolution image I_LR [1], and it has been widely used in many fields, such as remote sensing [2], medical imaging [3], and environmental monitoring [4-7]. To our knowledge, the interpolation technique based on sampling theory was the earliest method used to solve the super-resolution problem. However, it has serious shortcomings in predicting details and realistic textures. To address this problem, techniques that learn the mapping relationship between I_LR and I_HR have been proposed, such as neighbor embedding [8-11] and sparse coding [12-16]. In the last few years, deep learning-based approaches for super-resolution have been constantly emerging [16-20].

Dong et al. first applied CNNs (convolutional neural networks) to super-resolution [18], with a satisfactory effect in practical use. Later, Kim et al. designed SRResNet (residual network for super-resolution) [20] based on the well-known residual network ResNet [19]. Benefiting from the jump connection and recursive structure, deeper layers are easy to realize for better performance. To simplify SRResNet, the enhanced deep super-resolution network (EDSR) [1] was proposed by Lim et al., which optimizes the architecture of the residual blocks by removing unnecessary modules. Although these ResNet-based models can improve the quality of reconstruction through deeper layers, they all meet the same problem: a sharp increase in the number of parameters. Especially in engineering practice, the cost of a large number of residual blocks and parameters has hampered the wider use of ResNet-based models. Therefore, the question of how to reduce the number of model parameters without loss of reconstruction quality has become one of the hottest research issues.

Nowadays, various methods have been reported to reduce the number of parameters [21-24]. Network pruning, SVD (singular value decomposition), and the split-transform-merge strategy are three representative methods. In 1990, LeCun et al. first proposed the concept of network pruning, which decreases the model size by cutting off the redundant parameters of the neural network [21]. This method requires a lot of iterative training to ensure network performance. In 2014, Denton et al.
proposed the SVD method to reduce the number of weights [22]. In the SVD method, a complex matrix is represented as the product of smaller and simpler submatrices, which can significantly reduce network parameters. However, with the increase of the matrix scale, the calculation of the singular values becomes complicated and difficult. In recent years, the split-transform-merge strategy has attracted more and more attention from researchers. Based on this strategy, the Inception models were developed with less computational complexity and a smaller number of parameters [23]. In the Inception models, the input is split into several low-dimensional embeddings (by 1×1 convolutions), then converted through a set of specialized filters (3×3, 5×5, etc.) and finally merged by concatenation [24]. However, because the hyper-parameters of each branch need to be set properly, it is hard to find a simple design method for the construction of an Inception network. In 2016, Xie et al. proposed the ResNeXt [24] network based on aggregated transformations, which can be regarded as an improvement of the split-transform-merge strategy. However, ResNeXt was originally designed for image classification; therefore, its structure must be changed and optimized when applying it to super-resolution.

In this paper, an efficient multibranch residual network for the super-resolution task is proposed. The multibranch architecture is built on the basis of aggregated transformations. In the meantime, we optimize the residual block with reference to EDSR. According to the proposed network structure, two specific models are established and given as examples in this work. Experiments show that our models can achieve a good reconstruction quality with a significant reduction of network parameters.

Related Work

Inception: The Inception network is a typical multibranch architecture based on the split-transform-merge strategy. Each branch in the network is carefully designed to gain good performance in terms of speed and accuracy. However, the customized size and number of each filter in the branch make the Inception network hard to implement.

SRResNet: SRResNet is a super-resolution reconstruction network inspired by the residual network [20]. Based on the original residual structure, the network removes the activation layer after the residual block and obtains a good image reconstruction result in terms of human vision.

EDSR: EDSR is a state-of-the-art super-resolution network which further modifies the residual block structure based on SRResNet [1]. Since BN (batch normalization) layers remove the range flexibility from networks and consume a lot of memory, EDSR removes the two BN layers in the residual block. Benefiting from this structural modification, EDSR shows great improvements in image reconstruction and a reduction in graphics processing unit (GPU) memory usage.

ResNeXt: Based on the residual block architecture, ResNeXt exploits the split-transform-merge strategy in an easy, extensible way, namely aggregated residual transformations [24]. This method involves stacking a series of homogeneous, multibranch residual blocks with only a few hyper-parameters to set [24]. The branches of ResNeXt each perform their own set of convolutions and merge at the end of the block. Compared with ResNet, ResNeXt shows better performance and less computational complexity in the task of image classification.
Grouped convolution: Grouped convolution was first proposed in the AlexNet paper [25] in 2012. The motivation given by the authors was to distribute the model over two GPUs to overcome the limited hardware resources of a single GPU. Grouped convolution divides the feature maps into multiple groups, convolves each group separately (originally on separate GPUs) and subsequently aggregates the results.

Methods

EDSR has achieved good results in the super-resolution field, but shows little improvement in parameter count compared with other algorithms. To reduce the number of parameters, the aggregated transformations method is applied to EDSR in this paper. This method, by which the multibranch architecture of a network can be built in an easy way, was originally presented in ResNeXt. It can reduce the parameter count and time complexity without significantly decreasing the accuracy of image classification.

A simple and obvious way to directly transform EDSR into a multibranch architecture is the aggregated transformations method. However, the original residual block of EDSR, with two convolution layers, is inconsistent with this method [24]. Such a direct transformation would result in a wide and dense model, which not only brings no benefit but adds more complexity. To solve this issue, we must redesign the model with a multibranch architecture. Three or more convolution layers are required in the residual block of the new model. To simplify the structure of the residual block and enhance the feature extraction capability, we adopted three convolution layers in this work. Compared with the original residual block shown in Figure 1a, our rebuilt residual block removes the unnecessary rectified linear unit (ReLU) and BN layers with reference to the EDSR structure. This removal helps improve the performance of image reconstruction.
As shown in Figure 1, the convolutional layer (Conv) is used to perform feature extraction and ReLU to rectify the network output. The BN layer is used to normalize the features, and Addition represents the additional layer that the network adds as needed. It is also known from the experiments by Lim et al. [1] that increasing the number of feature maps above a certain level makes the training process numerically unstable. The typical solution is to place a constant scaling layer (also called a MulConstant layer) after the last convolutional layer of each residual block. Owing to the use of aggregated transformations, the number of feature maps per convolution layer can be significantly reduced in comparison with the original EDSR model; therefore, the model proposed in this paper does not require the constant scaling layer. From the results in the Experiment section below, we can see that adding a constant scaling layer could worsen the performance. After removing the constant scaling layer, the architecture of our multibranch network is modeled as shown in Figure 2. The detailed description of ResBlock (residual block) is given in Figure 1c. Upsample (the upsampling structure) can magnify the image to the desired multiple.
As shown in Figure 3, we design two different configurations for our multibranch architecture: EDSRSP-3×3 and EDSRSP-1×1, where the number denotes the size of the first and third convolution kernels. The configuration of the residual block in EDSRSP-3×3 is the same as that in EDSR, i.e., a 3×3 convolution kernel with 256-d input and 256-d output. It is seen from Table 1 that the number of parameters in EDSRSP-3×3 is reduced by 1/3 compared with EDSR. To further decrease the parameters, the configuration of EDSRSP-1×1 is adjusted as shown in Figure 3b. The detailed adjustments include using a 1×1 convolution kernel in the first and third layers and a 512-d input and output in the second layer. EDSRSP-1×1 is similar to the bottleneck structure of ResNet, with only a small modification of the output dimension in the first layer. Due to the use of the 1×1 convolution kernel, the number of parameters in EDSRSP-1×1 is reduced to 1/4 of that in EDSR.

For the implementation of aggregated transformations, our model has two equivalent structures, as shown in Figure 4. The two structures deliver the same level of reconstruction performance, but the structure based on group convolution (Figure 4b) has distinct advantages in time complexity and memory usage. Therefore, we use group convolution to realize the aggregated transformations, as sketched below.
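As a sketch of how a grouped convolution realizes the multibranch residual block, the PyTorch-style module below mirrors the three-layer EDSRSP-1×1 block described above (1×1, grouped 3×3, 1×1, no BN, and an identity skip). The module name, the cardinality of 32 and the ReLU placement are our illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class MultiBranchResBlock(nn.Module):
    """Three-layer residual block whose middle stage implements the
    aggregated transformations as one grouped 3x3 convolution.
    Channel widths follow the EDSRSP-1x1 description (256 -> 512 ->
    512 -> 256); groups=32 is an assumed cardinality."""

    def __init__(self, channels=256, width=512, groups=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=1),
            nn.ReLU(inplace=True),
            # groups=32 splits the 3x3 stage into 32 parallel branches,
            # cutting its parameter count by a factor of `groups`.
            nn.Conv2d(width, width, kernel_size=3, padding=1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, kernel_size=1),
        )
        # No BN layers and no constant scaling layer, as argued above.

    def forward(self, x):
        return x + self.body(x)

# Quick shape check on a dummy feature map.
block = MultiBranchResBlock()
y = block(torch.randn(1, 256, 48, 48))
print(y.shape)  # torch.Size([1, 256, 48, 48])
```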
Datasets

For our experiments, the newly proposed Diverse 2K (DIV2K) dataset [26] is used due to its high-quality (2K) resolution for image reconstruction tasks. The DIV2K dataset consists of 800 training images, 100 validation images, and 100 test images. Since the test dataset ground truth has not been published, the performance comparison was made on the validation dataset. We also compared the performance on three standard benchmark datasets: Set5 [9], Set14 [12], and B100 [27].

PSNR and SSIM Criteria

Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are the two most-used indicators in the field of super-resolution reconstruction; they measure the similarity between the reconstructed image and the original high-resolution image [28,29]. The mathematical expression of PSNR is as follows:

$\mathrm{PSNR} = 10\log_{10}\frac{(2^{n}-1)^{2}}{\mathrm{MSE}}$

where $n$ is the number of bits per pixel, and the mean square error (MSE) is defined as:

$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[f(i,j)-\hat{f}(i,j)\bigr]^{2}$

where $f(i,j)$ and $\hat{f}(i,j)$ represent the original and reconstructed images, respectively. Both are of size $M \times N$, and $(i,j)$ stands for the pixel coordinate. The larger the value of PSNR, the better the effect of image reconstruction. SSIM is another popular criterion to compare the reconstructed image $x$ and the original high-definition image $y$:

$\mathrm{SSIM}(x,y) = \frac{(2u_{x}u_{y}+c_{1})(2\sigma_{xy}+c_{2})}{(u_{x}^{2}+u_{y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})}$

where $u_{x}$, $u_{y}$ are the mean values of $x$, $y$; $\sigma_{x}^{2}$, $\sigma_{y}^{2}$ are the variances of $x$, $y$; $\sigma_{xy}$ is the covariance of $x$ and $y$; $c_{1}=(k_{1}L)^{2}$ and $c_{2}=(k_{2}L)^{2}$ are constants that maintain formula validity by avoiding a zero denominator; $L$ represents the dynamic range of the pixel value; and $k_{1}=0.01$ and $k_{2}=0.03$ by default. The larger the value of SSIM, the better the similarity of the two images.
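A minimal numpy implementation of the two formulas above is sketched below (for 8-bit images, n = 8). It is a generic reference implementation, not the MATLAB evaluation code distributed with the EDSR paper; in particular, the SSIM here uses a single window over the whole image, whereas the standard metric averages over local Gaussian windows.

```python
import numpy as np

def psnr(original, reconstructed, n_bits=8):
    """PSNR in dB between two images of identical shape."""
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    peak = (2 ** n_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse)

def ssim_global(x, y, n_bits=8, k1=0.01, k2=0.03):
    """Single-window SSIM: a simplified illustration of the formula."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    L = 2 ** n_bits - 1
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    ux, uy = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - ux) * (y - uy)).mean()
    return ((2 * ux * uy + c1) * (2 * cov + c2)) / \
           ((ux**2 + uy**2 + c1) * (vx + vy + c2))

# Example with a synthetic 8-bit image and a noisy copy.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(f"PSNR = {psnr(img, noisy):.2f} dB, SSIM = {ssim_global(img, noisy):.4f}")
```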
Training Details

For training, we use and adjust the training parameters given in Lim et al. [1]. Neither the pre-training model nor the geometric self-ensemble strategy is used in this training. The chop size is set to 4.0×10^4, and the patch sizes for ×3/×4 are set to 96. We also learnt from the code published with the EDSR paper and trained the models using NVIDIA Titan Xp GPUs. Following the official baseline model, the EDSR model used here is retrained with no modifications other than those mentioned above. It takes seven days to train EDSR compared with three days for our models.

Comparison between the Cases with and without the MulConstant Layer

To analyze the effect of the MulConstant layer in our designed residual block, we performed experiments on the EDSRSP-1×1 ×4 model and the EDSRSP-3×3 ×2 model. The three experiments correspond to three different cases: (1) without the MulConstant layer; (2) a MulConstant layer with the factor set to 0.1; (3) a MulConstant layer with the factor set to 0.01. From the experimental results shown in Figure 5, we can see that removing the MulConstant layer in our model results in better performance.

Evaluation on DIV2K Dataset

For the performance evaluation, a comparison between the retrained EDSR model and our models is made and shown in Figure 6. The detailed evaluation method is described in Lim et al. [1]. Using the PSNR and SSIM criteria, the evaluation is conducted on 10 images of the DIV2K validation set. Concretely, we use full RGB channels and ignore the (6 + scale) pixels from the border. The small difference between EDSR and our models verifies the performance of the proposed method. Table 2 gives the PSNR and SSIM scores of EDSR and our models on the DIV2K validation set, where the results are consistent with those in Figure 6. In addition, visual comparisons of the super-resolution images are shown in Figure 7. It can be seen, intuitively, that our models show high quality regardless of details or textures. We also performed a running-time test on the pictures in Figure 7. The experimental results are shown in Table 3. As can be seen from the data in the table, the proposed models have a faster running time than EDSR.
Evaluation on Other Datasets

More experiments were implemented on the standard datasets B100, Set5, and Set14. For comparison, we measured PSNR and SSIM on the y-channel, ignoring the same number of border pixels as the scaling factor. The MATLAB code provided with the EDSR paper was used for this evaluation. As can be seen from Table 4, our models achieve the same level of performance as EDSR with fewer parameters. The experimental results show that, under the premise of ensuring reconstruction quality, the proposed models have obvious advantages in time complexity and space complexity. This also means a reduction in the demand for hardware resources in practical applications, which makes our models easier to implement in real conditions.

Conclusions

In this paper, we propose an efficient super-resolution network based on aggregated residual transformations. Based on the proposed network, two specific models were designed and built in this work. Each of the two models has its own advantages regarding reconstruction performance and the number of parameters. Experiments on both DIV2K and other standard datasets were implemented to evaluate the performance of our network. The experimental results prove that our method is effective and easy to implement. Compared with EDSR, the number of parameters is significantly reduced with the same-level performance.

Figure 1. Comparison of residual blocks in the original ResNet, enhanced deep super-resolution network (EDSR), and our model. (a) Original ResNet residual block; (b) EDSR residual block; (c) our proposed residual block.
Figure 2. The architecture of the proposed multibranch network.
Figure 7. Super-resolution reconstruction results on the DIV2K dataset.
Table 1. Parameters of EDSR and our models.
Table 3. Running time (s) comparison between EDSR and the proposed models.
5,590.2
2019-03-20T00:00:00.000
[ "Computer Science" ]
Prediction of Welding Deformation and Residual Stresses in Fillet Welds Using Indirect Couple Field FE Method: Fillet welds are extensively used in shipbuilding, automobile and other industries. Heat concentrated in a small area during welding induces distortions and residual stresses, affecting the structural strength. In this study, an indirect coupled-field method is used to predict welding residual stresses and deformation in a fillet joint due to welding on both sides. A 3-D nonlinear thermal finite element analysis is performed in the ANSYS software, followed by a structural analysis. Symmetrical boundary conditions are applied on half of the model for simplification. The results of the FE structural analysis predict the residual stresses in the specimen. A comparison of the simulation results with experimental values proves the authenticity of the technique. The present study can be extended to complex structures and welding techniques.

INTRODUCTION

Fillet joints are widely used in bridges and ship structures. Fillet welded joints usually suffer various welding deformation patterns such as longitudinal shrinkage, transverse shrinkage, angular distortion and bending. The concentrated thermal gradient followed by cooling during the welding process induces residual stresses and distortions. Excessive distortions of welded components have negative effects on fabrication accuracy, external appearance and various strengths of the structures. Various corrective measures like post-weld heat treatment, flame straightening, vibratory stress relief, induction heat treatment and cold bending can be used to lower the distortion level. However, these methods are costly and time consuming. Welding-induced residual stresses may cause early yielding and reduce buckling strength. Therefore, the prediction and control of welding deformation and residual stresses is critical to improve the quality and reliability of the structure. Withers and Bhadeshia (2001) defined residual stresses and summarized their measurement techniques. Experimental methods for the prediction of residual stress include stress relaxation, X-ray diffraction, ultrasonic methods and cracking (Teng et al., 2001). All these methods are either destructive or expensive, which drives the need for simulation techniques.

A weld simulation model involves geometrical constraints, material nonlinearities, all physical phenomena and welding parameters such as welding speed, current, voltage and efficiency. Improved and more complex simulation models also include the number and sequence of passes and the filler material. Researchers have been working in the field of computational welding mechanics in order to accurately predict welding residual stresses and deformations (Goldak, 2005; Lindgren and Karlsson, 1988; Lindgren, 2001).

The welding process is treated as a transient nonlinear problem in finite element thermo-elastic-plastic analysis. Camilleri et al. (2003, 2005) computed the welding temperature field by FE methods and validated the results by experiments. Lee et al. (2008), Ueda et al. (1988), Ueda and Yuan (1993) and Barroso et al. (2010) predicted the effect of different shapes and material properties on welding residual stresses and distortions. Mollicone et al. (2006) described modeling strategies to simulate the thermo-elastic-plastic stages of the welding process and compared the FE model with experiments. Iranmanesh and Darvazi (2008) presented an FE-based calculation process to study the temperature field and residual stresses using 2- and 3-dimensional models in ANSYS 9.0.
Gao and Zhang (2011) addressed the moving heat source, the latent heat of phase change and the characteristic parameters of materials in the simulation model. Moraitis and Labeas (2009) developed a 3D FE model to predict keyhole formation and the thermo-mechanical response during laser beam welding of steel and aluminum pressure vessel or pipe butt-joints. Xu et al. (2008) presented an FE method based on the inherent strain theory to simulate welding distortion in multi-pass girth butt welded pipes of different wall thickness. Sulaiman et al. (2011) investigated the capability of linear thermal elastic numerical analysis to predict the welding distortion due to GMAW with the FEM software WELDPLANNER. Mrvar et al. (2011) simulated the welding of a pipe with the finite element program SYSWELD.

In this study, the temperature distribution due to fillet welding on both sides of the web is calculated at each load step, followed by a structural analysis using the temperature field data. It is assumed that the structural results do not affect the thermal analysis; therefore, only unidirectional coupling is carried out. Experiments are performed to validate the simulation results. The computed deformations are compared with experimental results measured at several points, and residual stresses are predicted.

SIMULATION METHOD

FE modeling: The model geometry used in this study is shown in Fig. 1a. The material of both flange and web is low carbon steel. For the FE analysis, half of the model is considered and symmetric boundary conditions are applied. The temperature gradient is considerably lower in the regions away from the weld location; therefore, a bigger element size is used there to reduce the number of degrees of freedom and the computation time (Fig. 1b).

Thermal analysis: Non-linear thermal analysis is conducted using SOLID70 eight-node brick elements. The welding arc is considered as a moving surface heat source, and the temperature history of the plate is evaluated using a three-dimensional transient thermal analysis.

Heat source model: In this study, at any time t, the heat of the welding arc is modeled by a surface heat source with a Gaussian distribution (Gao and Zhang, 2011). Thus, points lying on the surface of the work piece within the arc beam radius r_a receive a distributed heat flux q_t as follows:

q_t = \frac{3Q}{\pi r_a^2} \exp\left(-\frac{3 r_t^2}{r_a^2}\right)    (1)

where r_t is the radial distance measured from the instantaneous arc center on the surface of the work piece and Q is the heat input from the welding arc. Q = ηVI is the energy of the welding arc, where η is the arc efficiency and V and I are the welding voltage and current, respectively. The values of the welding parameters are given in Table 1.
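A minimal numeric sketch of the Gaussian surface heat source of Eq. (1) is given below; the welding parameters used here are assumed for illustration only (the values actually used in this study are those of Table 1).

    import numpy as np

    def arc_heat_flux(r_t, Q, r_a):
        # Gaussian surface heat flux q_t at radial distance r_t (mm) from the
        # instantaneous arc center, for heat input Q (W) and arc radius r_a (mm).
        return 3.0 * Q / (np.pi * r_a ** 2) * np.exp(-3.0 * r_t ** 2 / r_a ** 2)

    # Illustrative parameters (assumed, not taken from Table 1):
    eta, V, I = 0.8, 25.0, 200.0   # arc efficiency, voltage (V), current (A)
    Q = eta * V * I                # effective arc energy, Q = eta*V*I (W)
    r_a = 5.0                      # arc beam radius (mm)

    for r in (0.0, 2.5, 5.0):
        print(f"q_t({r} mm) = {arc_heat_flux(r, Q, r_a):.1f} W/mm^2")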
Heat transfer model: Equation (2) is the governing equation of 3D transient heat transfer in such methods, while Eq. (3) represents the heat loss due to convection and radiation:

\rho c \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + Q    (2)

q_s = h (T - T_0) + \sigma \epsilon (T^4 - T_0^4)    (3)

where Q is the internal heat energy released or consumed per unit volume (J/mm³), T is temperature, T_0 is the ambient temperature, q_s is the heat loss, t is time, k is the thermal conductivity (W/mm °C), ρ is the density, c is the specific heat (J/g °C), h is the convection coefficient, σ is the Stefan-Boltzmann constant and ε is the emissivity. Considering a quasi-steady state situation, Eq. (2) can be rewritten in the form of Eq. (4), where u (mm/s) is the velocity in the x-direction:

\nabla \cdot (k \nabla T) + Q = -\rho c u \frac{\partial T}{\partial x}    (4)

Material model: Figure 2 shows the temperature-dependent thermal properties of the material (Khurram et al., 2011). To account for heat transfer by convection in the weld pool, an exaggerated value of the thermal conductivity is used for temperatures above the melting point. The latent heat of fusion is incorporated in the material model by increasing the specific heat at the melting temperature. Young's modulus E, the yield stress and the thermal expansion coefficient are the primary mechanical properties in the thermo-mechanical analysis. The physical and mechanical material properties for low carbon steel are given in Table 2 (Iranmanesh and Darvazi, 2008).

Structural analysis: A transient structural analysis is conducted just after the thermal analysis. The same half model used for the thermal analysis is utilized for the structural analysis, except for the boundary conditions and the element type. Symmetrical boundary conditions are applied to simplify the process. SOLID185 is used for the 3-D modeling of solid structures. The results of the transient thermal analysis are applied as body loads in the mechanical analysis. The total strain comprises elastic, plastic and thermal strains, as in Eq. (5) (Iranmanesh and Darvazi, 2008):

\epsilon = \epsilon_e + \epsilon_p + \epsilon_{th}    (5)

The elastic strain is modeled using isotropic Hooke's law with temperature-dependent Young's modulus and Poisson's ratio. For the plastic strain, a von Mises yield criterion with temperature-dependent mechanical properties and a linear kinematic hardening model is used. The thermal strain is calculated using the coefficient of thermal expansion given in Table 2.
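As a small numeric illustration of the strain decomposition in Eq. (5), the sketch below evaluates the thermal part for an assumed, temperature-independent expansion coefficient; in the actual analysis the temperature-dependent properties of Table 2 are used.

    def thermal_strain(alpha, T, T_ref):
        # Thermal strain eps_th = alpha * (T - T_ref) for expansion coefficient alpha.
        return alpha * (T - T_ref)

    alpha = 1.2e-5                 # 1/°C, assumed constant for illustration
    eps_th = thermal_strain(alpha, T=600.0, T_ref=20.0)   # ~6.96e-3
    eps_e, eps_p = 1.0e-3, 2.0e-3  # illustrative elastic and plastic parts
    eps_total = eps_e + eps_p + eps_th                    # Eq. (5)
    print(f"total strain = {eps_total:.4e}")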
RESULTS

Welding deformations: Transverse deformations are normal to the weld bead, as shown in Fig. 3, and are a result of the thermal strains produced during welding. Expansion and contraction during welding in the direction parallel to the welding line cause the longitudinal shrinkage shown in Fig. 4, while Fig. 5 represents the out-of-plane deformation which is the basis for angular distortion. The measured and simulated transverse shrinkage at various points at the mid-section of the plate is shown in Fig. 6. The longitudinal deformations at the two extreme ends of the plate are depicted in Fig. 7.

Welding residual stresses: The residual stress distribution is not uniform across the thickness of the plate, with a maximum at the top surface that decreases gradually to a minimum at the bottom (Khurram et al., 2011). Therefore, all stresses are computed at the mid-thickness of the plate. Every point within the plate experiences variable stresses during and after welding. Figure 9 shows the stress history of a point at the mid-thickness of the flange. The results demonstrate that the stress is maximum in the transverse direction. Simulation results for the transverse and longitudinal residual stresses (σ_x, σ_z) at the flange mid-center are shown in Fig. 10 and 11, respectively. The residual stresses perpendicular to the flange, or out-of-plane stresses, are due to non-uniform expansion and contraction within the thickness, as shown in Fig. 12.

CONCLUSION

This research provides the basic theory and instruction to simulate welding residual stresses and deformations in a fillet weld joint. Due to symmetry, only half of the model is considered for analysis. A non-linear transient thermal analysis is performed using a Gaussian-distribution-based moving heat source. The temperature distribution is computed at each time step independently. Using the results of the thermal analysis and applying symmetric boundary conditions, a transient coupled 3D finite element structural analysis is performed. Experiments are also conducted to validate the simulation results. The conclusions of this study are summarized as follows:

• Simulation results are in good agreement with experimental values, which proves the authenticity and reliability of the simulation technique.
• Transverse stress values dominate the other stresses throughout the welding cycle.
• Transverse and longitudinal stresses are compressive in nature near the weld line. Their values gradually decrease as the distance from the weld line increases and eventually become tensile near the edges of the plate.
• Out-of-plane stresses near the weld are tensile, and they gradually decrease and become compressive. However, their values are much lower in contrast to the other stresses.
• The current method can be used to simulate complex geometries and various welding technologies.

Fig. 1: (a) Model geometry; (b) simplified FE model
Fig. 9: Stress history of a point at mid-thickness
Fig. 12: Out-of-plane stress at the middle of the plate
Table 2: Temperature-dependent mechanical properties of low carbon steel (k is the coefficient of heat conduction, ρ is density, σ_y is the yield stress, E_t is the tangent modulus, E is Young's modulus and ν is Poisson's ratio)
3,316.6
2013-03-01T00:00:00.000
[ "Materials Science" ]
Training for the Algorithmic Machine

In thinking about the ubiquity of algorithmic surveillance and the ways our presence in front of a camera has become engaged with the algorithmic logics of testing and replicating, this project summons Walter Benjamin's seminal piece The Work of Art in the Age of Its Technological Reproducibility with its three versions, which was published in the United States under the editorial direction of Theodor Adorno. More specifically, it highlights two of the many ways in which the first and second versions of Benjamin's influential essay on technology and culture resonate with questions of photography and art in the context of facial recognition technologies and algorithmic culture more broadly. First, Benjamin provides a critical lens for understanding the role of uniqueness and replication in a technocratic system. Second, he proposes an analytical framework for thinking about our response to visual surveillance through notions of training and performing a constructed identity—hence, being intentional about the ways we visually present ourselves. These two conceptual frameworks help to articulate our unease with a technology that trains itself using our everyday digital images in order to create unique identities that further aggregate into elaborate typologies and to think through a number of artistic responses that have challenged the ubiquity of algorithmic surveillance. Taking on Benjamin's conceptual apparatus and his call for understanding the politics of art, I focus on two projects that powerfully critique algorithmic surveillance. Leo Selvaggio's URME (you are me) Personal Surveillance Identity Prosthetic offers a critical lens through the adoption of algorithmically defined three-dimensional printed faces as performative prosthetics designed to be read and assessed by an algorithm.
Kate Crawford and Trevor Paglen's project Training Humans is the first major exhibition to display a collection of photographs used to train an algorithm as well as the classificatory labels applied to them both by artificial intelligence and by the freelance employees hired to sort through these images.

Introduction

Today one's face has come to replace one's fingerprint as the primary unit of identification. Currently, there are over 30 companies across different sectors such as banking, beauty brands, food and beverage brands, and hotels that are developing and testing facial recognition technologies ("Facial recognition," 2019). Among them is the retail giant Amazon, which in 2018 unveiled its affordable software for facial recognition, Rekognition. According to Amazon's website:

Rekognition is an image recognition service that detects objects, scenes, and faces; extracts text; recognizes celebrities; and identifies inappropriate content in images. It also allows you to search and compare faces. Rekognition Image is based on the same proven, highly scalable, deep learning technology developed by Amazon's computer vision scientists to analyze billions of images daily for Prime Photos. (Amazon, 2020a)

For under $10, anyone can now deploy this computer-vision, deep-learning, AI-driven tool to identify 'targets' and 'innocents' based on photographs or video footage (Amazon, 2020a). Rekognition has been deployed in a variety of contexts. For example, the Oregon Police Department uses the software to identify 'persons of interest' (Fazzini, 2018); Aella Credit, on the other hand, has deployed the software as a means of identifying potential borrowers in emerging markets, while Daniel Wellington relies on this technology to identify customers who come to return items bought in their high-end jewelry stores (Amazon, 2020b). The customer list posted on Amazon's website also boasts working with the dating company Soul "to detect objectionable content before it's posted while minimizing the need of human involvement" and with the children-oriented app Go Girls, the photo service Sen Corporation, and the summer camp platform CampSite. My point is that the use of facial recognition software, be it Amazon's or that developed by one of the other tech giants such as Google and Facebook, has become a ubiquitous part of our everyday life. It is used in digital and analog spaces to identify and track all of us, adults as well as our children.
The Rekognition software enables the recognition of both loyal customers and those deemed undesirable. Rendered through the Rekognition algorithm, the individual becomes either a celebrity or a stalker; in other words, either a legitimized public figure or a criminalized private citizen. Rekognition is indeed being sold to celebrities as a way to manage fans and stalkers, and this dichotomy is anchored in the public description of the algorithm itself. In a sense, then, algorithmic surveillance is constantly categorizing the humans that it detects into honorific and repressive categories. The repressive use of the algorithm is particularly problematic because of the perceived veracity and actual factual inaccuracy with which it operates. Recently, the Pasco County Sheriff's office deployed a biased algorithmic predictive system that "generates lists of people it considers likely to break the law, based on arrest histories, unspecified intelligence and arbitrary decisions by police analysts" and then sends deputies to interrogate the targeted individuals (McGregory & Bedy, 2020). In verifying the criminal status of individuals, facial recognition has also often proven to be inaccurate; this inadequacy has been demonstrated by multiple studies and incidents. Notably, Robert Julian-Borchak Williams was arrested based on a comparison of two photographs, one taken by a surveillance camera and one from his driver's license (Allyn, 2020). The match was justified by an argument that algorithms are objective and can identify criminals better than humans based on an assessment of similarity between visual images. This was one of the few cases in which the "police admitted that facial recognition technology, conducted by Michigan State Police in a crime lab…prompted the arrest" (Allyn, 2020). Algorithms are deployed in all aspects of our lives and have come to guide biopolitical decisions on our behalf. What is different here is that the biopolitics of everyday life is now entrusted to a technological system that is further curtailing the role of humans as the decision makers.

Human agency, in other words, is relegated to the production of 'raw' material that is to be gathered, accessed, categorized, and acted upon through algorithmic means on behalf of technocratic corporations.

As their training base, facial recognition algorithms often use 'scraped' consumer photographs (i.e., taken from the Internet without notifying users) such as selfies and digital images as well as state-issued photographs such as those used on driver licenses. Consumer photographs posted on Amazon Prime Photos were used, without the explicit permission of the users who took and uploaded them, for the training of the Rekognition algorithm. The digital photographs became the basis for algorithmic surveillance and, as such, they permeated not only the social media landscape but also the space of algorithmic culture more broadly. As the windows to our souls are reshaped into iris scans and the pictures of our minds become faceprints, it is important to note not only when and how these scans and prints are used to assess the risk that one poses to society but also when and how our eyes and faces became measurable windows/pictures in the first place.
Algorithmic culture functions as a technological culture rather than simply as a digital media culture. Digital culture has traditionally been articulated to the ways in which digital media has shaped culture, whereas in the context of algorithmic culture, digital and algorithmic technologies well beyond media are shaping society. As I have argued, "[I]n the context of an algorithmic culture, then, it is increasingly important to understand the ways in which algorithmic structures through recognition, calculation, automation, and prediction are shaping everyday life" (Hristova, 2021, p. 3). The term technological culture, as coined by Jennifer Slack and Gregory Wise (2015, p. 9), broadly describes the ways:

Culture has always been technological, and technologies have always been cultural. Technologies are integral to culture and not separate from it….Human culture has always existed in relation to what we understand to be technologies: from voice, stone, and fire, to clock, computer, and nanotechnology.

As such, the contemporary moment can be seen as the orientation of a culture towards a new technology, namely that of algorithmic technologies, and should be discussed in the context of technological culture alongside notions of mediated culture. The term algorithmic culture "draws attention to the reality that culture is increasingly explained by, responsive to, and shaped in and by the pervasive work of algorithms" (Slack & Hristova, 2020, p. 18). Algorithmic culture thus accounts for the ways in which this new form of digital technology is changing all aspects of everyday life, not just our engagement with media. The study of algorithmic culture, then, as articulated through its technological and cultural aspects, necessitates critical perspectives that grapple with the nexus of new technological developments, politics, economics, and practices of resistance. Arguably, the current moment is not the first time that we have encountered the problem of pervasive surveillance coupled with the proliferation of right-wing regimes worldwide. Indeed, critical theory as articulated by Theodor Adorno and Walter Benjamin emerged under a similar historical context and is indeed quite relevant for addressing our contemporary predicaments. Benjamin's work offers important concepts that "differ from others in [that] they are completely useless for the purposes of fascism. On the other hand, they are useful for the formulation of revolutionary demands in the politics of art" (Benjamin & Jennings, 2010). Moreover, Benjamin's ruminations on technology offer myriad concepts that help us untangle the technological transformations in the context of the increased presence of right-wing ideology and right-wing authoritarian governments. For Benjamin, understanding the ways in which visual apparatuses construe us and actively training to perform a desired identity in the context of technological surveillance holds the possibility of technological disruption. In other words, being knowledgeable of how technology frames us allows for a more intentional presentation of the self, which in turn holds the potential to render technologies themselves impotent or useless to autocratic regimes of power.
In the context of algorithmic culture, surveillance has become an increasingly important topic (Benjamin, 2019; Gates, 2011; Monahan, 2006; Noble, 2018; Pasquale, 2015). In exploring the ways in which our presence in front of a camera has become engaged with the algorithmic logics of testing and replicating, I summon Walter Benjamin's seminal piece The Work of Art in the Age of Its Technological Reproducibility with its three versions, which was published in the United States under the editorial direction of Theodor Adorno (Benjamin, 2002, 2003; Benjamin & Jennings, 2010). The first version was written in 1935, while the second version of the essay, from 1936, is a "revision and expansion…of the first version… [and] represents the form in which Benjamin originally wished to see the work published" (Benjamin, 2002, p. 122). The third version, the most popular in the United States, which Benjamin completed in 1939, was modified based on the editorial input of Adorno, who facilitated the translation, publication, and popularization of this work (Benjamin, 2003, p. 270). The third version backtracks some of the celebratory stances awarded to the notion of replication and reproducibility (that are found in the first two versions) and bears clear traces of Adorno's fascination with the authentic as well as his disdain for mass art. It also moves away from an understanding of visual and visualization technologies and towards a narrower articulation of visual media. In this essay, I highlight two of the many ways in which the first and second versions of Benjamin's influential essay on technology and culture resonate with questions of photography and art in the context of facial recognition technologies and algorithmic culture more broadly. First, Benjamin provides a critical lens for understanding the role of uniqueness and replication in a technocratic system. Second, he proposes an analytical framework for thinking about our response to visual surveillance through notions of training and of performing a constructed identity, hence being intentional about the ways in which we visually present ourselves. These two conceptual frameworks help to articulate our unease with a technology that trains itself using our everyday digital images in order to create unique database identities that further aggregate into elaborate typologies and to think through a number of artistic responses that have challenged the ubiquity of algorithmic surveillance. Adapting Benjamin's conceptual apparatus and his call for understanding the politics of art, I focus on two projects that powerfully critique algorithmic surveillance. Leo Selvaggio's URME (you are me) Personal Surveillance Identity Prosthetic offers a critical lens through the adoption of algorithmically defined three-dimensional printed faces as performative prosthetics designed to be read and assessed by an algorithm. Kate Crawford and Trevor Paglen's project Training Humans is the first major exhibition to display a collection of photographs used to train an algorithm as well as the classificatory labels applied to them both by AI and by the freelance employees hired to sort through these images.
Replication for Whom: Humanistic and Technological Assemblages

Benjamin articulated his well-known concept of reproducibility as operating on two different levels: one in which "objects made by humans could always be copied by humans" and another in which the reproduction was articulated through technology and thus became "technological reproduction" (Benjamin & Jennings, 2010, p. 12). This second mode of reproducibility was articulated through the emergence of the woodcut, became amplified through the technology of lithography, and culminated in the introduction of photography, which was seen as a technology further displacing the human from the process of reproduction by delegating the process of "pictorial reproduction…to the eye alone" (Benjamin & Jennings, 2010, p. 14). In terms of algorithmic culture, the processes of reproduction and detachment are further amplified and dehumanized. Indeed, this dehumanization emerges as a fundamental process that accompanies the move away from the prehistoric connection between technology and ritual towards a machine age driven by technological reproducibility. Writing prophetically in the 1930s, Benjamin foresaw the continued displacement of the human and humanity towards technological autonomy. In thinking about the distinction between ritual-based and machine-based technologies, he wrote:

Whereas the former made the maximum possible use of human beings, the latter reduces their use to the minimum. The achievement of the first technology [seen in a prehistoric context for example] might be said to culminate in human sacrifice; those of the second [rooted in the Machine age], in the remote-controlled aircraft which needs no human crew. The results of the first technology are valid once and for all….The results of the second are wholly provisional (it operates by means of experiments and endlessly varied test procedures). (Benjamin, 2002, p. 107)

Under the auspice of photography, the process of reproduction became one that is exclusively visual and continuously technological. The eye here is further displaced, as photography was seen as "bringing out aspects of the original that are accessible only to the lens" (Benjamin & Jennings, 2010, p. 14). Like the remote-controlled aircraft without a human pilot, replication through photographic means is now directed not at the human eye itself; rather, it is distilled into a set of features that are accessible only to the lens and the algorithm: faces become faceprints, eyes become iris scans. Trevor Paglen has theorized the emergence of images in relation to machine learning and AI as "invisible images" embedded in "machine-to-machine seeing" in which "digital images are machine-readable and do not require a human in the analytic loop" (Paglen, 2019, p. 24). For Benjamin, visual film-based technology (photography and film) revealed optical unconscious properties unattainable to "natural optics", such as "enlargement or slow motion", that were ultimately made perceptible to human vision (Benjamin, 2002, p.
102). In the contemporary context, however, visual algorithmic technology reveals properties that are even less attainable by human perception, as they articulate a set of data points meaningful only to algorithms. For example, an iris scan consists of at least 240 data points and thus distills the world in a manner that is understood by machine vision and machine knowing (learning). While the photograph captured faces, an algorithm-driven camera now sees face models that are meaningless to human vision.

This translation of face to face model in the context of facial recognition algorithms is evident in reading Amazon Rekognition's developer guide, where a face model becomes defined as a bounding box and further given coordinates for the expected elements: eyes, nose, mouth:

    "FaceModelVersion": "3.0",
    "SearchedFaceBoundingBox": {
        "Height": 0.06333333253860474,
        "Left": 0.17185185849666595,
        "Top": 0.7366666793823242,
        "Width": 0.11061728745698929
    },
    "SearchedFaceConfidence": 99.99999237060547

Amazon's surveillance software articulates personhood first as the presence of a face and second through the existence of posture. Furthermore, a face is conceived as consisting of a left eye, a right eye, a nose, the right side of one's mouth, and the left side of one's mouth. The face thus becomes the locus of personhood in the context of algorithmic surveillance (Amazon, n.d.). Through this process of technologically reproducing people through the visual capture of either subjects or photographs of subjects, the image is distilled into image-data. This distillation obfuscates the relevance of the real, the original, beyond its datafied existence. Within the context of facial recognition technology, this process informs the technological articulation of both the input and the output of the technological reproduction process.

Portraits, selfies, and photographs of people, in general, are particularly susceptible to this transformation, as bodies in front of a camera are captured by its lens and further translated into data for an algorithm. The endpoint of the camera is no longer a photograph. It is data. The lens then produces not an image but a dataset. Facial recognition algorithms use consumer photographs such as selfies and digital images as well as state-issued photographs such as those used on driver licenses as their training base. Once within the sphere of the algorithm, the human body is relevant only as data and the image itself becomes a useless intermediary. These data-points are articulated in big data structures from which typologies emerge. Thus, the individual who stood in front of the camera for a portrait or selfie, or simply walked in front of a consumer or commercial camera, is simply understood in algorithmic terms, as an example of a larger 'measurable type' (Cheney-Lippold, 2017).
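For concreteness, the kind of response excerpted above can be obtained through Amazon's public boto3 SDK. The following is a minimal sketch, assuming configured AWS credentials, a local image file, and a hypothetical, previously indexed face collection named "example-collection".

    import boto3

    # Assumes AWS credentials and a default region are already configured, and
    # that a face collection has been created and populated with index_faces().
    client = boto3.client("rekognition")

    with open("portrait.jpg", "rb") as f:
        image_bytes = f.read()

    response = client.search_faces_by_image(
        CollectionId="example-collection",   # hypothetical collection name
        Image={"Bytes": image_bytes},
    )

    # The person in the photograph is returned as coordinates and a confidence
    # score, as in the developer-guide excerpt above.
    print(response["SearchedFaceBoundingBox"])
    print(response["SearchedFaceConfidence"])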
In an algorithmic culture, the authentic individual is replaced with an entity enthralled in a projected typology in which common habitual traits are replicated and reproduced. In other words, the uniqueness of individuals, or their aura, is the main fuel of the algorithmic machine. The machine relies on difference and differentiation in order to trace unique database ids through time and space. Benjamin's critique of the insistence on holding on to the notion of authenticity, of customization, of uniqueness is quite powerful. In an algorithmic culture, if the original is already a replica without an aura, then the process of technological reproduction is disempowered. For the algorithm to work, individual behavior must demonstrate patterns or 'trends', but it must also be distinct enough to articulate a separate data point within big data. In other words, individuation is useful to an algorithm as it provides a point in a set of big data. Without multiple individual points, there is no big data, and thus the algorithm has nothing to work with. The individuation we are currently afforded is a superficial one, one that is based on quantitative difference: we can buy a blue case for our similar iPhone, or choose to purchase a pink Roomba to clean our floors. We, however, are seen as static unique entries that wear pink or blue (variation) but remain constant and unique at the same time. Benjamin proposes an alternative framework in which individuals, not just art, might consider operating as consciously reproducible entities without an aura. In a post-aura technological landscape, accepting a level of sameness on a mass scale can defeat the big data impetus of algorithms and thus render us useless to this technology. The level of sameness here addresses the attempt of algorithms to reconstruct us as digital selves, as unique digital identities within group clusters. In a culture of corporate standardization and surveillance capitalism, algorithms attempt to reinstate algorithmic aura by defining the terms that make us unique in a way that is inaccessible to us (Zuboff, 2019). What is authentic and what is replicable about our own selves and our behavior is no longer a choice that we as humans can make but is rather relegated to an algorithmic calculation. Our algorithmic aura is neither comprehensible nor accessible to ourselves.
This theme of the non-original is visualized in Leo Selvaggio's project URME (you are me) Personal Surveillance Identity Prosthetic, in which he offered his own face as a 3D printed mask in order to flood the streets with Leos as far as facial recognition technologies are concerned. Selvaggio's project mobilizes reproducibility, reproduction, and replication as a political tactic against the reappearance of the algorithmic aura and its dominating uniqueness. With the prosthetic, while the human eye is able to detect the mask, the replication for the algorithmic eye is flawless and the algorithm 'sees' a series of Leos. This distinction is important. Masks traditionally have been seen as technologies of resistance. As Zach Blas (2013) wrote, "The mask is the most popular implementation of defacement, a celebration of refusal and transformation." Masks are valuable defacement mechanisms in a human and algorithmic context. Selvaggio's project both builds upon and moves away from masks as a mechanism for defacement and towards an exploration of masks as standardized humanoid surfaces. His work is a prime example of an artistic anti-surveillance camouflage practice that asks individuals to explore the practice of algorithmic reproducibility as an act of resistance. This project "involves the design of masks that are eerie reproductions of his own face, potentially spawning swarms of expressionless doppelgangers moving through the street" (Monahan, 2015, p. 166). These masks were tested with Facebook's recognition systems and proven to trigger the detection of Selvaggio's face. Selvaggio's narration of the project is quite poignant:

I imagine a future where everyone wears my face, literally. Take a moment to consider this future. As you walk down the street to the subway, you pass by me over and over and over again. The sliding doors of the train open to a swarm of Leos. (Selvaggio, 2015, p. 165)

Thus, forgoing the process of individuation renders the face, when understood as a face model, useless.

Training for the Camera: Constructing Identities in the Age of Machine Vision

Algorithms are trained on our images. This primary framework for training is precisely what Amazon's Rekognition software deployed without the knowledge of the Internet users whose faces were used for the establishment of surveillance categorizations. If we are to understand ourselves as constantly being subjected to processes of surveillance and further replication through the lens of algorithmic calculations, we must consider the intentionality underlying the adaptation of our everyday behavior. We should consider training ourselves to understand how the algorithms work in order to resist this new apparatus of surveillance. In an age where technology is further displacing the idea of humanity away from authenticity and towards replicability with the illusion of an algorithmic aura, Benjamin sees film as a training ground for resistance through the medium's ability to help us understand the mechanism that guides reproduction and learn how to be present for the technological apparatus. With regard to the potentiality of film, Benjamin wrote that "the function of film is to train human beings in the apperceptions and reactions needed to deal with the vast apparatus whose role in their lives is expanding almost daily" (Benjamin, 2002, p.
108). His distinction between the stage actor and the film actor is helpful here for understanding the new way in which our replicas percolate in the algorithmic technological landscape. For the film actor, the "original form, which is the basis of the reproduction, is not carried out in front of a randomly composed audience but before a group of specialists" (Benjamin & Jennings, 2010, p. 22). This process enables training with experts of the technology, and thus responds to the primary modality of algorithms training on humans without the permission or even knowledge of the latter. As photography, film, and social media bleed into algorithmic facial recognition systems, a similar call is being issued by prominent artists today. For example, Paglen powerfully noted that:

The point here is that if we want to understand the invisible world of machine-to-machine visual culture, we need to unlearn how to see like humans. We need to learn how to see a parallel universe composed of activations, keypoints, eigenfaces, feature transforms, classifiers and training sets. (Paglen, 2019, p. 27)

Understanding machine vision is crucial in order to be able to train and perform identities suited to this new technological landscape. Much like the actor, each one of us is encouraged to understand and intentionally train in front of the algorithmic apparatus. In the context of film, or rather filming, the actor practices the act until it is made perfect for the lens: a "single cry for help, for example, can be recorded in several different takes" (Benjamin & Jennings, 2010, p. 22). Thus, for the film actor, being in front of the camera is a "performance produced in a mechanized test" of a premediated fictional role and an intentionally constructed identity. Intentionality here is key because the film actor is allowed to train with the help of experts, whereas workers are subjected to the same exact tests and judgment but participate in them 'unaware.' Furthermore, Benjamin warned that "the film actor performs not in front of an audience but in front of an apparatus [in which] the film director occupies directly the same position as the examiner in an aptitude test" (Benjamin & Jennings, 2010, p. 22). It is this awareness and intentionality that bring humanity back into the process, as "for the majority of city dwellers, throughout the workday in offices and factories, have to relinquish their humanity in the face of the apparatus" (Benjamin & Jennings, 2010, p. 23). In a film test, it is only the performance of the character that is captured, not the authenticity of the actor.

Selvaggio's project engages precisely with this intentional performative model. He suggests that "when we are watched we are fundamentally changed. We perform rather than be" (Katz, 2014). This performance, thus, is not an act of hiding; it is an act of modifying one's performance for the camera, much like an actor performing a character would. Selvaggio revealed the strategy behind his project as one that "rather than hide a face, substitute[s] it" (Selvaggio, 2015, p. 174). This substitution is articulated in the context of facial recognition technologies deployed precisely in relation to crime.
For Benjamin, the film apparatus provided a training ground for the ways in which one's mirror image became replicated and distributed across networks. His observations can be translated to the context of digital photography and algorithmic surveillance, where the selfie has become the mode par excellence of self-broadcast to the world via social media networks, and algorithmic surveillance is seen as the most pervasive modality of non-consensual capture and datafication of selfies, digital portraits, and street photography. The notion of being aware of the ways in which the camera and the algorithm translate our physical selves into reproducible data-selves is crucial here. Being unaware of the surveillance regimes in which we are embedded removes individual agency. Thus, it is critical to understand how algorithmic surveillance works and how one can test in front of it and perfect a performance of an identity that is intentionally crafted to respond to the technological apparatus. Unfortunately, we are asked to consider both conscious and unconscious behavior at a micro-level. Consider the millisecond you spend while scrolling on Facebook while looking at sponsored content, or the ways in which you raise your eyebrows while reacting to digital content. One implies an interest in a product and sells your potential consumer power. The other outright renders the consumer into a product to be evaluated: if your eyes are too close to your eyebrows, your Body Mass Index becomes elevated and your health score decreases. The more we know about the metrics that are judging us, the more we can intentionally counter them.

The ubiquity of surveillance, coupled with its invisibility or rather its seamless blending with reality, deeply resonates with Benjamin's observation that "the apparatus-free aspect of reality has…become artifice, and the vision of unmediated reality the Blue Flower in the land of technology" (Benjamin & Jennings, 2010, p. 28). A vision of unmediated reality is thus seen as an inaccessible, romanticized ideal, as the Blue Flower represents "the unattainable object of the romantic quest, the incarnation of desire" (Hansen, 1987, p. 204). This does not mean that one must surrender to the idea of technological mediation and should abolish efforts to challenge the technological and political systems that drive mediation. Rather, Benjamin suggested an open acknowledgment of our predicament, an awareness of the way that it 'sees' us, and an effort to mindfully attempt to craft our presence. Mitra Azar's (2018) work on algorithmic facial images is of particular relevance here. Azar has made a compelling argument that "when a selfie becomes mediated by new tracking technologies for security system and entrainment based on face-recognition algorithms, the selfie becomes an 'Algorithmic Facial Image'" (Azar, 2018, p. 27). In the appropriation of the photograph from selfie to facial image, Azar noticed an important change:

If in the early 2000s the selfie seemed to be characterized by a certain degree of (calculated) spontaneity, an analogically constructed liveness and a form of human agency, this new form of selfie is rather defined by trackability, its algorithmically constructed liveness, and its non-human agency. (Azar, 2018, pp. 27-28)

In this transition, the camera itself becomes, in the words of Deleuze and Guattari, a 'faciality machine' (Deleuze & Guattari, 1987, p.
199). The algorithmic machine that I am referring to in this project is indeed a faciality machine. What is notable here is the emergence of the selfie as a particular type of performance for the camera facing us rather than the world, and the potentiality for the augmentation of this act when the visualization technology becomes understood as a faciality machine. In other words, we have already trained to perform a 'selfie' for the camera and are now in the moment of retraining once more, this time in the context of algorithmic visuality.

Kate Crawford and Trevor Paglen's project Training Humans highlights precisely the ways in which selfies, portraits, and state-issued identification have been harnessed in the training of facial recognition algorithms without the knowledge of the people in these photographs. Training Humans was "the first major photography exhibition devoted to training images: the collections of photos used by scientists to train AI systems how to 'see' and categorize the world" (Crawford & Paglen, 2020). As the authors note, they are reintroducing into the gallery photographs that "aren't really meant for humans [as] they're in collections designed for machines" (Crawford & Paglen, 2019). Here Crawford and Paglen exposed the inner workings of algorithmic classification and, in a sense, acted as the experts who allowed audiences to understand and train for the new algorithmic machine. The exhibit provided historical context about the ways in which anthropometrics and biometrics have historically been deployed in the articulation of human typologies. They further displayed the images used to create algorithmic classifications, uncovering the duality of the photograph as an honorific and repressive entity. The most powerful part of this project is the real-time visualization of the algorithmic decision-making process as it evaluates the gender, age, and emotion of the people it 'sees' (Crawford & Paglen, 2019). According to Crawford, they:

Wanted to engage directly the images that train AI systems, and to take those images seriously as a part of a rapidly evolving culture. They represent the new vernacular photography that drives machine vision. To see how this works, [they] analyzed hundreds of training sets to understand how these 'engines of seeing' operate. (Crawford & Paglen, 2020)

Furthermore, Crawford characterized this training process as two-pronged: as everyday photographs training algorithms and as algorithms training humans how to behave. Training Humans, alongside Crawford and Paglen's Excavating AI project, raises an important question about the lack of awareness by the people in the photographs of the ways in which their faces are harnessed for algorithmic testing. Unlike Benjamin's actor and much like his worker, those posing for a selfie or a digital image were often unaware of the algorithmic classificatory systems they helped shape and ultimately became trapped by: "Harvesting images en masse from image search engines like Google, ImageNet's creators appropriated people's selfies and vacation photos without their knowledge, and then labeled and repackaged them as the underlying data for much of an entire field" (Crawford & Paglen, n.d.). What Crawford and Paglen's projects reveal is not only how the actor or worker is trained but also how the machine apparatus, the technology, is 'learning' as well. In other words, the training goes both ways. The training of AI requires "vast amounts of data contained within datasets made up of many discrete images" (Crawford & Paglen,
n.d.). The training of the human in front of the lens requires knowledge and intentionality.

Allan Sekula has eloquently argued that photographs have always participated in honorific and repressive systems of representation, as the portrait and the mugshot have been intimately connected since the invention of photography (Sekula, 1986, p. 10). This connection, which as Sekula argues introduced "the panoptic principle into daily life" (Sekula, 1986, p. 10), has been further amplified in the context of AI, where the number of images analyzed is well into the several hundred million, as seen in 2009's ImageNet project (Deng et al., 2009). While the scope of "scraped" images is impressive, so is the extensive classificatory schema behind it. This classificatory schema, developed through 'crowdsourcing' on Amazon's labor marketplace Mechanical Turk, is then reflected back to the unsuspecting users of the digital world. As Crawford and Paglen note, here not only are race, gender, and economic status encoded to algorithmic data and back as cultural identity, but so are value judgments about people:

As we go further into the depths of ImageNet's Person categories, the classifications of humans within it take a sharp and dark turn. There are categories for Bad Person, Call Girl, Drug Addict, Closet Queen, Convict, Crazy, Failure….There are many racist slurs and misogynistic terms. (Crawford & Paglen, n.d.)

The classification schema was developed to aid the recognition and sorting processes driven by algorithms, and it benefits the owners of the technological apparatuses and not the humans who were 'processed' as training data.

In Training Humans, they further provide an extensive genealogy specific to the ways in which algorithmic facial recognition participates in narratives of human classification. As such, this project is a direct extension of what Sekula, as well as Crawford and Paglen, trace to be a genealogy of eugenics rooted in the 19th-century phrenology and physiognomy work of Francis Galton, Alphonse Bertillon, and Cesare Lombroso (Crawford & Paglen, 2019, p. 21). The distillation of images into data for the purposes of algorithmic capitalist surveillance is yet the latest instance of the enmeshment of photography with eugenics. Crawford and Paglen's project exemplifies par excellence the claim that Lea Laura Michelsen has aptly made: "Digital biometrics can be perceived as a physiognomic renaissance" (Michelsen, 2018, p. 37).
Art projects that enable the public to see how they are being judged by algorithms have been developed not only for art galleries but also through digital platforms with greater access. One example is Tijmen Schep's interactive documentary project How Normal Am I (Schep, 2020). In it, the audience is asked to turn on their camera and is guided through a series of algorithmic decisions while Schep narrates the inner workings of facial recognition. He reveals the ways in which beauty is judged on platforms such as Tinder, where people with similar scores are considered to be a match, and unpacks how health insurance industries use facial recognition to predict BMI indexes and thus assess health risk. In this project, the audience is also given the opportunity to train for the algorithm: "By giving access to your webcam you can also experience how these AI systems rate your own face" (Schep, 2020). The experience is coupled with useful tips; for example, raising one's eyebrow leads the algorithms to assume a higher BMI index and thus a risk of obesity. Both Training Humans and How Normal Am I allow subjects in front of the camera to test their behavior and see the different outcomes live. They are given tips on how to perform and then are allowed to see whether their behavior is gauged in line with their expectations. The training in front of the camera is responsive and guided by experts who understand the inner workings of the algorithm. The training here begins in the context of art and raises awareness about the ways in which assessments are made about our conscious and unconscious behavior.

Understanding and regulating data-gathering processes as well as algorithmic development practices are crucial components in the development of a more equitable algorithmic culture: a culture that asks how apparatuses of assessment are created and indeed can then move forward to challenge them and perhaps call for their abolition. While structural resistance is absolutely vital, so is micro-level training on how we are being judged by facial recognition platforms. Politicized algorithmic art allows us to bring back intentionality and awareness in front of the camera and to practice, in a safe space, the ways in which to carry and present ourselves in front of this new capitalist surveillance assemblage. In other words, by engaging with projects that are critical of facial recognition, we can start to understand and adapt to the inner workings of this new modality of technological reproduction and also challenge the deployment of these technologies altogether. If we are going to live in an increasingly algorithmic world, we must adapt to it mindfully in the meantime and resist the entrenchment of technocratic political orders in the long term.

Disruptive Practices: Unleashing the Revolutionary Potentials of Art and Performativity

The rise of AI and facial recognition surveillance has yet again elicited questions about the ways in which algorithms can be designed to be more accurate, less biased, subjected to legal systems, decoupled from authoritarian regimes, and, last but not least, individually resisted. For Benjamin, technological reproduction offered an escape from "enslavement," and this liberation was to come "only when humanity's whole constitution has adapted itself to the new productive forces which the second technology has set free" (Benjamin, 2002, p. 108).
This freedom to play once "liberated from drudgery" was seen as possible only "when the collective makes this technology its own" (Benjamin, 2002, p. 124). The discourse on technological liberation from chores for the sake of convenience and play still resonates today in discussions of how computing and autonomous technology now allow more playtime. This celebratory stance towards collective ownership of technology, which takes at its heart the rejection of authenticity and ritual and the embrace of the popular and replicable, is challenged by Theodor Adorno (n.d.) in a letter to Benjamin. Adorno rightfully insisted on considering the larger economic structure that makes mass art possible. In the contemporary context, visual technologies ranging from digital photography to algorithmic surveillance are not democratized but rather lie in the hands of a few corporations. The reproducibility that they offer under the guise of play is articulated in terms that are useful for the machines themselves and the capitalist frameworks of alienation in which they operate. On the other hand, Benjamin sees the politics of art, or further the politicizing of art, as a powerful antidote to authoritarian and exploitative regimes (Benjamin & Jennings, 2010, pp. 12, 36).

The idea that art can be a powerful agent of change has been challenged by critics of state and capitalist surveillance. As Torin Monahan has aptly noted, in the age of increased surveillance there has been a rise in anti-surveillance camouflage in the form of artistic projects and products centering on the "masking of identity to undermine technological efforts to fix someone as unique entity apart from the crowd" (Monahan, 2015, p. 159). He has questioned the effectiveness of such projects:

Anti-surveillance camouflage of this sort flaunts the system, ostensibly allowing wearers to hide in plain sight-neither acquiescing to surveillance mandates nor becoming reclusive under their withering gaze. This is an aestheticization of resistance, a performance that generates media attention and scholarly interest without necessarily challenging the violent and discriminatory logics of surveillance societies. (Monahan, 2015, p. 160)

Monahan proceeded to situate this right to hide in relation to the surveillance practice of the state, which has embraced the right to look and denied the right to look back. This position on the uselessness of art has been countered by a strong justification of the role of surveillance art in the larger cultural landscape. Monahan insists on the importance of challenging the institutional, economic, and legal systems in which algorithmic surveillance operates, and rightly so. However, art offers yet another track of resistance, one that does not assume the erasure of other oppositional positions but rather amplifies the struggle against these normative technological apparatuses. As Elise Morrison has written:

Surveillance art, as a genre of political activism and performance, combats the common tendency within surveillance society to succumb to a kind of amnesia of convenience, an ambivalent state in which the majority of user-consumers are willing to forget or look past the risks of using surveillance technologies in prescribed ways because of perceived economic, political and social gains. (Morrison, 2015, p. 126)
The dialectic here is one that questions the role of the arts in conversations about technology and culture. I side here with Morrison's sentiment that art allows for a critical framework through which naturalized relations can be brought back to a reflective practice. I think that the greatest contribution of the artistic projects described above is their contemplative nature or, to come back to Benjamin's work, their intentionality in situating ourselves in the position of the aware film actor rather than the unaware mechanized worker.

Whereas artistic practice is already embedded in critical reflective practice, everyday posturing in front of the digital mobile camera is hardly so. With regard to art, Torin Monahan asked a poignant question: "By claiming what can be framed as a 'right to hide,' instead of a 'right to look,' what, exactly, does anti-surveillance camouflage perform" (Monahan, 2015, p. 166). In thinking about mass strategies for addressing algorithmic surveillance, I want to address both the potential role of training to look back at the camera and that of training to hide from the camera. Reflective posturing could be seen as an example of resisting surveillance capitalism through the paradigm of the right to hide. Activists have also been deploying facial recognition as an apparatus reinstating the 'right to look.' Among them is Christopher Howell, who has turned the camera back on Portland police officers "since they are not identifying themselves to the public and are committing crimes" (Howell, Strong, Ryan-Mosley, Cillekens, & Hao, 2020). Resistance to the technological panopticon created by facial recognition algorithms must be multi-fold and multi-directional. Whether through individual reflective practice based on awareness of the assessment mechanism behind the camera, or through collective reflective action in turning the camera onto the surveyors themselves, intentional visuality might just be a powerful tool for resisting the rise of both the surveillance state and surveillance capitalism.
Conclusion

As we move from a digital media world in which digital selves are articulated through algorithms for the purposes of advertising to an algorithmic culture where algorithms monitor and evaluate our conscious and unconscious behavior through thousands of cameras embedded in both public and private spaces, it is crucial to continue to explore modes of critique and resistance. Walter Benjamin's first and second versions of his famous essay The Work of Art in the Age of Its Technological Reproducibility offer an important apparatus for challenging algorithmic surveillance. Benjamin's assessment of the role of reproduction on one hand and of training in front of the camera on the other offers important insights into our contemporary condition. His writing on art and film in the context of fascism is indeed deeply relevant to an analysis of surveillance art in the context of a global proliferation of right-wing authoritarian regimes. Benjamin offers a powerful critique of the ways the camera reproduces not just art but also human behavior and, one might say, specters of the humans themselves, and in this process excises the original-be it again the artwork or the human caught in the reproduction loop. One of the mechanisms he offers for challenging this technocratic framework is an emphasis on reflection and intention. This reflection process entails an intentional reversal of the basic assumptions that structure algorithmic technology and thus the introduction of deflective methods of resistance. Some of these deflective methods have been harnessed by contemporary artists as a critique of algorithmic culture. Just as algorithms look for individual data points, artists challenge our algorithmic aura. Just as the algorithm trains on humans, artists help humans in training for the algorithm. Until we can dismantle the contemporary algorithmic panopticon, a game of hide and deflect might be in order.
10,234
2021-04-06T00:00:00.000
[ "Art", "Computer Science", "Philosophy" ]
Human electrocortical, electromyographical, ocular, and kinematic data during perturbed walking and standing

Active balance control is critical for performing many of our everyday activities. Our nervous systems rely on multiple sensory inputs to inform cortical processing, leading to coordinated muscle actions that maintain balance. However, such cortical processing can be challenging to record during mobile balance tasks due to limitations in noninvasive neuroimaging and motion artifact contamination. Here, we present a synchronized, multi-modal dataset from 30 healthy, young human participants during standing and walking while undergoing brief sensorimotor perturbations. Our dataset includes 20 total hours of high-density electroencephalography (EEG) recorded from 128 scalp electrodes, along with surface electromyography (EMG) from 10 neck and leg electrodes, electrooculography (EOG) recorded from 3 electrodes, and 3D body position from 2 sensors. In addition, we include ∼18,000 total balance perturbation events across participants. To facilitate data reuse, we share this dataset in the Brain Imaging Data Structure (BIDS) data standard and publicly release code that replicates our previous event-related findings.

Value of the Data

• Our dataset contains multiple human biosignals, including high-density EEG, that can be used to further our understanding of how the human body adapts to unexpected balance perturbations during a mobile beam-walking task.
• This multi-modal dataset can benefit researchers interested in the neural correlates of balance control, the physiological effects of virtual reality headsets, sensory integration, corticomuscular connectivity, mobile neuroimaging, and noninvasive neural decoding.
• Our data are formatted in the Brain Imaging Data Structure (BIDS) data standard to facilitate data reuse, and we provide freely accessible code to replicate the findings from our related research article.
• We include data from 30 participants, each with 149 data channels and ∼600 perturbation events, which can be used to benchmark signal processing techniques for mobile tasks and assess inter-participant variability across multiple recording modalities and tasks.
• We have precisely synchronized all recording modalities in this dataset, enabling researchers to explore how eye movements, head position, and neck muscle activity contribute to EEG motion artifact during mobile tasks.
• The multi-modal aspect of our dataset also provides an opportunity to explore sensor fusion and multi-modal decoding strategies for robust, noninvasive brain-computer interfaces.

Data Description

Our dataset contains high-density electroencephalography (EEG), electrooculography (EOG), neck/leg electromyography (EMG), and motion capture recordings from 30 human participants during sensorimotor balance perturbations. Participants either stood or walked on a treadmill-mounted balance beam while experiencing brief visual field rotation or side-to-side pull perturbations. For each participant, we recorded four 10 min sessions: (1) standing during pull perturbations (pull stand), (2) standing during rotation perturbations (rotate stand), (3) walking at 0.22 m/s during pull perturbations (pull walk), and (4) walking at 0.22 m/s during rotation perturbations (rotate walk) [3]. All data files are separated by session for each participant and formatted in the Brain Imaging Data Structure (BIDS) data standard to facilitate data reuse [4,5].

All biosignal data recordings are saved as EEGLAB .set and .fdt files. The .fdt files contain the time-series data, while the .set files include relevant metadata. Both data files can be loaded into Matlab using EEGLAB [6]. Once loaded into EEGLAB, the data field will contain the time-series data. The first 128 rows of this field contain EEG data, ordered according to BioSemi's 128-channel layout (https://www.biosemi.com/headcap.htm). The next rows are neck EMG (2 rows), EOG (3 rows), leg EMG (8 rows), three-dimensional position at the head and sacrum (6 rows), and finally pull force recordings (2 rows). The specific label, data type, and units for each row can be found in the _channels.tsv file in the same folder as the .set and .fdt files. In addition, the _electrodes.tsv file contains precisely measured three-dimensional positions for all EEG, neck EMG, and EOG electrodes in meters.

During each 10 min recording session, participants were exposed to 150 perturbation events. Only one type of perturbation (pull or rotation) occurred in each session. Event information can be found in the event field after opening the .set/.fdt files or in the _events.tsv file in the same folder. For each event, we provide the type of event, onset time, and duration. The type of event includes both the type of perturbation performed (pull or rotation) and the direction of the perturbation (left or right for pulls, clockwise or counterclockwise for rotations).

We also include relevant noisy electrode and source localization information in the etc field after opening the .set/.fdt files. The etc.good_chans field contains the indices of all electrodes identified as not noisy, based on the criteria listed in the next section. We additionally provide the weight and sphering matrices from running adaptive mixture independent component analysis [7], which can be found in etc.icaweights and etc.icasphere, respectively. The etc.good_comps field includes the indices of the independent components that both authors agreed represent neural sources, based on visual inspection of power spectra shape (decreasing power with increasing frequency [8]) and position within the head.
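As a quick orientation, the following minimal MATLAB sketch shows how one session might be loaded and sliced according to the row ordering described above. It assumes EEGLAB is on the MATLAB path; the filename is a hypothetical example, and the _channels.tsv file remains the authoritative source for row labels.

```matlab
% Minimal sketch: load one session and slice the channel rows.
% The filename below is a hypothetical example; consult _channels.tsv
% for the authoritative row labels and units.
EEG = pop_loadset('filename', 'sub-01_task-pullStand_eeg.set');

eeg     = EEG.data(1:128,   :);  % high-density EEG (BioSemi 128-channel layout)
neckEMG = EEG.data(129:130, :);  % posterior neck EMG (2 rows)
eog     = EEG.data(131:133, :);  % electrooculography (3 rows)
legEMG  = EEG.data(134:141, :);  % 4 lower leg muscles per leg (8 rows)
pos3D   = EEG.data(142:147, :);  % 3D position of head and sacrum (6 rows)
pullF   = EEG.data(148:149, :);  % load cell pull forces (2 rows)

goodChans = EEG.etc.good_chans;  % indices of electrodes judged not noisy
events    = EEG.event;           % perturbation type, onset and duration
```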
Finally, we include the estimated DIPFIT2 equivalent dipole information for each independent component in the etc.dipfit field [9].

Experimental Design, Materials and Methods

We collected data from 30 healthy, young adults (15 females, 15 males; 22.5 ± 4.8 years old [mean ± SD]). All participants self-identified as right hand/foot dominant and had normal or corrected vision. We screened participants for any orthopedic, neurological, or cardiac conditions as well as for motion sickness in virtual reality. All participants provided written informed consent. Our protocol was approved by the University of Michigan Institutional Review Board.

Fig. 1. Experiment design and overview of recorded data streams. Our dataset was recorded from 30 participants during four 10 min sessions where participants were exposed to brief side-to-side pulls or visual field rotations while either walking or standing on a treadmill-mounted balance beam. Each session contains 150 perturbation events (75 in each direction). During each session, high-density electroencephalography (EEG), electrooculography (EOG), surface electromyography (EMG), three-dimensional body position (via motion capture), and pull force were recorded and synchronized at a 256 Hz sampling rate.

Experimental design

Participants underwent four 10 min recording sessions of either standing or walking on a treadmill-mounted balance beam. The balance beam was 2.5 cm tall and 12.7 cm wide, which enforced tandem gait and tandem stance. In all sessions, participants wore a body-support harness for safety and crossed their arms. We instructed participants to move only their hips side-to-side while balancing and to avoid rotating across the longitudinal axis of their body [10,11]. During walking sessions, participants walked heel-to-toe at 0.22 m/s. For standing sessions, we instructed participants to stand with their right foot in front of their left foot.

In each session, participants were exposed to one of two sensorimotor perturbations: a virtual-reality-induced visual field rotation or a mediolateral pull at the waist (Fig. 1). We used an Oculus Rift DK2 virtual reality headset to present visual field rotation perturbations by displaying a passthrough view from a video camera mounted to the headset (Logitech C930e; Logitech, Lausanne, Switzerland), located near the participant's nose. At the onset of each rotation perturbation, this passthrough view was instantly rotated 20° clockwise or counterclockwise using Unity 5 software (Unity Technologies, San Francisco, USA). This rotated view lasted for 0.5 s before instantaneously reverting to the original, unrotated view.

We performed mediolateral pull perturbations using two electromechanical motors placed on either side of each participant. Each motor was fastened to one end of a thin 30.5 cm-long metal bar, with a steel cable connected at the other end. This cable was attached to the body-support harness close to the participant's waist. At the start of each pull perturbation, one motor would be commanded (dSPACE GmbH, Paderborn, Germany) to rotate the attached bar 90° away from the participant for 1 s, pulling the participant towards their left or right. Participants were separately exposed to each perturbation type while either walking or standing, resulting in four sessions total. During each session, participants were perturbed 150 times (75 in each direction) in a pseudo-random sequence.
For each perturbation type, participants always performed the standing session first, followed by the walking session. We randomly selected half of the participants to perform the rotation sessions first, while the other half was exposed to the pull perturbations first.

Data Acquisition

We recorded multiple biosignals during each session, including high-density EEG, EOG, neck and lower leg EMG, and motion capture (Fig. 1). We performed EEG recordings with a 136-electrode BioSemi ActiveTwo system with gelled electrodes (512 Hz sampling rate; BioSemi BV, Amsterdam, Netherlands). All electrode positions were precisely measured using an ELPOS Digitizer (Zebris Medical GmbH, Isny, Germany). We used two of the BioSemi electrodes to measure posterior neck muscle activity. In addition, we placed three BioSemi electrodes around the eyes to record EOG (see Fig. 1 for placement). We recorded surface EMG (1000 Hz sampling rate; Biometrics Ltd, Newport, UK) from 4 lower leg muscles on each leg: tibialis anterior, soleus, medial gastrocnemius, and peroneus longus. We selected leg muscles that were relevant to walking and mediolateral balance. In addition, we recorded three-dimensional positions of the head and sacrum using reflective motion capture markers, sampled at 100 Hz. We also attached tensile load cells (1000 Hz sampling rate; Omega Engineering, Norwalk, USA) in series with both cables to record pull perturbation force and onset times. Leg EMG, motion capture, and load cell data streams were recorded synchronously using Vicon Nexus software (Vicon Motion Systems, Oxford, UK).

EEG, EOG, and neck EMG pre-processing

We pre-processed EEG, EOG, and neck EMG together using custom EEGLAB scripts [6]. Data were downsampled to 256 Hz, high-pass filtered at 1 Hz, referenced to the common median of all electrodes, and processed with Cleanline to minimize line noise at 60 Hz and its harmonics (https://github.com/sccn/cleanline). We also identified noisy EEG electrodes that had abnormally high standard deviation, had kurtosis > 5 standard deviations above the average electrode, or were uncorrelated for > 1% of the time [12,13]. This process identified 17 ± 7 (mean ± SD) noisy electrodes per participant.

Data synchronization and alignment

We synchronized all data streams using a 0.5 Hz square pulse sent to every recording device. All data streams were aligned to the pre-processed EEG data with 256 Hz sampling. Prior to alignment, we low-pass filtered the leg EMG and load cell data using a 4th order Butterworth filter with a 250 Hz cutoff frequency to avoid aliasing. We also identified and visually verified corresponding sync rising and falling edges across data streams. Next, we used the timing of each rising and falling edge to segment the leg EMG, motion capture, and load cell data streams such that each ∼1 s segment started and ended when the sync signal either rose or fell. Because the sync rising and falling edges are aligned across all signals, these segments are synchronized across data streams but need to be resampled to match the EEG sampling rate. To achieve this, we interpolated each segment to the number of EEG timepoints between the corresponding sync signal edges using MATLAB's interp1 function. We chose this interpolation procedure to minimize alignment errors due to dropped frames in any of the data streams.
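A minimal sketch of this segment-wise resampling is given below, using synthetic stand-ins for the real data streams; the variable names and the synthetic signals are ours, not the released code's.

```matlab
% Segment-wise resampling sketch with synthetic stand-ins for the data.
emg      = sin(2*pi*10*(0:0.001:10));  % fake 1000 Hz leg EMG, 10 s
eegEdges = 1:256:2561;                 % sync edge sample indices at 256 Hz
emgEdges = 1:1000:10001;               % matching edge indices at 1000 Hz

emgAligned = [];
for k = 1:numel(eegEdges) - 1
    nTarget = eegEdges(k+1) - eegEdges(k);     % EEG samples in this segment
    seg     = emg(emgEdges(k):emgEdges(k+1));  % EMG samples in this segment
    tSrc    = linspace(0, 1, numel(seg));      % source time base
    tDst    = linspace(0, 1, nTarget);         % target time base
    emgAligned = [emgAligned, interp1(tSrc, seg, tDst)]; %#ok<AGROW>
end
```

Because each segment is interpolated independently between verified sync edges, a dropped frame can only distort the single ∼1 s segment in which it occurs.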
Perturbation Event Timings

We identified the onset times for both sensorimotor perturbation types. For visual field rotations, we programmed virtual keyboard button presses to occur at the onset of each rotation, with different keys distinguishing between clockwise and counterclockwise rotations. These button presses were automatically synchronized to the EEG recordings using Lab Streaming Layer [14]. We estimated pull perturbation onset times by identifying the peaks in the detrended load cell data and then finding when the load cell first went 3 standard deviations above baseline voltage before each peak. We visually inspected all peak detections and pull onset event times to ensure accuracy.
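A rough illustration of that onset-detection logic is sketched below. This is our simplification, not the authors' released script; the synthetic load cell trace and the use of the whole-trace mean and standard deviation as the baseline are assumptions, and findpeaks requires the Signal Processing Toolbox.

```matlab
% Rough sketch of pull-onset detection (our simplification, not the
% authors' script). A synthetic load cell trace stands in for the data.
rng(0);
loadCell = 0.05*randn(1, 2560);              % ~10 s of baseline noise at 256 Hz
loadCell(1000:1199) = loadCell(1000:1199) + ...
    5*exp(-((1:200) - 100).^2 / 500);        % one synthetic pull event

lc  = detrend(loadCell);                     % remove slow drift
thr = mean(lc) + 3*std(lc);                  % baseline + 3 SD (simplified)
[~, pkLocs] = findpeaks(lc, 'MinPeakHeight', thr, 'MinPeakDistance', 256);

onsets = zeros(size(pkLocs));
for k = 1:numel(pkLocs)
    i = pkLocs(k);
    while i > 1 && lc(i-1) > thr             % walk back to the crossing
        i = i - 1;
    end
    onsets(k) = i;   % first sample above baseline + 3 SD before the peak
end
```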
3,008.6
2021-11-25T00:00:00.000
[ "Biology", "Computer Science" ]
A spot laser modulated resistance switching effect observed on n-type Mn-doped ZnO/SiO2/Si structure

In this work, a spot laser modulated resistance switching (RS) effect is observed for the first time on an n-type Mn-doped ZnO/SiO2/Si structure, formed by growing an n-type Mn-doped ZnO film on a Si wafer that is covered with 1.2 nm of native SiO2 and has a resistivity in the range of 50-80 Ω·cm. The I-V curve obtained in dark conditions shows that the structure is a rectifying junction, which is further confirmed by applying an external bias. Compared to the resistance state modulated by the electric field alone in the dark (without illumination), the switching voltage that drives the resistance state of the structure from one state to the other shows a clear shift under spot laser illumination. Remarkably, the switching voltage shift shows a dual dependence on the illumination position and the power of the spot laser. We ascribe this dual dependence to the electric field produced by the redistribution of photo-generated carriers, which enhances the internal barrier of the hetero-junction. A complete theoretical analysis based on the junction current and the diffusion equation is presented. The dependence of the switching voltage on spot laser illumination makes the n-type Mn-doped ZnO/SiO2/Si structure sensitive to light, which thus allows for the integration of an extra functionality in ZnO-based photoelectric devices.

Methods

The structure is a vertical stack with the n-type Mn-doped ZnO film grown on a Si wafer. The (111) Si wafer was 0.3 mm thick, had a resistivity in the range of 50-80 Ω·cm, and was covered with a thin native SiO2 layer of 1.2 nm. The n-type Mn-doped ZnO film was deposited by co-sputtering a specially designed Al-doped ZnO ceramic target (composed of 2% Al2O3 and 98% ZnO) and an Mn metallic (99.99%) target at room temperature. The base vacuum of the chamber was better than 4.5 × 10⁻⁵ Pa prior to deposition, and a working argon pressure of 0.85 Pa was maintained during deposition. The dopant concentration of Mn was controlled by the DC magnetron sputtering power of the Mn metallic target while the radio frequency power of the ceramic target was maintained at 50 W. The alloyed indium electrodes for electrical measurement, which showed no measurable rectifying behaviour, had a diameter of less than 1 mm. All current-voltage (I-V) characteristics were measured with a Keithley-4200 semiconductor characterization system. During the sweeping process, the sweep rate was 6 s/curve with a 0.1 V step, which was used in all measurements. The optical transmittance spectra of the deposited films were determined by UV-Vis-NIR spectrophotometer in the range of 200-1000 nm on glass substrates. The spot laser used in the study is a 532 nm laser focused to a roughly 50 µm diameter, whose output optical power ranges between 0 and 10 mW through an optical attenuator. A light emitting diode (LED) is used as the light source to avoid any influence of heat transfer into the sample.

Results and Discussion

In Fig. 1 we present the typical I-V characteristic of the proposed structure measured in dark conditions (without illumination) at room temperature, in linear (a) and semi-log (b) plots, separately. The voltage is swept in four parts following the sequence marked in Fig. 1, adopting a current compliance of 1 mA to avoid permanent dielectric breakdown of the device. The structure is initially in a low resistive state (LRS) under −10 V negative bias, while a voltage above a critical value (about −7 V) switches it to a high resistive state (HRS).
There exists a significant difference in the output current between the two states, corresponding to an ON/OFF state, which is a capacitively induced phenomenon and the working principle of thin film transistors [18,22-24]. For forward bias, which corresponds to a positive sweeping voltage applied to electrode A, only a small current is observed, independent of the voltage value. This I-V curve shows that the prepared structure is a rectifying junction. With respect to the inverse sweeping process (parts 3 and 4), the LRS is read at a voltage of about −9 V. According to the polarities of the applied voltages, this RS behaviour is categorized as unipolar type, which is a novel phenomenon for a memory material grown on a Si substrate [4].

To exhibit the polarity of the RS effect obtained in the prepared structure, we further measured the I-V characteristic curves of the junction adopting a compliance of 0.1 A to record the actual current value. As shown in Fig. 2(a), the I-V curves measured in inverse mode show an axial symmetry property. When the sweeping electrode A (shown in the inset) is connected as the ground terminal, the switching voltage is correspondingly reversed to 7 V from −7 V, implying an electric-field-driven origin. For further verification, we also re-measured I-V curves with electrode B placed at different biases, displayed in Fig. 2(b). For simplicity, all curves presented in the following were measured with the sweeping voltage connected to electrode A unless otherwise specified. As shown in Fig. 2(b), the switching voltage presents a linear shift with the bias within a limited range, owing to the linear offset of the bias and sweeping voltage. Meanwhile, the offset linearity deteriorates significantly at high bias voltage (either positive or negative), which also suggests the junction current characteristic of the structure.

When the spot laser was introduced as a co-stimulation to modulate the RS effect, the switching voltage showed a dual dependence on both the illumination position and the laser power. To investigate this dual influence, we re-measured the I-V characteristic curves of the prepared structure in two specially designed modes, defined as CP (constant power) mode and CD (constant distance between the illuminating position and the electrode) mode, respectively. Figure 3 presents the I-V curves in response to the spot laser illuminating different positions on both the Si substrate and the film surface in CP mode (with a 2 mW output laser power). As shown in Fig. 3(a,b), the switching voltage varies markedly as the spot laser illumination moves away from the electrode position. To display the influence of illumination position on the switching voltage clearly, enlarged views of the current response to the voltage during the switching process are exhibited in Fig. 3(c,d), respectively. Compared to the switching voltage needed without illumination, the switching voltage shift is largest when the spot laser illuminates near the electrode. On the film side, the switching voltage shifts to −7.4 V from −7 V. With increasing distance between the illumination position and the electrode, the switching voltage shift becomes smaller and smaller and is finally negligible once the distance reaches a threshold value. On the film side, the switching voltage returns to −7 V at a 5 mm distance, indicating a threshold value of 5 mm. By contrast, there is a slight difference on the substrate side.
When the spot laser illuminated a position 5 mm away from the electrode, the switching voltage was −7.1 V, which remains offset from the initial value. We ascribe this to the different influence of the photo-generated carriers on the two sides, which will be explicated in detail later. In Fig. 4, we give the I-V curves in response to laser illumination of different powers in CD mode, in which the laser illumination position was fixed near the electrodes on both the film and Si substrate sides. The output optical power ranges from 0 to 6.8 mW, modulated through an optical attenuator. As shown in Fig. 4(a,b), the switching voltage shift becomes more and more obvious on both the film and the Si substrate with increasing laser power and is largest under a 2 mW laser illumination. On the film side the largest switching voltage is −7.4 V, and on the Si substrate it reaches −7.6 V. As the illumination laser power exceeds 2 mW, the switching voltage stops shifting and becomes saturated on both the film side and the Si substrate, as shown in Fig. 4(c,d). On the film side the switching voltage stays constant at −7.4 V. In the same condition, the constant switching voltage is kept at −7.6 V on the Si substrate side. Noticeably, there exists a slight difference in the switching process as the laser power exceeds the saturation value. The proposed structure has switched to the high resistance state with the laser illuminating the film side; by contrast, it tends to remain in the low resistance state on the Si substrate.

The mechanism behind RS has been controversial for a long time. There is now consensus on the conducting filament model due to the direct evidence given by high-resolution conducting atomic force microscopy (CAFM) [7,11,31]. Here we just focus on the dual dependence of the switching voltage on the position and power of the illuminating laser, which we ascribe to the offset effect of the enhanced internal barrier of the hetero-junction, induced by the redistribution of photo-generated electron-hole pairs, superimposed on the switching voltage. As shown in Fig. 5(a), a local electric field opposite to the built-in electric field is formed during the switching process (marked as E_SV) between the electrodes. Without laser illumination, the switching voltage overcomes the built-in field (marked as E_built-in) and drives the structure to a different resistance state by forming localized conducting filaments [32]. When the prepared structure is illuminated by the spot laser, either on the film or on the Si substrate near the electrode, most of the photons are absorbed by the Si substrate due to the high transmittance of the film, which is well confirmed by the transmittance spectra of the n-type Mn-doped ZnO film displayed in Fig. 5(b), as well as those of the Mn-doped and pure ZnO films. The absorption produces electron-hole pairs in a restricted region of the Si substrate equal to the irradiated area, which diffuse to the junction where they are separated by the electric force. The holes are swept into the film and the electrons remain on the substrate side. This process produces a photo voltage at the illumination position, exhibited as an extra electric field (marked as E_EHPS) in the same direction as the built-in field, which therefore enhances the internal barrier.
This photo voltage V_EHPS, representing the added internal barrier generated by the electron-hole pairs, is related to the number of electron-hole pairs separated by the junction by the following relationship:

$$V_{\mathrm{EHPS}} = \frac{kT}{q}\ln\left(\frac{qf}{J_S} + 1\right) \quad (1)$$

where q is the magnitude of the electronic charge, k is Boltzmann's constant, T is the absolute temperature, and J_S is the saturation current. Noticeably, the spot laser is monochromatic, so the number of photo-generated holes (electrons) f in the illumination area can be written as:

$$f = \frac{\xi T P}{h\nu} \quad (2)$$

where P is the incident laser power, ξ is the quantum efficiency and T represents the transmittance of the laser through the film. The quantized photon energy is hν according to quantum theory (h is Planck's constant and ν is the laser frequency). As the laser illuminates farther from the electrode, the photo-generated holes diffuse along the interface from the illumination position, as shown in Fig. 5(a). Thus the photo voltage generated by the excess holes reaching the area of E_SV becomes:

$$V_{\mathrm{EHPS}}(r) = \frac{kT}{q}\ln\left(\frac{qf}{J_S}\,e^{-r/r_0} + 1\right) \quad (3)$$

where r represents the distance between the illumination position and the electrode, and r_0 is the diffusion length of the photo-generated carriers along the interface. The process is similar to the case calculated in ref. 33.

To verify the relation between the switching voltage shift and the laser position and power, the dependences of the switching voltage shift on laser position and power are shown in Fig. 5(c,d). As shown, the switching voltage shift presents a quasi-linear dependence on the distance between the electrode and the laser position within 5 mm in both the FI (film illuminated) and SI (substrate illuminated) conditions, displayed in Fig. 5(c). The better linearity and the remaining shift in the SI condition at r = 5 mm are ascribed to fewer defects existing in the Si substrate compared to the n-type Mn-doped film. Besides, the dependence of the switching voltage on laser power in the FI and SI conditions, displayed in Fig. 5(d), clearly exhibits a rise of the switching voltage shift with increasing laser power at first, which then saturates as the laser power exceeds the threshold value. However, in CP mode the switching voltage shift observed in the SI condition is much larger than in the FI condition. This is a result of the total absorption of photons in the Si substrate in the SI condition. The total absorption generates more electron-hole pairs, having a larger influence on the switching voltage shift according to eq. (3).

In summary, a spot laser modulated RS effect is observed for the first time on an n-type Mn-doped ZnO film/SiO2/Si structure, in which the Si substrate has a resistivity in the range of 50-80 Ω·cm with a native SiO2 layer. The prepared structure works as a rectifying junction, in which the change of resistance is a capacitively induced phenomenon. By combining spot laser illumination as a co-existing stimulation, the switching voltage varies with both the laser illumination position and power, which adds a completely new degree of freedom to RS modulation. Based on the redistribution of photo-generated electron-hole pairs, the added internal barrier induced by these carriers is proposed to account for the dual dependence of the switching voltage on laser position and power. These achievements suggest a novel approach to improving RS performance. The light-sensitive character also makes the n-type Mn-doped ZnO/SiO2/Si structure an excellent candidate for multi-functional photoelectric devices in RS-based storage technology.
3,078.6
2017-11-09T00:00:00.000
[ "Materials Science", "Physics" ]
Teleosemantics, Structural Resemblance and Predictive Processing

Philosophy, Cognitive Science and Representation

Philosophy and cognitive science have a complicated relationship when it comes to representation. Here is an illustrative caricature of that relationship. Cognitive science departments generate data, and attempt to explain that data using theories. Sometimes those theories posit representational content. At this point, philosophy departments sit up and take notice. Representational content is a long-contested notion in philosophy, and we can't have other disciplines using it without proper analysis. Philosophers then assess how content could be attributed to cognitive systems in the context of the new theory. In a manner of speaking, then, philosophers licence the use of representational content. 1

Predictive processing is a new, ambitious theory in the cognitive sciences. Proponents of the view treat the brain as a sophisticated hypothesis testing system. Models of the world are used to produce predictions of future sensory input, which are then updated based on any difference between predictions and actual sensory input (called prediction error). This process results in more accurate predictions, which in turn means the system minimises prediction error over the long term (Clark, 2013; 2016; Friston & Kiebel, 2009; Hohwy, 2013). Linked probabilistic models of this sort are called "generative hierarchies" due to their ability to recreate incoming sensory states via top-down prediction (Hinton, 2007).

Advocates of the theory refer to "models of the world" (Hohwy, 2016, p. 281) being "encoded" and "updated" in the brain (Clark, 2017, p. 12; Friston et al., 2011, p. 138; Hohwy, 2016, p. 280; Wiese & Metzinger, 2017, p. 10). It is also typical to speak of cognitive systems using these models to "compute predictions" (Clark, 2017, p. 9; Wiese & Metzinger, 2017, p. 5). A framework that appeals to encoded models of the world which compute predictions suggests an interpretation in terms of information-bearing structures that are produced, manipulated and stored by the brain. Consequently, it seems proponents of predictive processing will require a licence for representational content. 2 In other words, we need some way of understanding how it might be that the various parts of a generative hierarchy come to be content-bearing.

Traditionally, it has been assumed that philosophy departments should issue one type of licence. This in turn has generated a lot of disputes among philosophers as they argue the case for their chosen account of content (Cummins, 1996; Dretske, 1981; Fodor, 1990; Millikan, 1984). Often, it is alignment with philosophical intuitions that guides these debates and constrains theory construction. But, as Shea succinctly puts it, "When it comes to subpersonal representations, it is unclear why intuitions about their content should be reliable at all" (Shea, 2018, p. 28).
This suggests it is worth exploring other approaches to the problem. Another strategy, which has only gained interest more recently, acknowledges that finding one overriding account of representation for the cognitive sciences is unlikely to be successful. As such, philosophers should be sensitive to the fact that cognitive scientists employ a range of different notions of representation (Godfrey-Smith, 2004; Planer & Godfrey-Smith, 2021; Shea, 2018). We should hence be in the business of providing pluralist licences for content, precisely because the explanatory work facing cognitive science produces a range of different approaches to representation, which in turn require different notions of content. This involves a particular view on the role of philosophers of science in such debates, one which is more sociologically, or practice, oriented (in what follows, we'll use the latter term). The task facing philosophy is not to isolate a particular concept that covers all cases. Rather, it is to describe and clarify the range of different concepts that are used, or that might be used, to explain the workings of a successful scientific practice. Accordingly, philosophical intuitions do not play a central role in guiding theory construction in the practice-oriented approach. 3 Our pluralism is motivated by this line of thinking.

1 How much attention cognitive science departments pay to this licensing system varies by department, but at least some appear to take it seriously.
2 There are those who deny that predictive processing should be understood in representationalist terms; e.g. Hutto (2018). Here we sideline such debates. Our aim is to provide a teleosemantic analysis of signals in predictive processing systems for those who want to understand such systems in representational terms.

To date, attempts to assign content to predictive processing architectures have appealed to structural representations (Gładziejewski, 2016; Kiefer & Hohwy, 2018; 2019). According to this view, content is determined by a structural resemblance between an internal cognitive state and an external state of affairs. When applied to predictive processing, this is understood as the claim that the causal-probabilistic structure of generative hierarchies resembles the causal-probabilistic structure of the external world. We do not disagree with this approach; however, we think other theories of content, which have themselves been applied in cognitive science more broadly, can also be applied to predictive processing. Specifically, we appeal to teleosemantic thinking. This allows us to target a tightly specified sub-part of the predictive processing machinery. Our approach is to outline how signals in generative hierarchies-that is, predictions and prediction errors-can be given a teleosemantic treatment. In what follows, we use Millikan's sender-receiver model to argue that predictions represent external states of affairs and prediction errors represent the discrepancy between predictions and the states of affairs they predict. We thus advocate an account of the content-determining structures in predictive processing systems that appeals to both teleosemantics and structural representations. In other words, we issue a pluralist licence.
We have two main goals. Our primary goal is to show how a teleosemantic account of the content of signals in generative hierarchies would work. This takes up the majority of the paper. A secondary goal is to make the case for pluralism. We do not spend too much time on this task, as the fact that practice-oriented pluralism (as outlined above) is a position in the literature is reason enough to explore such treatments of predictive processing. Nonetheless, it is interesting to explore how pluralism plays out in this specific case. Predictive processing is claimed to be a highly general theory of action and perception, which applies to all cognitive systems (Hohwy, 2013; Clark, 2016). As such, it will need to be applicable across the phylogenetic spectrum. We think having teleosemantics on the table will help in this task. Accordingly, we expand on this motivation for our approach, and identify some specific cases where a pluralist treatment might be useful.

We proceed as follows. Section 2 provides a brief overview of predictive processing. Section 3 outlines Gładziejewski's causal-probabilistic resemblance account of content in generative hierarchies. Section 4 provides a primer on teleosemantics. Section 5 gives our teleosemantic account of predictions and prediction errors. Section 6 makes the case for pluralism. Section 7 concludes.

Predictive Processing

The literature on predictive processing is a large and complicated body of work, of which there are some excellent introductions (Clark, 2016; Hohwy, 2013). The overview we offer below is a general gloss, and is necessarily selective in the aspects it focuses on. 4 In particular, we aim to draw out the sender-receiver structure of generative hierarchies in order to tie this with teleosemantic theory. Our overview focuses on two features of the theory: (i) hierarchical prediction and prediction error; and (ii) prediction error minimisation. 5 We address each in turn.

Hierarchical Prediction and Prediction Error

The nature of bottom-up and top-down processing is re-conceived on the predictive processing framework. Top-down processing is understood in terms of prediction; more specifically, as attempts to predict future sensory input. Bottom-up processing is understood as the transfer of prediction error, where prediction error is the difference between predicted sensory input and actual sensory input (see Fig. 1).

Predictions are generated by encoded models of the world, which in turn are produced via experience, learning and evolution. These models incorporate hypotheses about the causes of sensory input, and generate predictions about future sensory input. They are hierarchically organised according to the spatiotemporal scales of the causal regularities they address. At lower levels in the hierarchy, models generate predictions at faster time scales and at more fine-grained spatial resolution; for instance, about which sensory transducers will be activated in the immediate future given those that are currently activated. At higher levels in the hierarchy, models generate predictions at slower time scales and at a broader level of spatial resolution; for instance, about the change in temperament of a friend after the birth of their first child. The predictions of models at the lowest level target the states of sensory transducers, whereas the predictions of any model above the lowest level target the states of the model directly below it.
Bottom-up processing is also reformulated on this account. Rather than being an encapsulated process in which perceptual experience is constructed from the raw data of sensory input, bottom-up processing is understood as the transfer of prediction error. At any given layer in the hierarchy, a model will receive prediction error signals from the model below it, attempt to explain away this error by refining its model, and forward any residual error that it cannot explain to the model above it.

Fig. 1. The mechanism at the core of predictive processing. Top-down transfer of predictions and bottom-up transfer of prediction errors across a hierarchy of models.

Prediction Error Minimisation

According to predictive processing, the central goal of a cognitive system is to minimise prediction error over the long-term. There are two ways in which the brain can deal with an active error signal. One option is to formulate a new hypothesis regarding the cause of the sensory input generating the prediction error. This can then be used to produce new predictions which can account for the error signal. On the predictive processing framework, this is the mechanism underlying perception, and is known as perceptual inference. Perception is understood as the product of the system's ability to settle on a hypothesis that best explains sensory input; which is to say that prediction error is minimised. This process exhibits a mind-to-world direction of fit, in so far as states of the brain are adjusted in order to accommodate states of the world. Perceptual inference implies that, at every layer in the hierarchy, models are able to adjust their parameters according to the content of bottom-up prediction error signals. The content of these signals is, broadly speaking, the difference between (the content of) predicted sensory input and actual sensory input.

However, the brain also has the option of exploiting the world-to-mind direction of fit in minimising prediction error. In other words, it can adjust its place in the world in order to accommodate states of the brain. In this case the brain does not alter its hypotheses; instead it acts to bring about changes such that future sensory input matches the predictions of those hypotheses. On the predictive processing framework, this is the mechanism underlying action, and is known as active inference. More precisely, the brain generates action by predicting the proprioceptive sensory input given a hypothetical action, and then minimises the difference between its predicted sensory input and actual sensory input by changing the world or its position in the world. Importantly, active inference is recapitulated in the activity of each individual model in the hierarchy. Every model uses action-here the generation of predictions-to influence the states of the model below it in ways that will alter incoming prediction error, and hence the sensory states of the original model. That is, each model uses its active states to influence its sensory states. This top-down influence of higher models on lower models is typically described in terms of "modulation" or "guidance" (Clark, 2016, p. 146; Kirchhoff et al., 2018).
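To make the mechanism concrete, the following toy MATLAB sketch shows perceptual inference as prediction error minimisation for a single model tracking a single hidden cause. This is our own illustration rather than anything the predictive processing literature specifies; the linear generative mapping and the learning rate are assumptions.

```matlab
% Toy sketch of perceptual inference: one model, one hidden cause.
% The linear generative mapping g and the learning rate are assumptions.
rng(1);
v_true = 3.0;              % hidden worldly cause
g      = @(mu) 2*mu;       % generative (top-down) mapping: prediction
dg     = 2;                % derivative of g, weights the update
mu     = 0.0;              % the model's current hypothesis
lr     = 0.05;             % learning rate
for t = 1:200
    x    = g(v_true) + 0.1*randn;  % actual (noisy) sensory input
    pred = g(mu);                  % top-down prediction
    err  = x - pred;               % bottom-up prediction error
    mu   = mu + lr*dg*err;         % adjust hypothesis to explain away error
end
fprintf('inferred cause: %.2f (true cause: %.2f)\n', mu, v_true);
```

Over iterations the hypothesis converges on the hidden cause, which is one concrete sense in which a model becomes 'increasingly accurate'; stacking such models, with each one's predictions targeting the states of the model below, yields the hierarchy described above.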
So, according to predictive processing, both perception and action are products of the more general imperative to minimise prediction error, and hence are explained by appeal to a single computational mechanism. Moreover, the theory implies that every model in the hierarchy is able to produce contentful predictions and prediction errors, and is in turn capable of adjusting its parameters in response to contentful predictions and prediction errors. This part of the predictive processing mechanism will be the target of our teleosemantic analysis. 6

The Sinister Figure Example

A simple example (one that will be familiar to most) illustrates the mechanism being proposed here. Imagine that you have just woken up in the middle of the night. As you yawn and stretch, you happen to glance toward the corner of your room, and see what looks to be a sinister figure lurking there. Startled, you quickly sit up and turn on the light. Thank God, you gasp-it was just a pile of clothes strewn across a chair! According to predictive processing, this case should be analysed as follows. The hypothesis that the cause of your initial sensory input was a (sinister) figure provides an excellent explanation of that input. As such, the best way to minimise prediction error was to deploy the sinister figure hypothesis; which in turn explains the character of your visual experience. But the alarming nature of that experience immediately brings about the need to investigate further. The sinister figure hypothesis generates the prediction that you will get a better look at whoever it might be if you sit up and turn on the light. This high-level prediction modulates the behaviour of models below it, which in turn produce further predictions, and so forth down the hierarchy. A cascade of predictions relating to the hypothetical action-you turning on the light-is thus generated. If the hypothesis 'I am turning on the light' is held fixed, this will result in a corresponding cascade of prediction errors rising up the hierarchy, as sensory input will not match predictions. By moving in such a way as to turn on the light, this error signal is minimised. However, the new sensory state generated by turning on the light is not explained by the original (sinister figure) hypothesis. So again we have a difference between predicted sensory input and actual sensory input. Consequently, a new hypothesis must be deployed to suppress the error rising through the system. The hypothesis that there is a pile of clothes on a chair in your room explains the new sensory input well. By producing a new hypothesis-the untidy chair hypothesis-the error signal can be explained away. Prediction error is then minimised if the system settles on this hypothesis. Although just a toy example, this gives us an idea of how predictive processing understands the computational link between action and perception. In the end, both are strategies the brain uses to minimise prediction error. Furthermore, the combined processes of predicting sensory input and updating models in response to prediction error allow the system to build increasingly accurate models of the world. A cornerstone of the framework is that every model in the hierarchy is able to produce and respond to contentful predictions and prediction errors.

The preceding discussion raises two important questions. First, in what sense do models become 'increasingly accurate'? Second, how do prediction and prediction error signals get their content? In the next three sections we address these questions.
A Structural Resemblance Account of Content for Generative Hierarchies

We noted in the introduction that previous attempts at ascribing content to predictive processing architectures have appealed to structural resemblance. We agree that this strategy constitutes a plausible theory of content for generative hierarchies. In this section, following Gładziejewski (2016), we outline the sense in which internal models structurally resemble the external world. In the following sections, we outline a teleosemantic theory of the content of signals in predictive processing architectures.

The core claim put forward by proponents of structural representations is that content is determined, to some extent, by a structural resemblance between an internal cognitive state and an external state of affairs. The challenge is then to determine precisely what this structural resemblance amounts to, in any particular case of representation. Gładziejewski (2016, p. 566) cites cartographic maps as the "golden standard" for structural representations. This is because: (1) they are representational; (2) they guide the actions of their users; (3) they do so in a detachable way; and (4) they allow their users to detect representational errors. Fulfilling the latter three conditions is an important part of any theory of representation (especially if, following Gładziejewski, we want to meet Ramsey's (2007) job description challenge). However, here we will focus on the first condition: how exactly is it that models in predictive processing architectures structurally resemble external states of affairs?

When it comes to cartographic maps, the structural resemblance relation is spatial. For example, if my map of the university depicts the cognitive science department as being closer to the cricket pitch than the philosophy department, then we can conclude that the layout of the university itself is such that the cognitive science department is closer to the cricket pitch than the philosophy department. Of course, in the case of predictive models, it is implausible that the structural resemblance relation is between spatial quantities. Rather, the claim is that the causal-probabilistic structure of internal models resembles the causal-probabilistic structure of external states of affairs. Gładziejewski (2016, pp. 571-572) argues that causal-probabilistic resemblance has three dimensions. The first of these is a probability distribution, which defines a likelihood. According to predictive processing, variables in a model encode the probability of some sensory input occurring given some external state of affairs. 7 The claim, then, is that the relation between variables in a model and lower-level sensory activity structurally resembles the relationship between worldly causes of that sensory activity and the activity itself. For an example we will repeatedly draw on below, consider the capacity of a trained rat to press a lever to retrieve food. The rat's hierarchical model represents the lever in terms of the probability that certain sensory patterns are produced; from short-term time scales-such as the colour and shape of the lever-to more long-term time scales-such as the interoceptive sensations associated with the digestion of food. The probabilistic relationship between the lever-representing model and sensory input thus structurally resembles the causal relationship between the actual lever and sensory input.
However, models do not predict sensory input in a straightforward manner. As we have seen, the system as a whole predicts sensory input transitively, in that higher-level models produce predictions of activity in lower-level models. This suggests a causal-probabilistic structural resemblance between (on the one hand) the values of interacting variables evolving via inter-model dynamics and (on the other) causal relationships between objects in the world. If, for example, there is a causal relationship between lever-pressing and food, then this relationship should be recapitulated in the way that the values of different variables across models influence one another. So levers can be represented not only in terms of their relationship to future sensory input, but also in the way they causally interact with other objects. This is the second dimension of structural resemblance.

Models also structurally resemble causal-probabilistic relationships in the world via encoded priors. If a generative hierarchy is to realise Bayesian reasoning, it must be capable of comparing the probability that a lever is the cause of current sensory input with the probability that the system would encounter a lever, independently of the evidence provided by current sensory input. For instance, if it is more likely that our trained rat encounters actual functioning levers, rather than objects that look like levers but cannot be pressed, then the system should prefer the former hypothesis. The values of priors thus structurally resemble the experience-independent causal-probabilistic structure of the world. This is the third dimension of structural resemblance.

We now have a sketch of how content in generative hierarchies might be understood in terms of causal-probabilistic resemblance with the world. However, given our practice-oriented approach, it will be useful to have more than one account of content on the table. This will allow predictive processing to be applied in case studies that might require different notions of content. In Sect. 5, we will outline how teleosemantics can provide an account of content for signal passing between models. In Sect. 6, we explain why this is important and describe such a case study. But first we offer a brief primer on teleosemantic theory.

Teleosemantics

Teleosemantics defines a representation as an intermediary between two cooperating devices: (1) a sender, which produces the intermediary, and (2) a receiver, which conditions its behaviour on the intermediary. The sense in which these devices must be 'cooperating' is cashed out in terms of proper functions. A proper function is a causally downstream outcome that a device has been selected for bringing about, either through natural selection, reinforcement learning, explicit design or some other appropriate selection process.
We will briefly introduce proper functions before describing their role in the definition of representational content. Many biological devices are adaptations, having selected effects that contribute to their proliferation. The mammalian heart, for example, has a selected effect to pump oxygenated blood around the body. In achieving this effect hearts contribute to the reproduction of the genes that produced them, thereby contributing to the production of more hearts in future. When causal effects lead devices to be reproduced, teleosemantics calls those effects proper functions. However, the term is not only applied to devices produced by genes proliferating due to natural selection. Any device that owes its present form to selection on the effects of its 'ancestors' has a proper function. Consider again the capacity of a trained rat to press a lever to retrieve food. This capacity has lever-pressing as a proper function. A lever-pressing disposition has been reinforced by the reliable appearance of food after individual lever-pressing events. The disposition 'proliferates' because previous manifestations of that disposition were followed by consumption of food. For a disposition to proliferate here means being more likely to occur in a given environment than other possible dispositions. Reinforcement is therefore construed as selection (Hull et al., 2001); it is differential retention of a certain disposition (lever-pressing) and is relevantly similar to the kind of process exemplified by natural selection. In the case of reinforcement learning, the 'ancestors' of a present behaviour are earlier instances of that disposition performed by the learner.

How do proper functions generate representational content? Entities that stand in a sender-receiver relationship to each other, and have a shared proper function as a consequence of selection, endow their intermediaries with representational content. The justification for this definition is as follows. The shared proper function is a downstream causal effect that the receiver must exercise causal influence to bring about, modelled in Fig. 2 as a certain value of the 'Effect' variable. However, external states of the world also have causal influence on the effect, meaning the receiver cannot simply act to produce the desired value. If the receiver could condition its behaviour on the external state, it could produce an appropriate act in order to ensure the effect takes the value required. But it cannot observe the state directly: the best it can do is condition its behaviour on the intermediary. When conditioning on the intermediary leads to greater success than acting unconditionally, teleosemantics asserts that this must be due to a relation between the intermediary and the external state. Teleosemantics identifies this relation as the basic form of representational content. When these circumstances hold, the intermediary is a representation and the external state is its truth condition.
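A minimal simulation makes the justification vivid. In the sketch below (signal reliability and payoff structure invented for illustration), a receiver that conditions its acts on the sender's signal succeeds far more often than one that acts unconditionally; on the teleosemantic analysis, that gap is precisely what the signal-state relation is posited to explain.

```python
import random

# Minimal sender-receiver sketch: the receiver's proper function is to
# produce the act that matches a hidden external state.
random.seed(0)

def run_trials(condition_on_signal, n=10_000, signal_reliability=0.9):
    successes = 0
    for _ in range(n):
        state = random.choice(["A", "B"])  # hidden external state
        # Sender observes the state and emits a (mostly) reliable signal.
        flipped = "B" if state == "A" else "A"
        signal = state if random.random() < signal_reliability else flipped
        # Receiver either conditions on the signal or acts unconditionally.
        act = signal if condition_on_signal else "A"
        successes += (act == state)
    return successes / n

print("unconditional:", run_trials(False))  # ~0.50
print("conditioning: ", run_trials(True))   # ~0.90; this gap is what
# teleosemantics explains by positing a signal-state relation
```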
There are in fact two kinds of basic representational relation. The one more commonly referred to is the descriptive relation, which holds between the signal and the external state. The other is the directive relation, which holds between the signal and the proper functional effect it is supposed to help bring about. Because teleosemantics was originally developed as a theory of human natural language, the two basic relations are usually associated with indicative sentences (that say how the world is) and imperative sentences (that say what action to take). In basic systems, these two aspects are tightly coupled. A signal will have one particular state to which it corresponds, and simultaneously one particular act it is supposed to prompt. In more complex systems, descriptive and directive aspects can come apart. There can be purely descriptive signals, which correspond to individual states of the world but do not prompt any single action. Complex systems can combine descriptive signals to form an accurate picture of the world and guide flexible behaviour. There can also be purely directive signals, which prompt specific actions but need not be tied to specific environmental circumstances.

Fig. 2 The basic teleosemantic model. The Receiver has a proper function to bring about some Effect (in a causal model, this function would be specified as a requirement to set the effect variable to a certain value). However, the receiver is hindered by interference from some State, causally upstream of the effect, on which the receiver cannot directly condition its behaviour. The Sender, which has a proper function to help the receiver achieve its function, produces a Signal on which the receiver can condition its behaviour. Teleosemantics asserts that when the receiver conditions its behaviour on the signal and is more successful than it would have been otherwise, this increased success can only be fully explained by adverting to a relation between the signal and the state. This relation is then the basic representational relation, or descriptive relation. The signal bears a directive relation to the proper functional effect (descriptive and directive relations illustrated with dashed lines). This figure and caption first appeared in Mann & Pain (2022).

The basic teleosemantic framework depicted in Fig. 2 occurs within models of cognition, and practitioners often draw on concepts of signalling, messaging, information or representation in giving explanations. The theory thus offers an attractive option for understanding the content of prediction and prediction error signals in generative hierarchies, especially within the context of the practice-oriented approach.

A Teleosemantic Account of Content for Predictions and Prediction Errors

In this section we bring together teleosemantics, predictive processing, and structural resemblance. Our goal is to show how predictions and prediction error signals get their content.

Models in the Hierarchy are Senders and Receivers

Predictions and prediction errors are signals sent between models in the generative hierarchy. Models play the role of senders and receivers in the teleosemantic framework. Consequently, our initial task is to address the following question: what is the proper function of a model in a generative hierarchy? At first pass, there look to be at least two plausible answers to this question.
In the broadest sense, a model is adaptive in so far as it is accurate with respect to the world. As we have seen, on Gładziejewski's structural resemblance account, models resemble the causal-probabilistic structure of the world. To increase a model's accuracy is thus to increase its causal-probabilistic resemblance with the world. All other things being equal, this allows an organism to interact more successfully with its environment. For instance, in the case of a trained rat, an accurate model will more reliably bring about the pressing of a lever that delivers food. So we might want to say that, in general, the proper function of a model is to accurately represent the world.

However, a model does not have direct access to the world; how then can it accurately represent it? In the case of our rat, the problem is that the success-relevant effect (that is, the pressing of the lever) requires having an accurate model of a state of the world (that is, the lever itself). But the model cannot directly condition its behaviour on that state. What the model can directly access is the incoming sensory signal, and the flow of top-down predictions and bottom-up error signals. As we have seen, a core commitment of predictive processing is that by conditioning their behaviour on these signals, models will become more accurate with respect to the world. Hohwy (2013, pp. 50-51) argues that, for a model in a predictive processing hierarchy, increasing mutual information with worldly affairs is extensionally equivalent to minimising prediction error. On the structural resemblance account outlined above, a model increasing its mutual information means that the values of hidden variables will come to map more reliably onto causal-probabilistic relationships between objects in the world and an organism's sensory states. Consequently, in a more restricted sense, we can say that models are adaptive in so far as they minimise prediction error. It is hence possible to understand prediction error minimisation as the proper function of a model.

The upshot is this. Minimally, the proper function of a model is to minimise prediction error. However, given that this entails that mutual information between a model and the world is maximised, it is extensionally equivalent to saying that the proper function of a model is to accurately represent the world. And in any specific case, this will cash out as the need to accurately represent some particular part of the world. For instance, an accurate model of a lever is selected for in a rat via learning because it aids in the pressing of the lever, which delivers food.
The core commitments of teleosemantics and predictive processing thus mesh together well. Predictive processing offers a mechanism for understanding how the brain overcomes the central inferential problem it faces: identifying the external structure of the world from the noisy, uncertain signals it has direct access to. The structure of this mechanism should be familiar to teleosemanticists: by conditioning its behaviour on an internal signal, a device can aid an organism by producing adaptive responses to the external environment. What teleosemantics offers is a way of understanding why predictions and prediction error signals can be understood as representational. This is because explaining the increased success produced by more accurate models requires positing a relation between intermediaries (predictions and prediction errors) and external success-relevant circumstances. In the remainder of this section, we run through the mechanics of this proposal in more detail.

The Content of Prediction Signals

According to predictive processing, every model throughout the generative hierarchy is constantly issuing predictions about the sensory input of the model directly below it. More specifically, higher models in the hierarchy issue predictions of future sensory input which determine prior distributions used by lower models. When these predictions fail to match the sensory input the lower model receives from even further down, error begins to rise in the system. By adjusting states of the world and their place in it, organisms can reduce this error. From a teleosemantic perspective we can understand the higher model in the hierarchy as the sender, the lower model as the receiver, and the prediction as the signal (see Fig. 3).

The two models are a pair of cooperating devices. The proper function of the receiver-model is to minimise prediction error over the long term and thus maximise its accuracy with respect to the causal-probabilistic structure of the world. But attaining these success conditions involves tracking circumstances that the model cannot directly access (long-term error minimisation and states of the world). The sender-model emits a prediction signal, on which the receiver-model conditions its behaviour. More specifically, the prediction signal modulates the priors of the receiver-model, such that they reflect (at a finer spatio-temporal grain) the priors of the sender-model. The organism will then act to reduce the error that arises from the predictions produced when model priors are set in this way. This process of actively testing predictions against the world minimises prediction error over the long term. Consequently, by conditioning its behaviour on the prediction signal, the receiver-model is better able to achieve its proper function.
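The following sketch (a Gaussian toy; the precision weights and update rule are illustrative assumptions, not a commitment of the theory) shows a top-down prediction fixing a lower model's prior, and the resulting error that, in active inference, the body moves to quash.

```python
def lower_model_update(prior_mean, prior_precision, sensory_input, sensory_precision):
    """Precision-weighted combination of prior and input;
    returns the posterior mean and the residual prediction error."""
    posterior_mean = (prior_precision * prior_mean + sensory_precision * sensory_input) / (
        prior_precision + sensory_precision
    )
    prediction_error = sensory_input - prior_mean
    return posterior_mean, prediction_error

# The higher model (sender) predicts the lever is pressed: its prediction
# fixes the lower model's (receiver's) prior.
top_down_prediction = 1.0   # "lever pressed"
sensory_input = 0.0         # lever currently unpressed

posterior, error = lower_model_update(
    top_down_prediction, prior_precision=4.0,
    sensory_input=sensory_input, sensory_precision=1.0,
)
print(posterior, error)  # large error: with priors held fixed, the body
                         # must move (press the lever) to quash it
```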
On the teleosemantic analysis, explaining this success requires positing a relation between the internal signal and an external success-relevant condition. In the case of our trained rat, successful active inference will more reliably bring about lever-pressing. The goal of lever-pressing is selected at the highest level in the rat's cognitive system. Each model in the system then modulates its priors according to top-down predictions regarding the sensory input expected from pressing the lever. The priors of the models are held fixed, and hence the only way to reduce the ensuing prediction error rising up through the system is to move in such a way as to match the initial predictions. This then brings about the actions required to complete the goal of lever-pressing. There is hence a descriptive relation between the prediction signal and the lever. In the case of active inference, initially this descriptive relation will mis-represent the lever. That is, it will predict the sensory input associated with the pressed lever, and not the lever as it currently is (unpressed). The prediction signal will come to accurately represent lever-pressing when the motor system has moved the body in such a way as to reduce error and bring about the system's goal. Thus there is a directive relation between the prediction and the external effect of lever-pressing.

Fig. 3 The content of prediction signals. P: Prediction; M1: a lower model in the hierarchy; M2: a higher model in the hierarchy. M2 emits P, which determines the priors of M1. These quantities are then held fixed, such that minimising the error raised against them results in bringing about the effect that is the proper function of M1. Over the long term, this process will both increase mutual information between models and the world and increase the accuracy of the system's predictions. According to teleosemantics, explaining this success requires positing a relation between P and the external success-relevant circumstances. A descriptive relation (represented with a dashed line) holds between P and upcoming sensory input of M1. In the case of active inference, the content of P will mis-represent some state of the world. A directive relation (represented with a dashed line) holds between P and the effect that it is the proper function of M1 to bring about: altering the priors that encode its expectations about future sensory input, and eventually raising a prediction error if that input diverges from P.
The portrayal of action as a form of inference highlights a clash of perspectives between active inference and teleosemantics. Proponents of active inference say that since the process by which actions are chosen is relevantly similar to the process by which models are updated, we should describe action as a form of inference. Contrariwise, proponents of teleosemantics say that since anything that plays the role of action in the teleosemantic schema counts as action, and updating a model counts as action in the schema, so perceptual inference (which consists in updating a model) counts as action. We believe this is a difference of perspective rather than a disagreement over matters of fact.

Fig. 4 The content of prediction error signals. PE: Prediction Error; M1: a lower model in the hierarchy; M2: a higher model in the hierarchy. M1 emits PE, on which M2 updates its priors in order to account for the error. Conditioning its behaviour in this way will both increase mutual information between itself and the world and increase the accuracy of the model's predictions. According to teleosemantics, explaining this success requires positing a relation between PE and the external success-relevant circumstances. A descriptive relation (represented with a dashed line) holds between PE and the magnitude of the difference between earlier predictions of M2 and sensory input received by M1. Because it concerns the content of the original prediction signal, the prediction error signal is a metarepresentation. A directive relation (represented with a dashed line) holds between PE and the effects that it is the proper function of M2 to bring about: either updating its priors (inference), or effecting some change in the world (action); either of which should serve to quash future prediction errors.

Footnote 11 (continued): ...of which is ensured by action) (Smith et al., 2022). Since we're telling the story in terms of misrepresentation, we might be subject to a broader set of issues that have been raised for teleosemantics in the past. We leave open whether these problems, if they arise, should be confronted directly, or whether the appropriate response is to switch to the 'true prediction' account. Thanks to an anonymous reviewer for raising this point.

The Content of Prediction Error Signals

Recall that on the predictive processing story, bottom-up processing involves the transfer of prediction error. More specifically, each model in the hierarchy receives error signals from the one below it, adjusts its priors in an attempt to account for the error, and forwards any residual error to the model above it. This is the mechanism of perceptual inference. From a teleosemantic perspective we can treat the lower model in the hierarchy as the sender, the higher model as the receiver, and the prediction error as the signal (see Fig. 4).
The two models are a pair of co-adapted, cooperating devices. The proper function of the receiver-model is to minimise prediction error over the long term and thus maximise its accuracy with respect to the causal-probabilistic structure of the world. But attaining these success conditions involves tracking circumstances that the model cannot directly access (long-term error minimisation and states of the world). The sender-model emits an error signal, on which the receiver-model conditions its behaviour. More specifically, the receiver-model will update its parameters in an attempt to account for the incoming error signal. If this process is successful the model increases its accuracy, which has the effect of producing more accurate predictions in the future and hence minimises prediction error over the long term.

Prediction errors appear to be metarepresentational. Their content concerns the content of predictions, in that they say whether and how much a prediction was inaccurate. Shea (2014) has argued that a particular class of signals in the brain, bearing some similarities to the prediction errors discussed here, are metarepresentational. The context of the argument is a particular computational model of neural processing, the actor-critic framework, within which a reward prediction error signal appears (Fig. 5). Shea argues that error signals in this framework are metarepresentational, with their contents being about the inaccuracy of another (first-order) representation.

Fig. 5 Simplified form of the actor-critic framework discussed by Shea (2014, p. 320, Fig. 1). The system employs a decision procedure Π that chooses acts A_i in proportion to their expected payoffs V_i. The actual payoff, r, of an act at the previous timestep is used to update the system's estimates of V_i. This is done by generating a prediction error signal indicating the magnitude of the difference, δ, between the expected reward and the actual reward. The system's representation of the expected reward is updated based on this error and a learning parameter α. We have added a dashed-line box picking out the subsystem that can be generalised to a model-to-model relationship within a generative hierarchy (Fig. 6).
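The update loop described in the caption is easy to state directly. Here is a minimal sketch (learning rate, payoffs, and trial count invented for illustration) of the reward prediction error dynamics in Fig. 5: the error δ = r − V drives the estimate V toward the actual payoff.

```python
alpha = 0.1          # learning parameter
V = 0.0              # expected payoff of lever-pressing
for trial in range(50):
    r = 1.0                      # actual payoff: food after each press
    delta = r - V                # prediction error: actual minus expected
    V = V + alpha * delta        # update the estimate toward the payoff
print(round(V, 3))   # approaches 1.0: the error signal has driven the
                     # first-order estimate of reward toward accuracy
```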
It is worth seeing whether Shea's account applies to prediction error signals in the predictive processing hierarchy, and so it is worth outlining similarities and differences between the hierarchy and the actor-critic framework on which Shea's account is based. First, Shea is making claims about specific signals that have been discovered in the brain. Computational cognitive scientists have established that the actor-critic framework is a good way to understand the dynamics and function of this part of the brain, and so the prediction error signals that appear in that framework are appropriately identified with the brain signals that play the equivalent prediction error role. We, by contrast, are discussing hypothetical prediction error signals that would be found in the brain if the generative hierarchy turns out to be an accurate depiction of brain activity. We don't regard it as settled that the brain contains generative hierarchies but, if it does, we are committed to the claim that the contents of prediction errors are as we describe them here. Second, the actor-critic framework is much simpler than the predictive processing framework. The computations carried out by an actor-critic system are called model-free, in that there is no component representing causal relationships. There is just a point estimate representing the expected reward for a particular behaviour. It is this point estimate whose inaccuracy the prediction error signal indicates. By contrast, the predictive processing hierarchy is decidedly not model-free: it contains models whose purpose is to represent causal-probabilistic features. So the first-order representation whose content the prediction error signal indicates cannot be exactly the same component in the actor-critic framework and in the predictive processing framework. Instead, the prediction error indicates the inaccuracy of the prediction itself, not the model that emitted the prediction.

Although the first-order representation whose content the prediction error signal concerns is the prediction rather than the model that emitted it, a version of Shea's argument in favour of metarepresentational content still goes through. Prima facie, the prediction error signal is metarepresentational. Its content is that the prediction was accurate or inaccurate; more precisely, that the prediction was in error by such-and-such an amount. It is this metarepresentational content that explains why the model updates its priors; when the signal correctly indicates the error in the prediction, the model's updates cause it to produce more accurate predictions in future. In a way, this is a more general case of the actor-critic framework (Fig. 6). In the actor-critic framework, the system keeps track of just one feature of the external world (the expected reward) and emits just one kind of prediction (also the expected reward). In the predictive processing framework, a model keeps track of multiple features of the external world (every causal-probabilistic relationship that model represents) and emits multiple kinds of prediction (anything the creature could encounter that it is that particular model's job to keep track of; i.e. anything at the appropriate level of spatiotemporal grain). Predictive processing systems are multi-tasking actor-critic systems. If we accept Shea's claim of metarepresentational content in the latter, there is no special reason to withhold it from the former.
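Read computationally, the 'multi-tasking' claim amounts to running the same error-driven update over a vector of tracked features rather than a single point estimate. A brief sketch (feature names and values invented for illustration):

```python
import numpy as np

alpha = 0.1
features = ["lever colour", "lever shape", "food after press"]
predicted = np.array([0.2, 0.5, 0.1])   # model's predictions per feature
observed = np.array([0.8, 0.5, 0.9])    # actual sensory evidence per feature

for _ in range(100):
    error = observed - predicted        # one prediction error per feature
    predicted = predicted + alpha * error
print(predicted.round(2))               # converges to the observed values
```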
By conditioning its behaviour on the error signal, the receiver-model is better able to achieve its proper function. As we have seen, according to teleosemantics explaining this success requires positing a relation between the internal signal and an external success-relevant condition. Take the case of a model in a rat's cognitive system whose proper function is to aid lever-pressing. The model adjusts its priors according to the bottom-up error signal. The proper function of the model determines the correspondence the error signal bears to the lever. Importantly, the general content of an error signal will always be the difference between predicted sensory input and actual sensory input. And in this particular case, the content will be the difference between the prediction initially issued by the model regarding expected sensory input caused by the lever and actual sensory input caused by the lever. There is hence a descriptive mapping relation between the prediction error signal and the lever, and a directive mapping relation between the prediction error signal and the external effect of lever-pressing.

Fig. 6 The boxed portion of the actor-critic framework (Fig. 5) is a degenerate kind of predictive processing architecture. The main text leverages Shea's argument to establish the claim that prediction error signals have metarepresentational content. Note that the component types in this figure do not match component types in the generative hierarchy, because the actor-critic framework is a 'model-free' means of using feedback to update representations. That is why the model at level n + 1 here appears in a circle, while the model at level n appears inside a rectangle: the actor-critic framework is cast in terms of representations and linear operations, rather than models and signals.

The Sinister Figure Example: Teleosemantics Version

Let's now run the sinister figure example through our hybrid structural resemblance-teleosemantic account. Initially, when you wake, the sinister figure hypothesis dominates. Prediction error is minimised if that hypothesis is deployed, as it best explains your current sensory input. Models in the system adjust their priors and issue predictions accordingly. Both predictions and prediction errors bear a descriptive relation to the untidy chair, with the indicative content <there is a sinister figure>. Of course, here that content is inaccurate with respect to the world. The sinister figure hypothesis also allows the system to raise new predictions, such as the prediction that turning on the light will reveal the identity of the sinister figure. This will produce corresponding prediction error, which can be minimised if you act in such a way as to bring the prediction about. Predictions (and hence prediction errors) bear a directive relation to the external state of affairs of turning on the light, with the imperative content <turn on light>. Here the system exploits a world-to-mind direction of fit. However, in this case the outcome of turning on the light will generate a mismatch between predicted sensory input and actual sensory input. In order to eliminate this error, a new hypothesis will be raised: the untidy chair hypothesis.
Here the system exploits a mind-to-world direction of fit. The fact that models in the system condition their behaviour on the error signal here indicates that there is a representational relation between the error signal and the success-relevant external circumstances; that is, the untidy chair. The new hypothesis produces predictions bearing a descriptive relation to the untidy chair, with the indicative content <there is an untidy chair>.

This illustrates the neat way in which predictive processing and teleosemantics mesh. By minimising error, predictive brains are able to increase the accuracy of their models, despite having no direct link to the causes of their sensory inputs. Via appeal to success-relevant circumstances, teleosemantics gives us an account of how the flow of predictions and error can bear content about the external world; again, despite the brain having no direct contact with those circumstances. The overall picture we are advocating is that generative hierarchies are able to increase their structural resemblance with the world by processing signals with teleosemantic content.

Two Objections

We now consider two important objections to our account. The first is that it seems wrong to treat higher-level models as senders and lower-level models as receivers. The second is that it seems wrong to treat the content of a first-order representation (i.e. a model) as dependent on the content of a meta-representation (i.e. an error signal). We address each in turn.

Intuitively, it seems strange to assign the role of sender to a higher-level model and the role of receiver to a lower-level model. Higher models lie 'deeper' within the cognitive system, further from the sensory surface and thus further from the world which they are supposed to be providing information about. Signals are supposed to provide information about external states of affairs. But how can a model that is physically further away from the world provide a model that is physically closer to the world with information about the world? By contrast, the usual way the sender-receiver framework is applied to cognitive systems treats sensory apparatus as the sender and motor apparatus as the receiver; this makes sense because sensory apparatus has access to worldly information that motor apparatus does not. Our application of the framework to the predictive processing hierarchy seems to get things the wrong way round.
To respond, our application of the sender-receiver framework makes sense when we consider the different information that is stored in models at different levels. Higher models store information that is relevant on longer timescales or that concerns objects and events that are more causally opaque. It is true that they build up this information from the signals that are passed to them from the lower levels. But it need not be true that the predictions they pass back down the hierarchy contain information that those lower levels already possess. For one thing, there could be multiple lower models serving a single higher model, such that the higher model is able to integrate information and generate predictions that no single lower model could have access to. For another, the lower models might simply fail to encode and store information that is nonetheless transmitted further up the hierarchy, such that it is news to them when it comes back in the form of predictions. Consider by way of analogy a housebound analyst who receives letters from servants gathering information from the outside world. If the servants were numerous enough and forgetful enough, eventually the analyst could gather more information (and issue more accurate predictions) than any single servant.

The second objection stems from our characterisation of prediction error signals as metarepresentational. Our picture seems to suggest that the accuracy of a first-order representation (i.e. a model in the hierarchy) is made possible by a metarepresentation (i.e. an error signal). This looks problematic: presumably metarepresentations cannot be prior to the first-order representations they metarepresent. We should instead tell a story on which first-order representations come first and metarepresentations are defined subsequently.
To respond, first note that Shea's account has the same consequence. We characterised predictive processing hierarchies as multi-tasking actor-critic systems, and in both cases a first-order representation is kept attuned to the world by use of an error signal. The use of an error signal to improve the accuracy of a first-order representation does not threaten its status as first-order. There is a difference between how the first-order representation gets its content and how it is kept accurate. So if we can give an account of how the first-order representation gets its content independent of any metarepresentational updating, we will have avoided the problem. And our account is just that: the content of a model derives from its structural resemblance with external affairs. A model is a structural-resemblance representation that does not depend on error signals for its representational status or for its content, though it does utilise error signals to improve its accuracy. One might wonder how a model can gain representational status before the predictive processing hierarchy is 'brought to life', so to speak, with its first bouts of signalling. One possibility is to appeal to innate priors, such that a hierarchy has some amount of in-built structure that very loosely tracks (i.e. structurally resembles) features of the world. Brains are imbued with these in-built first-order representations, which may be vague or inaccurate at the outset, and are then iteratively updated through experience. This is one possible way in which models can be attributed first-order representational content before the predictive processing hierarchy kicks into life; there may be others. The important point is that first-order representations do not depend on metarepresentations for their content or representational status, even if they do depend on them to remain accurate.

Why We Should Issue Pluralist Licences

We have offered a pluralist account of content for predictive processing architectures: models in generative hierarchies get content in virtue of their causal-probabilistic resemblance with the world, while signals get their content in virtue of their etiology. In this section we explore in more detail the motivating reasons for adopting a practice-oriented pluralism.

Practice-Oriented Pluralism

Some may worry about pluralism. Shouldn't we want to give a single overarching account of content in predictive processing architectures? Isn't a unified account preferable to meshing together two different accounts? After all, the claim that content is determined by histories of selection and the claim that content is determined by structural resemblance are very different claims: why think they will play nicely together? Methodological pluralism is not always a good thing, especially if you inherit the problems of both theories.
We think there are good reasons to adopt a pluralist approach to cognitive representations despite these concerns. Here we align with those who express pessimism at the chances of ever finding a single unifying theory of representation via philosophical means alone. Although the prospects for such a theory looked promising in the 1980s, particularly through the work of Fodor, Dretske and Millikan, problems persist. As a result, many feel those projects failed to deliver (Godfrey-Smith, 2004; see also Planer & Godfrey-Smith, 2021; Shea et al., 2017). One reason for this is that cognitive science spans the domains of folk psychology and scientific psychology. This requires, to borrow Wilfrid Sellars' famous terms, going back and forth between the manifest and scientific images. Given such disciplinary complexity, we should expect to see a diversity of accounts of content emerge. Peter Godfrey-Smith puts the point as follows:

Cognitive scientists forge different kinds of hybrid semantic concepts in different circumstances-in response to different theoretical needs, and different ways in which scientific concepts of specificity and folk habits of interpretation interact with each other. Godfrey-Smith (2004, p. 160)

Given this situation, what is the role of philosophers of cognitive science working on content? One answer is that the goal is to use philosophical analysis to distill a core, unifying concept that will cover all cases. However, as above, there are many who worry this project is not achievable. Another answer is as follows: the goal is to describe the range of different concepts at play in cognitive science, and account for their explanatory purchase. On this view, the business of licensing content needs to be sensitive to the variety of representational concepts at play in cognitive science. Pluralism, then, looks unavoidable.

Recent work by Nick Shea builds on this idea. Shea's approach is to look at the way cognitive scientists use notions of representation to successfully explain behaviour. The result of this process is a "varitel" semantics, which combines teleosemantics and structural correspondence (Shea, 2018, Chapter 2). Both offer organisms a relation with external circumstances that they are able to exploit. On Shea's view, pluralism is a commitment of this explanatory strategy:

We may get one theory of content that gives us a good account of the correctness conditions involved in animal signalling, say, and another one for cognitive maps in the rat hippocampus. There is no need to find a single account that covers both. Shea (2018, p. 43)

For both Godfrey-Smith and Shea, exploring pluralist strategies offers the best way forward for those attempting to produce naturalised theories of content. Our account is developed with this general methodological commitment in view. But why is building in an etiological account of the content of signals in generative hierarchies useful? Our answer to this question is that there are, and are likely to be, many cases where doing so can help account for explanatory success in cognitive science. And if predictive processing, as a general theory of cognition, is to be applied to these cases, then building in teleosemantics is an important project. Covering the range of cases that might require teleosemantic treatment is well beyond the scope of this paper. However, below we run through a brief case study in order to illustrate the thinking behind it.
Practice-Oriented Pluralism and Predictive Processing

As we have outlined, on Nick Shea's view philosophical theories of content should be guided by cases of explanatory success in the cognitive sciences (Shea, 2018). And, given that cognitive science deals with such a broad range of cases, it is unsurprising that this process will produce a range of different approaches to content. Here we briefly run through an illustrative case: that of decision making in Rhesus monkeys. However, it is worth noting that Shea offers a wide variety of cases, from neural network models (Shea, 2018, Section 4.3) to animal signalling (Shea, 2018, Section 4.5). It is also important to note what is being claimed by Shea (and ourselves) in these cases. The claim is not that no other account of content might be capable of explaining the results produced in these studies. Rather, the claim is that, when we look to these studies, we find that the type of content used to do the explanatory work is best captured by teleosemantics. To put this another way, the question is not "which theory of content best covers all these cases?"; it is "which theory best accounts for explanatory success in this particular experimental case?". This reflects the practice-oriented approach: the role of philosophy is to describe the representational concepts that are being employed in successful scientific practice.

Teleosemantics is an outcome-oriented theory of content. Shea incorporates this notion into his theory of function, using the term consequence etiology (Shea, 2018, p. 48). Roughly, the idea is that certain processes, such as natural selection and learning, stabilise traits in an organism. Shea's account of function differs from the notion of proper function we've been working with, and the magnitude of that difference depends on the use to which the notions are put. One thing they have in common is that they fit naturally with studies employing reward-based learning paradigms, in particular the research cluster around the neurophysiology of reward. Many studies in this area aim to identify the values and likelihoods of reward functions, where those values represent external circumstances that are good or bad outcomes for the experimental subject. Behaviour stabilises in a subject, such as our lever-pushing rat, because certain signals in the subject's cognitive system start to reliably correlate with specific rewards. In the opening paragraph of his overview on the neurophysiology of reward paradigm, Wolfram Schultz writes:

The functions of rewards are based primarily on their effects on behavior and are less directly governed by the physics and chemistry of input events as in sensory systems. Therefore, the investigation of neural mechanisms underlying reward functions requires behavioral theories that can conceptualize the different effects of rewards on behavior. The scientific investigation of behavioral processes by animal learning theory and economic utility theory has produced a theoretical framework that can help to elucidate the neural correlates for reward functions in learning, goal-directed approach behavior, and decision making under uncertainty. Schultz (2006, p. 87)

It is easy to see why teleosemantics is well-placed to "conceptualize the different effects of reward on behaviour", and, more broadly, why this research program aligns well with a consequence etiology account of function. It gives us a precise way of showing how learning processes in a system can come to represent the utility of beneficial external outcomes.
For instance, in a study presented by Kiani and Shadlen (2009), Rhesus monkeys were given a post-decision wagering task. Subjects were required to make decisions about the overall direction of motion in a dynamic random dot display. The difficulty of this task was specified by the percentage of coherently moving dots and the length of time the display was viewed for. Saccadic eye movement was used to identify the monkey's decision, directed toward either a right or left visual target. Correct decisions were given a liquid reward, while incorrect decisions were not. Finally, the monkeys were given a "sure target"; that is, a target in the centre of the screen that guaranteed a reward, but at approximately 80% of the liquid reward for a correct choice. The thought was that the monkeys would opt for the sure target as the difficulty of the task went up, which in turn would reflect the level of certainty they had in their ability to successfully complete the initial task. Kiani and Shadlen's results supported this hypothesis.

Now, suppose we want to understand this experimental data using a predictive processing framework. We need some way of understanding how the value of an external success-condition (the reward) comes to be represented by internal mechanisms, such that we can explain the behaviour of the subjects, and in particular the way the uncertainty and reward values are balanced. As a teleosemantic treatment of internal signals gives us a consequence etiology account of function, it is well placed to deliver on this explanatory task. More broadly, this shows that, if we adopt the practice-oriented approach, developing a range of theories of content for predictive processing systems is an important task. This is because it gives us the tools to explain the broad range of experimental paradigms and results we find across the cognitive sciences.

Conclusion

Our goals in this paper were twofold. First, we wanted to show how a teleosemantic account of content for prediction and prediction error signals could mesh with a broader causal-probabilistic account of generative hierarchies. We argued this process revealed important similarities between the explanatory motivations and conceptual machinery employed by teleosemantics and predictive processing. Second, we wanted to advocate the virtues of pluralist approaches to representational content. We followed Peter Godfrey-Smith and Nick Shea in maintaining that a single, overarching account of content for cognitive science is unlikely to be successful. Cognitive scientists employ a range of different content-invoking concepts, and philosophers should be developing frameworks that respect this theoretical diversity. We think this is a good reason to issue predictive processing with a pluralist licence for content.
A Plasma Proteomic Signature of Skeletal Muscle Mitochondrial Function

Although mitochondrial dysfunction has been implicated in aging, physical function decline, and several age-related diseases, an accessible and affordable measure of mitochondrial health is still lacking. In this study we identified the proteomic signature of muscular mitochondrial oxidative capacity in plasma. In 165 adults, we analyzed the association between concentrations of plasma proteins, measured using the SOMAscan assay, and skeletal muscle maximal oxidative phosphorylation capacity assessed as post-exercise phosphocreatine recovery time constant (τPCr) by phosphorus magnetic resonance spectroscopy. Out of 1301 proteins analyzed, we identified 87 proteins significantly associated with τPCr, adjusting for age, sex, and phosphocreatine depletion. Sixty proteins were positively correlated with better oxidative capacity, while 27 proteins were correlated with poorer capacity. Specific clusters of plasma proteins were enriched in the following pathways: homeostasis of energy metabolism, proteostasis, response to oxidative stress, and inflammation. The generalizability of these findings would benefit from replication in an independent cohort and in longitudinal analyses.

Introduction

Mitochondrial oxidative phosphorylation is the major source of energy production for all cellular functions [1]. Accordingly, impaired mitochondrial function, one of the hypothetical mechanisms that drive the aging process [2], has been associated with the development of phenotypical and functional manifestations of aging and with age-related diseases [3][4][5]. Mitochondrial oxidative capacity measured in vivo in skeletal muscle declines with aging and is associated with lower walking speed, muscle strength, and physical activity independent of age [6][7][8], as well as with chronic inflammation [9]. Consistent with the decline of oxidative capacity with aging, discovery proteomic studies in skeletal muscle from healthy individuals over a wide age range have shown a substantial decline of mitochondrial proteins with aging, including proteins of electron transport chain (ETC) complexes, enzymes of the Krebs cycle, as well as structural proteins [10]. In a recent study aimed at defining the proteomic signature of mitochondrial oxidative capacity in skeletal muscle, we identified muscle proteins that were differentially represented in individuals with higher and lower oxidative capacity, measured by phosphorus magnetic resonance spectroscopy (31P-MRS) [11]. As expected, proteins overrepresented in muscle with higher oxidative capacity were enriched for pathways connected with mitochondrial metabolism and translation within mitochondria. Unexpectedly, we also found highly significant enrichment for mRNA processing/alternative splicing pathways, although this finding remains unexplained [11]. Although the identification of specific skeletal muscle proteins that are associated with a direct measure of mitochondrial function is important to gain insight into mechanisms of mitochondrial decline, it has limited clinical use because it is invasive and requires muscle biopsy specimens. In this study, we hypothesized that reduced oxidative capacity in skeletal muscle may be reflected by characteristic changes in circulating proteins and, therefore, we searched for a proteomic signature of muscular mitochondrial oxidative capacity in plasma.

Results

Demographic characteristics of 165 study participants are displayed in Table 1.
Participants were 45% female, in the age range of 22-93 years (average 57.7 ± 20), and mostly Caucasian. Out of the 1301 SOMAmers analyzed, we identified 87 proteins significantly (p < 0.01) associated with muscle mitochondrial oxidative capacity (τPCr), adjusting for age, sex, and PCr depletion (Figure 1, Table 2). Sixty proteins were negatively associated with τPCr, and therefore positively associated with a better oxidative capacity, while 27 proteins were associated with a poorer oxidative capacity. The top 10 proteins most strongly associated with τPCr in the first model were endothelial cell-selective adhesion molecule (ESAM), insulin-like growth factor binding protein 3 (IGFBP-3), contactin 2 (CNTN2), P-selectin (coded by the SELP gene), proto-oncogene tyrosine-protein kinase Fyn (FYN), lactoperoxidase (PERL), dermatopontin (DERM), C-X-C motif chemokine ligand 16 (CXCL16), Ras-related C3 botulinum toxin substrate 3 (RAC3), and tyrosine-protein kinase Lyn (LYN) (Table 3, Model 1). After adding race and BMI to the multivariable model, 62 proteins were significantly associated with τPCr. The top 10 significant proteins were similar to those resulting from Model 1, with the new appearance of kallikrein 11 (coded by the KLK11 gene), follistatin-related protein 3 (FSTL3), cathepsin F (CATF), and IGFBP-6 (Table 3, Model 2).

Figure 1. Volcano plot of the association between protein concentrations (gene ID in the plot) and mitochondrial oxidative capacity (τPCr), adjusted for age, sex, and phosphocreatine depletion. Negative beta: proteins positively associated with a better oxidative capacity; positive beta: proteins associated with a poorer oxidative capacity.

When analyzing the patterns of functional enrichment, many gene ontology (GO) terms were significantly enriched among the 87 proteins significantly correlated with mitochondrial oxidative capacity (Table 4). The 87 significant proteins were enriched for genes in the oxidative stress, inflammation, metabolism regulation, and proteostasis pathways (Figure 2).

Table 3. Top 10 most significant SOMAmers associated with τPCr, an inverse measure of mitochondrial oxidative capacity, adjusting for age, sex, and amount of phosphocreatine depletion (Model 1); top 10 most significant SOMAmers associated with τPCr adjusting for age, sex, phosphocreatine depletion, BMI, and race (Model 2).

Discussion

In this study we characterized the plasma proteomic profile associated with skeletal muscle oxidative capacity. Using the 1.3k SOMAscan assay, we found that 87 proteins were associated with mitochondrial oxidative capacity, as measured by 31P-MRS on the quadriceps muscle. Three of these proteins (heat shock protein family D member 1, HSP 60; the serine protease HTRA2; and cyclophilin F, coded by the PPIF gene) are defined as mitochondrial proteins, and many of the other proteins identified are relevant to mitochondrial functions, or to processes that have been linked with impaired mitochondria. This proteomic signature is reflective of the deranged metabolic mechanisms that are either causes or consequences of impaired mitochondrial function, independent of chronological age. Importantly, although a few proteins found in our analysis are specific to either the muscle (e.g., myostatin) or the blood tissues, most of the proteins are ubiquitous and can be detected in different tissues. From the available data, it is difficult to identify which tissues the plasma proteins represent.
Whether the signature identified is a direct marker of muscle function or a marker of a more generalized energetic alteration reflected in plasma proteins cannot be determined. The proteins most relevant to the association with oxidative capacity are summarized below based on our bioinformatic analyses of the enriched processes.

Energy Metabolism

Multiple proteins involved in the regulation of metabolism showed a strong association with muscle oxidative capacity. IGFBP-3 and IGFBP-6 are carriers of insulin-like growth factor-1 (IGF-1), the primary effector of growth hormone. In addition to its insulin-like functions, IGF-1 stimulates cell growth and proliferation in most tissues of the body and inhibits apoptosis [12]. IGFBP-3 and IGFBP-6 prolong the half-life of IGF-1 and regulate the growth-promoting effects of IGF-1, altering its interaction with cell surface receptors. IGFBP-3 also exhibits IGF-independent antiproliferative and apoptotic effects. Several other proteins significantly associated with τPCr are involved in apoptosis, including caspase 3 and the protease HTRA2. This finding is particularly interesting because increased apoptosis signaling has been implicated in the pathogenesis of age-related sarcopenia [13]. Decreased plasma levels of the adipose-derived hormone leptin were found to be associated with better oxidative capacity. Leptin is one of the main regulators of feeding and energy balance, connecting changes in energy stores to a set of adaptive physiologic responses. Leptin regulates numerous physiologic processes, including feeding behavior, metabolism, thermogenesis, immune function, and the neuroendocrine axis [14]. Increased leptin has been related to obesity, inflammation, and hypoxia, and has been implicated in ROS generation [15,16]. Any of these mechanisms could underlie the association shown by our data. Plasma concentrations of the AMP-activated protein kinase (AMPK) α2β2γ1 complex were positively associated with better mitochondrial oxidative capacity in this analysis. AMPK is the master sensor of energy metabolism and its activity is principally modulated by the AMP/ATP ratio and, to a lesser extent, by the ADP/ATP ratio, which are direct biomarkers of the status of energy availability. AMPK responds to reduced energy availability by downregulating activities that are energy demanding, such as protein and lipid synthesis and the cell cycle, and improves energy production through increased catabolism [17]. Moreover, AMPK modulates fundamental mitochondrial processes such as biogenesis, fission, and autophagy, and promotes mitochondrial health [17]. Aldolase A and pyruvate kinase (PKM2) are two key glycolytic enzymes that were found positively associated with oxidative capacity. Other than allowing production of ATP through glycolysis, aldolase A also modulates the myocyte's shape and contractility, and its absence has been related to metabolic myopathy [18,19]. Aldolase A is prominently expressed in skeletal muscle. Nicotinamide phosphoribosyltransferase (PBEF, coded by NAMPT) catalyzes the rate-limiting reaction of the mammalian nicotinamide adenine dinucleotide (NAD+) salvage pathway. NAD+ is an essential cofactor regulating several metabolic processes such as glycolysis, fatty acid oxidation, the tricarboxylic acid cycle, and oxidative phosphorylation, but also mitochondrial biogenesis [20]. NAD+ levels have been implicated in aging, age-related diseases, and longevity [21].
PBEF is released by multiple cell types, including myocytes, and it was associated with lower oxidative capacity in our analysis. Arguably, this could reflect either spilling from damaged muscle cells or compensation for a dietary deficiency of NAD+ precursors. We also found enrichment for the Kit signaling pathway, represented by proteins such as FYN, VAV, LYN, protein kinase C alpha (PKC-A), and proto-oncogene tyrosine-protein kinase Src (SRC). Although it is best known for its role in hematopoietic stem cell differentiation, Kit plays an important role in the regulation of mitochondrial function and energy expenditure [22]. Kit promotes the expression of the peroxisome proliferator-activated receptor-γ (PPARγ) coactivator-1α (PGC-1α), the master regulator of mitochondrial biogenesis and function [23]. PGC-1α has been found to specifically promote mitochondrial biogenesis in skeletal muscle, and its deficiency has been correlated with metabolic derangements and muscle dysfunction [24]. Another significantly enriched pathway was that of the advanced glycation end products (AGEs). AGEs are a heterogeneous group of bioactive molecules formed by the nonenzymatic glycation of proteins, lipids, and nucleic acids. AGEs accumulate with aging in several tissues, contribute to oxidative stress and chronic inflammation, and are implicated in the pathogenesis of cardiovascular disease and chronic kidney disease [25]. Proteostasis In this analysis, we found that the plasma levels of several proteins implicated in proteostasis differed significantly across mitochondrial function. Heat shock protein 27 (HSP 27) is a low-molecular-weight chaperone that maintains denatured proteins in a folding-competent state and, in skeletal muscle, plays an important role in stress resistance and actin organization [26]. HSP 60 and cyclophilin F participate in mitochondrial import and the correct folding of proteins. In addition, cyclophilin F is a major component of the mitochondrial permeability transition pore (MPTP), which is highly involved in connecting mitochondrial metabolism and apoptosis [27]. Cyclophilin D, coded by PPID, is another enzyme that assists and accelerates the correct folding of proteins. HSP 27, HSP 60, cyclophilin F, and cyclophilin D plasma concentrations were positively associated with better oxidative capacity. DnaJ homolog subfamily B member 1 (DNJB1) interacts with HSP 70 and can stimulate its ATPase activity, facilitating ATP hydrolysis and protein folding. Previous studies found that a decreased HSP 70 response was associated with age-related functional impairments in skeletal muscle [28], and that overexpression of HSP 70 in transgenic mice conveyed protection against age-related dysfunction [29], supporting the concept of chaperones as essential molecules for restoring normal cell function after an insult [30]. DNJB1 levels were associated with better mitochondrial oxidative capacity. Interestingly, levels of two proteins involved in protein degradation and turnover were associated with poorer oxidative capacity: cathepsin F, a major component of the lysosomal system, and ubiquitin-conjugating enzyme E2 G2 (UB2G2), which targets abnormal proteins and catalyzes the attachment of ubiquitin. Inflammation and Response to Reactive Oxygen Species A strong connection between mitochondrial impairment and chronic inflammation has been gaining increasing attention [31].
Dysfunctional mitochondria produce an excessive amount of reactive oxygen species (ROS), which trigger inflammation both directly and through oxidative damage to proteins, lipids, and nucleic acids [32]. Furthermore, products of damaged mitochondria released into the extracellular space act as damage-associated molecular pattern (DAMP) agents, activating the immune response [32]. Among the proteins associated with poorer mitochondrial function in our analysis, many were markers of inflammation. Proteins involved in cytokine signaling, chemotaxis, and the response to oxidative stress showed a marked prevalence in the relevant clusters identified. Several proteins represented either components or activators of the mitogen-activated protein kinase (MAPK) signal transduction pathway, such as MAPK14, MAPK2, tyrosine-protein kinase Lyn (LYN), sphingosine kinase 1 (Q9NYA1), PKC-A, PKC-B, SLAM family member 5 (SLAF5), and fibroblast growth factor receptor 4 (FGFR4). The MAPK signaling cascade regulates survival and death, proliferation, and differentiation of cells, and is an important activator of the inflammatory response. Strict control is therefore crucial, and its disruption has been linked to the development of many diseases [33]. Many proinflammatory molecules have been identified among the members of the senescence-associated secretory phenotype (SASP). The SASP is the secretome of senescent cells and contains hundreds of compounds, some of which have not yet been identified [34]. There is some evidence that the SASP can induce senescence in surrounding cells and cause damage accumulation both in cells and in the intercellular matrix, events that may contribute to the phenotypes of aging as well as to many chronic diseases [35,36]. It is well known that the energy crisis caused by mitochondrial dysfunction can induce cellular senescence, so the finding that common plasma biomarkers of senescence are dysregulated according to mitochondrial function is not entirely surprising [37]. Growth Differentiation Factor 15 Previous studies have proposed growth differentiation factor 15 (GDF15) as a biomarker of mitochondrial dysfunction in aging and several age-related diseases [37]. Increased blood levels of GDF15 have been observed in aging and in mitochondrial disease, and this protein has been related to cardiovascular and brain disease [38,39]. GDF15 has also been identified among the molecules expressed by senescent cells that constitute the SASP [34]. Although GDF15 did not appear among the most significant proteins in our analysis, it showed an association (p = 0.026) with τPCr (Model 1); as expected, poor oxidative capacity was associated with higher levels of GDF15 (β coefficient = 0.011). Participants This study was conducted in 165 community-dwelling volunteers participating in the Baltimore Longitudinal Study of Aging (BLSA, N = 76) and the Genetic and Epigenetic Signatures of Translational Aging Laboratory Testing (GESTALT, N = 89) studies. The BLSA is a prospective open cohort study that has continuously enrolled participants aged 20 and older since 1958. The GESTALT study started in April 2015, aimed at discovering new and sensitive molecular biomarkers of aging in different cell types.
Volunteers are eligible to participate in BLSA and GESTALT if they meet strict inclusion criteria: participants must be free of major pathologies (with the exception of controlled hypertension) as well as of functional and cognitive impairments at enrollment, and they are followed for life regardless of changes in health and functional status (Supplementary Appendix A). This analysis was performed on samples collected during visits in which participants met the inclusion criteria. All assessments, which took place at the Clinical Research Unit of the Intramural Research Program of the National Institute on Aging, National Institutes of Health (NIH), during a 2.5–3.5-day visit, were performed by certified nurse practitioners and certified technicians according to standardized procedures. The protocols for both studies were approved by the NIH Intramural Institutional Review Board (BLSA (03AG0325) and GESTALT (15AG0063) were approved on 12 May 2020). After receiving detailed descriptions of the procedures at every visit, all subjects provided written informed consent. Demographic and health characteristics were assessed either through self-report questionnaires or using standard criteria and algorithms [40]. Body weight was measured in kilograms using a calibrated scale to the nearest 0.1 kg. Body height was measured in centimeters by a stadiometer to the nearest 0.1 cm [41]. Body mass index (BMI) was calculated by dividing body weight by the square of height in meters. Proteomic Assessment Plasma proteins were measured using overnight fasted plasma that was collected at a resting state and subsequently stored at −80 °C. Discovery proteomics was performed using the 1.3k SOMAscan Assay (SomaLogic, Inc.; Boulder, CO, USA) at the Trans-NIH Center for Human Immunology, Autoimmunity, and Inflammation (CHI), National Institute of Allergy and Infectious Disease, National Institutes of Health (Bethesda, MD, USA). SOMAmer reagents are individually generated via an iterative process called SELEX (Systematic Evolution of Ligands by EXponential enrichment), which consists of affinity selection cycles aimed at increasing the specificity and avidity of oligomers to a target protein epitope. As a result, SOMAmer reagents are designed to be highly specific and sensitive. For a quantitative assessment of variability in the SOMAscan assay, see [42,43]. A discussion of caveats and limitations is provided in [44]. Of the 1322 SOMAmer reagents included in this version of the kit, 12 hybridization controls, four viral proteins (HPV type 16, HPV type 18, isolate BEN, isolate LW123), and five SOMAmers that were reported to be nonspecific (P05186 (ALPL), P09871 (C1S), Q14126 (DSG2), Q93038 (TNFRSF25), Q9NQC3 (RTN4)) were removed, leaving 1301 SOMAmer reagents for the final analysis. There are 46 SOMAmer reagents that are documented to target multicomplex proteins of two or more unique proteins (UniProt IDs). Conversely, there are 49 UniProt IDs that are measured by more than one SOMAmer reagent. The full list of SOMAmer reagents and their protein targets is provided as Supplementary Information. Thus, the 1301 SOMAmer reagents collectively target 1297 UniProt IDs. Of note, there are four proteins in the final protein panel that are rat homologues (P05413 (FABP3), P48788 (TNNI2), P19429 (TNNI3), P01160 (NPPA)) of human proteins. The experimental process for proteomic assessment and data normalization has been previously described [42]. The data reported are SOMAmer reagent abundance in relative fluorescence units (RFU).
The abundance of the SOMAmer reagent represents a surrogate of protein concentration in the plasma sample. Data normalization was conducted in three stages. First, hybridization control normalization removed individual sample variance on the basis of signal differences between microarrays on the Agilent scanner. Second, median signal normalization removed intersample differences within a plate due to technical differences such as pipetting variation. Last, calibration normalization removed variance across assay runs. Furthermore, an additional interplate normalization process utilized a CHI calibrator of pooled plasma from healthy subjects, allowing normalization across all experiments conducted at the CHI laboratory [42]. An interactive Shiny web tool was used during the CHI QC process [45]. Phosphorus Magnetic Resonance Spectroscopy Using a 3T MR scanner (Achieva, Philips Healthcare, Andover, MA, USA), in vivo 31P-MRS measurements of the concentrations of the phosphorus-containing metabolites phosphocreatine (PCr), inorganic phosphate (Pi), and ATP were obtained from the vastus lateralis muscle of the left thigh, following a standardized protocol described previously [7,46]. Participants were positioned supine on the bed of the scanner, with a foam wedge placed underneath the knee to induce slight flexion, and with ankles, thighs, and hips secured with straps to reduce movement during exercise. Participants were required to perform a ballistic knee extension exercise inside the magnet with their left leg, while resistance was added by foam pads placed above the left leg in order to increase the intensity of the exercise. A series of pulse-acquire 31P spectra was obtained before, during, and after the exercise, which had an average duration of 30 s, with a repetition time of 1.5 s, using a 10-cm 31P-tuned surface coil (PulseTeq, Surrey, UK) fastened above the left thigh. An example of the acquired spectra is shown in Figure 3 [7]. Signals were averaged over four successive acquisitions to enhance the signal-to-noise ratio, so that the data consisted of 75 spectra obtained with a temporal resolution of 6 s. The duration of exercise was standardized by consistently requiring a depletion in PCr of 50–67% relative to initial baseline values, in order to standardize the measure of oxidative function across subjects and to provide sufficient dynamic range to fit the PCr recovery curve. Whenever PCr depletion did not reach the threshold of 33%, the collected data were excluded from further analysis. If intramuscular acidosis, defined as intracellular pH lower than 6.8, was detected at the end of the exercise, the test was repeated at a lower intensity after waiting for the participant to return to a resting condition [47]. The pH was determined according to the chemical shift of Pi relative to PCr [48]. Spectra were processed with jMRUI software (version 5.2, MRUI Consortium), and metabolite concentrations were calculated by nonlinear least squares fitting implemented through AMARES [49,50].
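To make the recovery analysis described in the next paragraph concrete, here is a minimal sketch using synthetic data. The monoexponential model is the standard form used for PCr recovery; the 6-s sampling matches the acquisition above, but all numerical values are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def pcr_recovery(t, pcr0, delta_pcr, tau):
    """Monoexponential recovery: PCr(t) = PCr(0) + dPCr * (1 - exp(-t/tau))."""
    return pcr0 + delta_pcr * (1.0 - np.exp(-t / tau))

# Synthetic recovery curve: 6-s temporal resolution, ~5 min of recovery data.
t = np.arange(0.0, 300.0, 6.0)
rng = np.random.default_rng(0)
pcr = pcr_recovery(t, 0.45, 0.55, 45.0) + rng.normal(0.0, 0.02, t.size)

p0 = [pcr[0], pcr[-1] - pcr[0], 30.0]        # crude starting values
(pcr0, dpcr, tau_pcr), _ = curve_fit(pcr_recovery, t, pcr, p0=p0)

# ATPmax proxy: [PCr_baseline] * (1 / tau_PCr), with baseline = PCr(0) + dPCr.
atp_max = (pcr0 + dpcr) / tau_pcr
print(f"tau_PCr = {tau_pcr:.1f} s, ATPmax proxy = {atp_max:.4f} (a.u./s)")
```

A shorter fitted τPCr corresponds to faster recovery and therefore higher oxidative capacity, which is why τPCr enters the association analyses as an inverse measure.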
Post-exercise PCr recovery rates were calculated by fitting the time-dependent changes in PCr peak area to the monoexponential recovery function PCr(t) = PCr(0) + ∆PCr · (1 − e^(−t/τPCr)), where PCr(0) is the end-of-exercise PCr signal area (i.e., the PCr signal area at the beginning of the recovery period), ∆PCr is the decrease in signal area from its pre-exercise baseline value (averaged over the multiple baseline scans) to PCr(0) resulting from the in-magnet exercise, and τPCr is the PCr exponential recovery time constant, measured in seconds [7]. This time constant is inversely proportional to the maximum in vivo oxidative capacity of skeletal muscle, with longer τPCr reflecting slower recovery and therefore lower oxidative capacity [51]. Since the energy demands during post-exercise PCr resynthesis are minimal, 1/τPCr reflects the maximum mitochondrial ATP production rate [7,52–54]. ATPmax was finally estimated as [PCr_baseline] × (1/τPCr) [55]. Statistical Analysis Protein RFU values were converted to z-scores after natural log-transformation. The association of each protein with mitochondrial oxidative capacity (τPCr) was assessed using linear regression models adjusted for age, sex, and amount of PCr depletion. A second model added further adjustments for race (white, black, other) and BMI. The analyses were performed using RStudio (v. 1.2.1335). A nominal p value of 0.01 was considered statistically significant. Enrichment Analysis To evaluate whether specific biological processes or molecular functions were enriched among the proteins significantly correlated with τPCr, a gene enrichment analysis was run on the 87 plasma proteins significantly associated with τPCr. For this purpose, the bioinformatic tool ClueGO was used [56]. ClueGO permitted the identification of functional gene ontologies (GO) and pathways associated with the most significant proteins in our analysis. The enriched pathways were visualized with the Cytoscape (v. 3.8.0) plug-in and displayed as a network, in which each pathway was represented as a node and edges connected similar pathways; pathways were functionally grouped, and the similarity between pathways was determined by kappa statistics [56]. Enrichment significance was represented by node size, and node colors were used to differentiate pathway clusters. The relevant pathways were filtered for >4 genes after Bonferroni correction.
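A minimal sketch of the per-protein association model described in the Statistical Analysis above follows. The data frame and column names are hypothetical; the original analysis was run in R, and the text does not specify whether the protein z-score or τPCr is the dependent variable, so treating the protein as the outcome is an assumption of this sketch.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def protein_associations(df: pd.DataFrame, soma_cols: list) -> pd.DataFrame:
    """One linear model per SOMAmer: z(protein) ~ tau_pcr + age + sex + pcr_depletion."""
    rows = []
    for col in soma_cols:
        z = np.log(df[col])                        # natural-log transform of RFU ...
        work = df[["tau_pcr", "age", "sex", "pcr_depletion"]].copy()
        work["z"] = (z - z.mean()) / z.std()       # ... then convert to a z-score
        fit = smf.ols("z ~ tau_pcr + age + sex + pcr_depletion", data=work).fit()
        rows.append((col, fit.params["tau_pcr"], fit.pvalues["tau_pcr"]))
    res = pd.DataFrame(rows, columns=["somamer", "beta", "p"])
    return res[res["p"] < 0.01].sort_values("p")   # nominal significance threshold
```

With this orientation, a negative beta means higher protein levels go with shorter τPCr, i.e. with better oxidative capacity, matching the sign convention of the volcano plot.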
Limitations and Conclusions The SOMAscan platform used in this study assesses 1301 proteins, which represent only a fraction of the proteins that are potentially important for aging or mitochondrial function. Furthermore, the SOMAscan technology does not provide an absolute measure of protein abundance, which makes it difficult to compare associations with mitochondrial function across proteins. In addition, while aptamers are designed to detect proteins in their native conformation, there is also a possibility of cross-reactivity between similar proteins, which can limit the accuracy of SOMAscan for proteins with high sequence homology [57]. Importantly, the list of proteins identified does not completely overlap with the proteins that have been found to be associated with aging. This could be because, while mitochondrial oxidative capacity declines with aging, the rate of decline is highly heterogeneous across individuals, probably because of the effects of genetic heterogeneity and of subclinical and clinical pathology. Due to the limited sample size and the number of proteins studied, we could not determine an exhaustive proteomic signature or fully understand which biological pathways are shared between aging and mitochondrial function. The inclusion criteria of BLSA and GESTALT ensure that study participants are exceptionally healthy, and therefore our findings may not be generalizable to populations affected by substantial morbidity or disability. In addition, the cross-sectional nature of our analysis makes it impossible to discriminate whether the proteins associated with differential mitochondrial function in this study represent causes or consequences. Further longitudinal studies should better disentangle the mechanisms underlying the associations identified. Finally, although 31P-MRS has long been considered able to provide a measure of mitochondrial oxidative capacity generalizable to many tissues, the measure used in this study is specific to the quadriceps muscle, and the energetic status of this muscle may not be representative of the energetic status of other muscle groups or tissues. Hence, the interference of other tissues may have reduced the signal-to-noise ratio, weakening the results of this study. In conclusion, mitochondrial oxidative capacity of skeletal muscle was associated with specific clusters of plasma proteins, mainly representing the following pathways: homeostasis of energy metabolism, protein turnover, and inflammation. These findings need to be replicated in an independent population, possibly with longitudinal data and alternative measurements of mitochondrial function, before the results can be used to develop a clinical tool to assess mitochondrial function using the blood proteome.
Lepton-flavour non-universality of $\bar{B}\to D^*\ell \bar\nu$ angular distributions in and beyond the Standard Model We analyze in detail the angular distributions in $\bar{B}\to D^*\ell \bar\nu$ decays, with a focus on lepton-flavour non-universality. We investigate the minimal number of angular observables that fully describes current and upcoming datasets, and explore their sensitivity to physics beyond the Standard Model (BSM) in the most general weak effective theory. We apply our findings to the current datasets, extract the non-redundant set of angular observables from the data, and compare to precise SM predictions that include lepton-flavour universality violating mass effects. Our analysis shows that the current presentation of the experimental data is not ideal and prohibits the extraction of the full set of relevant BSM parameters, since the number of independent angular observables that can be inferred from data is limited to only four. We uncover a $\sim4\sigma$ tension between data and predictions that is hidden in the redundant presentation of the Belle 2018 data on $\bar{B}\to D^*\ell \bar\nu$ decays. This tension specifically involves observables that probe $e-\mu$ lepton-flavour universality. However, we find inconsistencies in these data, which renders results based on it suspicious. Nevertheless, we discuss which generic BSM scenarios could explain the tension, in the case that the inconsistencies do not affect the data materially. Our findings highlight that $e-\mu$ non-universality in the SM, introduced by the finite muon mass, is already significant in a subset of angular observables with respect to the experimental precision. Improved form-factor determinations: There has been significant progress in the theoretical determination of hadronic B̄ → D^(*) form factors, both from lattice QCD computations [6–9] and from light-cone sum rules [10]. These determinations allow for precise predictions of the complete set of form factors in B̄ → D*ℓν̄ in the whole phase space [11,12]. These predictions use the heavy-quark expansion and account for contributions up to and including O(1/m_c²). They are a prerequisite for a general BSM analysis of these modes. Impending progress in experimental and theoretical precision: Both the experimental and the theoretical precision are expected to improve significantly: the ongoing Belle II and LHCb upgrade experiments are bound to deliver B̄ → D^(*)ℓν̄ results based on multiples of the current datasets [13–15], and updated lattice QCD results for several B̄ → D* form factors beyond zero recoil are upcoming [16–18]; see also the discussions in Refs. [19,20]. This renders the discussion of presently negligible effects important for the full phenomenological exploitation of the upcoming experimental and theoretical results. The discussions resulting from the first two items significantly improve our understanding of these modes and of their sensitivity to the adopted form-factor parametrization. Recent phenomenological analyses have also shown that the V_cb puzzle is significantly reduced, albeit not yet fully resolved [11,12,21–28]. We pose the following questions that affect existing and future angular analyses of B̄ → D*ℓν̄ data: 1. What is the amount of LFU violation in the SM induced by the muon mass? Is the muon mass still negligible given the achieved experimental and theoretical precision? 2.
What amount of information can be extracted from the available single-differential distributions in comparison to a fully-differential angular analysis of B̄ → D*ℓν̄? Is it possible to increase the sensitivity to BSM physics with available data by modifying the analysis strategy? 3. What are the limits on BSM physics from existing B̄ → D*ℓν̄ data? Which effective operators could resolve a potential tension with the SM, and what would be their implications for so far unmeasured observables? In order to answer these questions, we proceed as follows: We begin by describing the general properties of the B̄ → D*ℓν̄ angular distribution and the BSM physics reach of the angular observables arising from this distribution in Section II. In Section III we prepare a full angular analysis on the basis of the Belle data published in Ref. [3]. In doing so, we identify two obstacles to the full use of these data. In Section IV we carry out a fit of the full angular distribution to the Belle data and discuss the compatibility with SM predictions. In light of an observed tension, we further discuss possible BSM interpretations of our results. We conclude in Section V. II. FULL ANGULAR DISTRIBUTION AND ITS BSM REACH The four-fold differential distribution of B̄ → D*(→ Dπ)ℓν̄ decays constitutes a powerful tool for assessing SM as well as BSM physics. It is given as d⁴Γ^(ℓ)/(dq² dcos θ_ℓ dcos θ_D dχ) = (3/8π) Σ_i J_i^(ℓ)(q²) f_i(cos θ_ℓ, cos θ_D, χ) (1). Assuming a purely P-wave Dπ final state, this distribution is fully described by twelve angular observables J_i^(ℓ) and their respective angular coefficient functions f_i. The dependence of the functions f_i on the three angles cos θ_ℓ, cos θ_D, and χ, given in Eq. (A1) in Appendix A, is lepton-flavour universal and completely determined by conservation of angular momentum. The angular observables J_i^(ℓ) depend on the momentum transfer q², or equivalently the hadronic recoil w. Their calculation involves the lepton-flavour-universal hadronic form factors, as well as the short-distance coefficients of the low-energy effective theory. The latter encode short-distance SM effects (which are again lepton-flavour universal) as well as potential BSM effects (which are in general non-universal). These dependencies are listed in Table I. Additional sources of lepton-flavour non-universality are known kinematic phase-space effects ∼ m_ℓ/√q², which are most pronounced for ℓ = τ. Under the assumption that the short-distance behaviour corresponds to the SM expectation, the angular observables J_i^(ℓ) thus differ between lepton flavours only through these kinematic lepton-mass effects. The complete dependence of the angular distribution on BSM contributions in terms of the BSM couplings has been given for the first time in Ref. [29], see also Ref. [30], with previous partial results throughout the literature [31–37]. We use the conventions and notation provided in Appendix A. The sensitivity to various BSM couplings and to lepton-mass effects has been studied in detail [38] based on helicity amplitudes. Here we would like to address properties that have not been mentioned previously, or that are particularly important for our work. An important observation in charged-current semileptonic decays is that, to extremely good approximation, no CP-conserving scattering phases appear in the ⟨J_i^(ℓ)⟩. Here the notation ⟨· · ·⟩ denotes integration over the full range of the dilepton invariant mass as defined in Eq. (A2). The experimental determination of the fully differential rate is rather involved. Many analyses therefore present only results for the partially or fully integrated rate, typically CP-averaged.
Doing so simplifies the experimental analysis, but the sensitivity to some of the angular observables is lost, which can render the determination of some parameters of interest impossible. The two recent Belle analyses, for instance [2,3], provide binned CP-averaged measurements of the four single-differential distributions in w, cos θ_ℓ, cos θ_D, and χ (Eqs. (3)–(6)), where Γ^(ℓ) denotes the CP-averaged decay rate. In particular, in Ref. [3] the authors separate the data by the light lepton flavours ℓ = e and ℓ = µ. The three CP-averaged single-angular distributions depend on only five out of the twelve angular observables defined in Eq. (1). Out of these five observables, the CP-averaged S_9^(ℓ) vanishes independently of the BSM scenario, as discussed above Eq. (2), and is thus not relevant for our analysis. This leaves the D*-longitudinal polarization fraction F_L^(ℓ), the forward–backward asymmetry A_FB^(ℓ), S_3^(ℓ), and F̃_L^(ℓ); in the SM, F_L^(ℓ) and F̃_L^(ℓ) differ by lepton-mass suppressed terms only. In a generic BSM scenario, the two observables can further differ due to contributions from pseudoscalar and tensor operators, see Table I. For more details see Appendix A. The presentation of the data in terms of single-differential distributions implies that all angular observables are integrated over the full q² range. By binning in q², the data would provide more information about the BSM couplings through the q² shape of the angular observables. In particular, the binned angular observables yield access to more, and independent, bilinear combinations of the BSM couplings than the q²-integrated ones do. Hence, binning the angular observables constitutes a powerful tool to discriminate between BSM scenarios, as discussed in more detail below. The CP asymmetries of the single-differential rates Eqs. (3)–(5) vanish independently of the BSM scenario. This can be used to validate the experimental analyses. The CP asymmetry of the χ-dependent rate in Eq. (6) is fully described by the angular observable A_9^(ℓ). A measurement of this CP asymmetry could be accomplished with existing datasets and would provide important information about potential CP-violating BSM effects. (Footnote 1: Such CP-conserving phases are strongly suppressed in B̄ → D*ℓν̄ and can arise, e.g., at the level of dimension eight in the low-energy EFT or due to radiative QED corrections.) Table I. The dependence of angular observables on combinations of Wilson coefficients. A check-mark entry denotes the presence of this combination. An entry of mⁿ denotes the presence of this term, but with kinematic lepton-mass suppression ∝ (m_ℓ/√q²)ⁿ (n = 1, 2). The "num(·)" indicates that only the dependence of the numerator of this observable is given. The V^a_i have been introduced in Ref. [29]. A. Parametrization of BSM Physics BSM physics in B̄ → D*ℓν̄ decays has been widely investigated, usually based on the assumption of three light left-handed neutrino flavours below the electroweak scale. The corresponding most general low-energy effective theory at dimension six [39] can be written as [40] L_eff = −(4 G_F/√2) V_cb Σ_{ℓ,ℓ'} [ C_{V_L}^{ℓℓ'} O_{V_L}^{ℓℓ'} + C_{V_R}^{ℓℓ'} O_{V_R}^{ℓℓ'} + C_{S_L}^{ℓℓ'} O_{S_L}^{ℓℓ'} + C_{S_R}^{ℓℓ'} O_{S_R}^{ℓℓ'} + C_T^{ℓℓ'} O_T^{ℓℓ'} ] + h.c. (7). Here the operators are constructed out of SM fermion fields and read O_{V_{L(R)}}^{ℓℓ'} = (c̄ γ^µ P_{L(R)} b)(ℓ̄ γ_µ P_L ν_{ℓ'}), O_{S_{L(R)}}^{ℓℓ'} = (c̄ P_{L(R)} b)(ℓ̄ P_L ν_{ℓ'}), and O_T^{ℓℓ'} = (c̄ σ^{µν} b)(ℓ̄ σ_{µν} P_L ν_{ℓ'}). They account for lepton-flavour violation (LFV) by ℓ' ≠ ℓ. The observables in B̄ → D*ℓν̄ depend only on four combinations of Wilson coefficients, C_V = C_{V_R} + C_{V_L}, C_A = C_{V_R} − C_{V_L}, and C_P = C_{S_R} − C_{S_L}, together with C_T, whereas the combination C_S = C_{S_R} + C_{S_L} enters only in B̄ → Dℓν̄. Since the neutrino flavour is not detectable, it must be summed over in every observable. We determine the minimal number of parameters and their ranges necessary to parametrize these BSM coefficients for different cases.
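Before proceeding to the counting of BSM parameters, it is useful to make the structure of the single-differential rates concrete. A commonly used form of the cos θ_ℓ projection is the following sketch, reconstructed from the definitions above; the normalization conventions may differ from those of the paper's Eq. (4):

\[
\frac{1}{\Gamma^{(\ell)}}\,\frac{\mathrm{d}\Gamma^{(\ell)}}{\mathrm{d}\cos\theta_\ell}
= \frac{3}{4}\,\widetilde F_L^{(\ell)}\left(1-\cos^2\theta_\ell\right)
+ \frac{3}{8}\left(1-\widetilde F_L^{(\ell)}\right)\left(1+\cos^2\theta_\ell\right)
+ A_{\mathrm{FB}}^{(\ell)}\,\cos\theta_\ell .
\]

The distribution integrates to unity, the cos θ_ℓ-odd term isolates A_FB^(ℓ), and the curvature determines F̃_L^(ℓ) — consistent with the statement below that this distribution contains only two angular observables.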
Starting from the lepton-flavour conserving case, Eq. (7) contains five complex parameters C_i^{ℓℓ} ≡ C_i^ℓ per charged-lepton species ℓ. In the context of BSM analyses of B̄ → D*ℓν̄, the fact that matrix elements of the scalar c̄b current vanish implies that one can maximally determine four linear combinations out of the five Wilson coefficients. These four complex coefficients can be parametrized by seven real parameters, since an overall phase is unobservable, i.e. all observables are invariant under a joint phase rotation C_i^ℓ → exp(iφ_ℓ) C_i^ℓ. For instance, one of the complex coefficients can be chosen real and positive, which leaves four real and three imaginary parts, or four absolute values and three relative phases, as free parameters. The Lagrangian Eq. (7) is conveniently normalized to G_F V_cb to ensure that in the SM C_{V_L} = 1 at tree level. In general, these factors cannot be separated from the BSM Wilson coefficients, since only their products enter observables. Hence, they do not count as additional parameters. The set of seven real parameters is therefore the maximal information we can hope to extract from B̄ → D*ℓν̄ decays for a given ℓ without LFV. All CP-averaged observables depend on these seven parameters through the combinations |C_i^ℓ|², Re(C_i^ℓ C_j^{ℓ*}), and products of two imaginary parts Im(C_i^ℓ C_j^{ℓ*}) Im(C_k^ℓ C_l^{ℓ*}). These combinations, however, are invariant under the discrete symmetry transformation C_i^ℓ → (C_i^ℓ)*, which flips the signs of all imaginary parts simultaneously. The Im(C_i^ℓ C_j^{ℓ*}) can therefore still be determined from CP-averaged observables, albeit only up to an overall sign. One is free to choose one of these signs in the fit, since the second solution can always be obtained by inverting the signs of the imaginary parts. In the limit of a massless lepton, the two classes of Wilson coefficients C_{A,V} and C_{P,T} decouple in the observables, since their interference is m_ℓ suppressed, as shown in Table I. As we will see below, this applies only to electrons, since in precision analyses the muon mass cannot be neglected anymore. This implies a separate symmetry for each class, C_{V,A} → exp(iφ_ℓ) C_{V,A} and C_{P,T} → exp(iϕ_ℓ) C_{P,T}. Therefore another phase cannot be determined from any B̄ → D*ℓν̄ observable in this limit. In fact, it can be eliminated altogether from the parametrization, leaving maximally six parameters to be determined from B̄ → D*ℓν̄ for massless charged leptons. In this case the discrete symmetry for the imaginary parts also holds separately for each class, allowing another sign to be chosen freely. Hence, the most general parametrization of CP-averaged B̄ → D*eν̄ data within the weak effective theory, and when neglecting LFV, requires only six parameters, four of which can be chosen positive. Taking into account lepton-mass effects requires a seventh parameter, and only two of these parameters can be chosen positive.
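Schematically, the counting just described can be summarized as follows (our condensation, not an equation from the paper):

\[
\underbrace{5\ \text{complex}\ C_i^\ell}_{10\ \text{real parameters}}
\ \xrightarrow{\ \langle D^*|\,\bar c b\,|\bar B\rangle\,=\,0\ }\
4\ \text{complex combinations}
\ \xrightarrow{\ C_i^\ell \to e^{i\phi_\ell} C_i^\ell\ }\
7\ \text{real parameters}
\ \xrightarrow{\ m_\ell \to 0\ }\
6\ \text{real parameters}.
\]

Each arrow removes information that no B̄ → D*ℓν̄ observable can resolve: the vanishing scalar matrix element, the unobservable global phase, and, for massless leptons, the additional relative phase between the C_{V,A} and C_{P,T} sectors.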
Note that in the counting above we have assumed the couplings for the different lepton flavours to be completely independent, allowing in particular for independent phase rotations. Such an assumption does not hold in all BSM scenarios; in particular, it does not hold in the Standard Model Effective Field Theory (SMEFT) at mass dimension six. In the matching of Eq. (7) to the SMEFT, the coefficient C_{V_R} is lepton-flavour universal, a property inherited from the SM gauge group [41,42]. This universality couples the different sectors, and consequently the phase rotations cannot be performed independently anymore. This gives rise to an additional measurable phase in this scenario, and therefore necessitates a new corresponding parameter. For instance, for the common and convenient choice of a real and positive C_{V_L}^ℓ, the coefficients C_{V_R}^ℓ cannot be trivially identified with each other. Instead they fulfill C_{V_R}^µ = e^{i(φ_µ − φ_e)} C_{V_R}^e, and similarly for ℓ = τ. The relative phase between the two Wilson coefficients C_{V_L}^e and C_{V_L}^µ appears explicitly, while it can be absorbed everywhere else. This implies that although two real parameters are removed (one of the complex C_{V_R} coefficients), one is added (the relative phase), and hence the overall number of parameters is reduced only by one. Generalizing the above observations to the presence of lepton-flavour-violating interactions, ℓ' ≠ ℓ, is straightforward insofar as the contributions with different neutrino flavours do not interfere. Hence all expressions in Eqs. (10)–(11) remain valid with the generalizations C_i^ℓ C_j^{ℓ*} → Σ_{ℓ'} C_i^{ℓℓ'} C_j^{ℓℓ'*} (13). The symmetry considerations hold for each neutrino flavour separately. Naively, the number of parameters simply triples compared to the lepton-flavour-conserving case above. The situation is nevertheless significantly different from the lepton-flavour-conserving case, for which the number of parameters is smaller than the number of resolved combinations of Wilson coefficients appearing in the description of the decay. (Table II. Amount of BSM physics information that can be extracted in different scenarios, see also text. Here S and A denote the measurement of the CP average and the CP asymmetry of the respective differential rate. The first and second numbers correspond to the number of parameters that can be extracted without and with mass suppression, respectively.) This implies (non-linear) relations between these combinations in the lepton-flavour-conserving case, for instance Eq. (14). With the generalizations in Eq. (13), the number of BSM parameters is larger than the number of combinations of Wilson coefficients. Hence, the latter determine the maximal number of parameters (parameter combinations) that can be extracted. This implies that relations such as Eq. (14) no longer hold in the presence of lepton-flavour violation and can instead be used to test for LFV in charged-current decays without the need to identify the neutrino flavour experimentally. In the presence of light right-handed neutrinos, similar considerations as for the LFV case apply, since also here more BSM parameters are introduced and the corresponding contributions do not interfere. The generalization to light right-handed neutrinos is therefore analogous to Eq. (13), and similar comments apply for the determination of the corresponding parameters. We now turn to the determination of the discussed parameters from the differential distributions. Each fully q²-integrated angular observable provides only one linear combination of the combinations of Wilson coefficients, as indicated in Table I. The measurement of their q² dependence further allows one to separate different BSM contributions to the same angular observable, if their q² dependences [38] differ. For instance, the q²-differential rate allows one to determine all four absolute values of the BSM parameters. The question is what amount of experimental information is necessary to determine the maximal number of parameters in a given scenario. Table II shows the situation in a few scenarios for different sets of experimental measurements. A few general comments are in order: • It is necessary to consider the CP-conjugated modes separately if the sign ambiguity for the imaginary parts is to be resolved. Since the lepton charge tags the B meson flavour, this is not difficult to achieve experimentally.
• The interference between the two classes of BSM coefficients C_{A,V} and C_{P,T} is always lepton-mass suppressed, see Table I. Hence its determination requires high statistical power, as expected from the upcoming datasets at Belle II and the LHC experiments. • While for ℓ = µ there is some sensitivity to additional combinations of Wilson coefficients, these combinations are still strongly suppressed. The corresponding parameters will therefore be determined comparatively poorly. Generally, the best chance to determine them is to consider rather low values of q², given the suppression by powers of m_ℓ/√q², both for the angular observables and the q²-differential rate. Probing different bins in q² can also improve the sensitivity to other BSM coefficients. Tensor interactions, for instance, can be probed particularly well at low q² in dΓ_T/dq² ∼ 3S_{1s} − S_{2s}, since the SM contributions vanish for q² → 0, while the tensor contributions remain finite [40], see also Ref. [43]. Considering some of the scenarios in more detail, we make the following observations: • It is impossible to determine the full set of physical BSM parameters for m_ℓ → 0 (e.g. ℓ = e) from the CP-averaged single-differential rates alone, even disregarding ambiguities in the signs of imaginary parts. The reason is that in this case only A_FB^(ℓ) is sensitive to the relative phases between the coefficients. Since there are two observable relative phases (one between C_A and C_V, one between C_P and C_T), they cannot both be determined from this single observable. • Assuming the flavour-conserving case, the extraction of all seven parameters is possible for finite m_ℓ from the CP-averaged single-differential rates, modulo discrete ambiguities. However, one relative phase can only be obtained from lepton-mass-suppressed contributions, even though in more sophisticated measurements it would be accessible without lepton-mass suppression. • Beyond the lepton-flavour-conserving case, it becomes clearer how much more information is contained in a fully q²-differential measurement. Strictly speaking, such a measurement is not necessary when assuming lepton-flavour conservation. However, also in this case additional crosschecks are possible, and additional form-factor information can be extracted together with the BSM parameters. These observations apply fully to the recent Belle measurements [3]. Considering the determination of the full BSM information in the lepton-flavour-conserving case as an important intermediate goal, there are several ways this could be achieved with existing data, extending the experimental analyses only slightly: 1. Measuring the cos θ_ℓ-differential distribution in bins of q², thereby resolving the q² dependence of the combination involving S_{6c} entering this observable. Given that the cos θ_ℓ-differential distribution (4) has been measured in 10 bins in Refs. [2,3], but contains only two angular observables, this seems feasible by reducing the number of cos θ_ℓ bins and providing the observables in two or three q² bins instead. This would give access to all BSM parameters, leaving only two signs of imaginary parts undetermined. 2. Measuring dΓ/dχ separately for the two lepton charges. This would give access to A_9^(ℓ), and thereby to Im(C_A C_V*). This in turn would determine also Re(C_A C_V*) up to a sign, and thereby allow access to Re(C_P C_T*) from A_FB^(ℓ), up to a two-fold ambiguity. Each of the solutions would still have a two-fold sign ambiguity for the corresponding imaginary part.
Together with the first option, this measurement would resolve the sign ambiguity in Im(C_A C_V*), leaving only the one in Im(C_P C_T*) (should this parameter combination be found to be different from zero). III. AVAILABLE EXPERIMENTAL DATA Semileptonic B̄ → D^(*)ℓν̄ decays have been of key interest for many years; see Ref. [44] for a list of analyses over the last ∼25 years. However, until recently, almost all experimental analyses have been tied to a specific form-factor parametrization, specifically the so-called CLN parametrization [45]. This parametrization involves assumptions that are no longer adequate for precision analyses. Applying the underlying formalism of a heavy-quark expansion more consistently [25], and extending it to include 1/m_c² contributions [11,12], instead allows for a consistent description of the available experimental data and form-factor results. However, since experimental analyses in most cases presented only parametrization-specific results, a model-independent reanalysis of the underlying experimental data under different theory assumptions is impossible. Unfortunately, this problem persists in the most recent BaBar analysis [46], which includes a second form-factor parametrization, but still does not allow for an independent analysis of the data. Furthermore, in many cases electron and muon data have been averaged without presenting separate results, rendering them of limited use for the analysis of LFU. A notable exception among these past studies is the 2010 untagged Belle analysis [47], which presented lepton-specific differential rates separately for longitudinal and transverse D* polarizations, but lacked the necessary correlations. More recently, the Belle analysis of B̄ → Dℓν̄ [1] presented lepton-specific differential rates including their full correlations, which for the first time made possible precision studies with arbitrary form-factor parametrizations, initiating an intense ongoing discussion regarding the best way to analyze these and similar data. Similar comments apply to the preliminary B̄ → D*ℓν̄ data with hadronic tag in Ref. [2], which were however again lepton-flavour averaged and are presently being reanalyzed, and to the 2018 untagged analysis [3], superseding the results of Ref. [47], which we discuss in detail in the following. A. Belle's 2018 untagged analysis The dataset for the angular distribution provided by Belle [3] is the first analysis that separates the electron mode from the muon mode in both the bin contents and the statistical covariance matrix; the systematic covariance matrix can also be reconstructed for both lepton species separately. Unfortunately, the correlations between the electron and muon modes are not given explicitly. Yet Belle has used these data for a high-precision LFU test that compares the branching fractions to electrons and muons integrated over the entire phase space. They found the ratio to be in agreement with lepton-flavour universality, R_e/µ = 1.01 ± 0.01 (stat.) ± 0.03 (sys.). In our study we aim to extend the study of LFU to the angular observables using the same Belle data. For this purpose we need to construct a combined correlation matrix for the full dataset, including correlations between electrons and muons. Before going into these details, however, we comment on an issue present in the statistical correlation matrix. Belle provides the number of (background-subtracted) events before unfolding in bins of the four aforementioned single-differential distributions.
These are the distributions in w, cos θ_ℓ, cos θ_D, and χ, i.e. the same signal candidates have been histogrammed in four different ways in the four single-differential distributions. Consequently, the sums over the 10 bins of each of the four distributions must all equal the same total signal yield, separately for electrons and muons. These relations imply that for both electrons and muons only 37 of the measured bins are independent, since the contents of 3 bins can be calculated as the total yield minus the yields of the other 9 bins of the corresponding distributions. This in turn implies that the corresponding statistical correlation matrices have to be singular; each of the 40 × 40 matrices should exhibit three vanishing eigenvalues. This is, however, not the case: the determinants of both matrices are rather large, and all eigenvalues of both statistical correlation matrices are O(1). It remains unclear why the statistical correlation matrices do not reflect the linear dependence of the 3 bins, which should by construction result from the redistribution of the same events into the 10 bins of each single-differential distribution used by the Belle collaboration. Note that the issue of the linearly dependent bins affects the determination of V_cb from these data: if the sum over each set of 10 bins is identical, no information is added to the determination of the total rate by having the four binnings. However, if the correlations are such that these sums become effectively independent, the total rate is more precisely determined by considering all four binnings than by considering only a single one, leading to an underestimation of the uncertainty of the total rate (and hence of V_cb). The effect is not large with the given data, but it is non-vanishing: the determination of the total rate is a couple of per mil better than from each individual distribution. It is important to note that this small numerical impact is not an indication that a correct extraction of the statistical correlation matrices would lead to small corrections in the analysis. Since there is an unknown problem in the extraction of the statistical correlation matrices, there is no way of knowing what the effect of its resolution would be. Given this numerical smallness within our analysis, however, we work below with the 40 × 40 matrices. In LF-specific fits with a 37 × 37 matrix the result varies very slightly, depending on the choice of the three discarded bins, and any specific choice would be arbitrary. We have checked that our numerical results below remain essentially unaffected. The issue with the statistical correlation matrices must be kept in mind when interpreting any results obtained from the data of Ref. [3].
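The singularity argument can be checked mechanically. A minimal sketch, assuming the 40 bins are ordered as four consecutive 10-bin blocks (w, cos θ_ℓ, cos θ_D, χ) and that corr and sigma hold the published statistical correlations and per-bin uncertainties:

```python
import numpy as np

def constraint_variances(corr: np.ndarray, sigma: np.ndarray) -> list:
    """Variances of 'sum of first distribution minus sum of k-th distribution'.
    All three should vanish if the four blocks histogram the same events."""
    cov = corr * np.outer(sigma, sigma)   # correlation -> covariance
    out = []
    for k in range(1, 4):
        v = np.zeros(40)
        v[0:10] = 1.0                     # 10 bins of the first distribution ...
        v[10 * k:10 * (k + 1)] = -1.0     # ... minus the 10 bins of the k-th
        out.append(float(v @ cov @ v))    # ~0 for a consistent matrix
    return out

# Equivalently, np.linalg.eigvalsh(corr) should show three vanishing
# eigenvalues; for the published matrices it does not.
```

For the published matrices these constraint variances are far from zero, which is precisely the inconsistency discussed above.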
In the remainder of this section, we describe the construction of a combined electron–muon 80 × 80 covariance matrix based on Ref. [3], with only one mild additional assumption. According to Ref. [3], the only source of systematic uncertainty that differs between ℓ = e and ℓ = µ is the procedure of lepton identification (lepton ID). Given the statistical independence of the electron and muon samples, this implies the following block form for the total covariance matrix: Cov_total = ( Cov_stat^e + Cov_sys^e , Cov_sys^{eµ} ; (Cov_sys^{eµ})^T , Cov_stat^µ + Cov_sys^µ ), where the off-diagonal block Cov_sys^{eµ} contains only the systematic uncertainties shared between the two lepton flavours. The lepton-ID systematic uncertainties are provided individually for both lepton flavours, but also for the "LF-combined" dataset, and the latter enter the systematic correlation matrix given explicitly in the article. We therefore have Cov_sys^{LF-comb} = Cov_sys^{non-lep-ID} + Cov_sys^{lep-ID-comb}. Together with the information that the lepton-ID systematic uncertainties are 100% positively correlated throughout all bins [48], (Cov_sys^{lep-ID-comb})_{ij} = σ_i σ_j, where the σ_i are the lepton-ID systematic uncertainties of the ith bin taken from Tables XI–XIV of [3], the "LF combination" can thus be undone for the systematic correlations. We consequently compute the LF-specific systematic covariances Cov_sys^ℓ from the "LF-combined" ones of [3] as Cov_sys^ℓ = Cov_sys^{LF-comb} − Cov_sys^{lep-ID-comb} + Cov_sys^{lep-ID,ℓ}. LF-specific analyses can be performed with these LF-specific 40 × 40 statistical and systematic correlation matrices at hand. The only assumption we make for the construction of the full 80 × 80 covariance matrix is that the lepton-ID uncertainties for electrons and muons are uncorrelated, Cov_sys^{lep-ID,eµ} = 0. This is plausible (as confirmed by Belle collaboration members [49]), given that they concern different detector parts, but it is not fully guaranteed. We consider this assumption to be at a comparable level to the assertion in Ref. [3] that the lepton ID constitutes the only non-universal contribution to the systematic uncertainty. Note that this is an approximation that might not hold well enough to analyze LFU. In that case the systematic uncertainty given in [3] for the LFU ratio R_e/µ would be underestimated, as would be our e–µ covariance. However, below we perform an extremely conservative check that our observation of a tension with the SM does not depend on this assumption. IV. FITS TO B̄ → D*(e, µ)ν̄ DATA AND DISCUSSION We analyze the data from the Belle analysis [3] in detail, based on the general analysis in Section II and the covariance matrix derived in Section III. A. Angular analysis and comparison with the SM In the first step our fit is completely model-independent: we use the observation made in Section II that the three single-differential CP-averaged angular distributions can be fully described by only four angular observables, retaining all information. Further, we parametrize the 10 bins of the w-distribution, again in full generality, as the total decay rate Γ^(ℓ) and nine independent bins of the normalized w-differential rate, x_i^(ℓ) = (1/Γ^(ℓ)) ∫_{bin i} dw (dΓ^(ℓ)/dw), i = 1, …, 9 (Eqs. (22)–(23)). Here w_max = 1.5 to comply with the choice in [3], which excludes a tiny part of the low-q² phase space. From this parametrization we calculate the bin contents N_{i,ℓ}^{obs} by integrating over the relevant angle intervals where necessary, and folding these predictions with the corresponding response matrices and efficiencies provided by the Belle collaboration for each lepton flavour separately, as described in [3]. We thus arrive at a description of the 40 bins per lepton flavour given in [3] in terms of only 10 + 4 = 14 observables (Eqs. (22)–(23)). We emphasize that our fit parameters enter linearly up to the common normalization factor, ensuring a unique minimum and no distortion of their distributions from a multivariate Gaussian shape. The conversion of the number of events to the decay rate involves the following numerical input: N_BB = (772 ± 11) · 10⁶ and B(D*⁺ → D⁰π⁺) = (67.7 ± 0.5)%, with N_BB from [50], f_00 and the B⁰ lifetime from [44] (see also the discussion on f_00 in [51]), and the latest values of the branching fractions from [52]. Note that the value for B(D⁰ → K⁻π⁺) was updated with respect to the value used in Refs. [3,53], which slightly impacts the determination of V_cb. The corresponding uncertainties cancel in all ratios and hence affect only the total decay rate, for which they are included in the systematic uncertainties provided by the Belle collaboration [3]. For later convenience in the study of LFU violation, we further introduce the averages and differences of LF-specific observables, ΣX ≡ (X^(e) + X^(µ))/2 and ∆X ≡ X^(e) − X^(µ), where X^(ℓ) stands for any of the considered observables.
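A minimal sketch of the resulting fit setup follows. The design matrix, response matrix, and efficiencies are hypothetical stand-ins for the Belle inputs, and the linear mapping is a simplification of the actual folding described above:

```python
import numpy as np
from scipy.optimize import minimize

def predict_bins(theta, design, efficiency, response):
    """Map the 14 observables (theta) to 40 predicted bin contents:
    a linear 'truth-level' model folded with efficiency and detector response."""
    truth = design @ theta
    return response @ (efficiency * truth)

def chi2(theta, data, cov_inv, design, efficiency, response):
    r = data - predict_bins(theta, design, efficiency, response)
    return r @ cov_inv @ r

# Toy dimensions: 14 parameters per lepton flavour -> 40 bins.
rng = np.random.default_rng(1)
design = rng.uniform(0.0, 1.0, (40, 14))
efficiency = rng.uniform(0.5, 0.9, 40)
response = np.eye(40)                      # perfect detector for this toy
theta_true = rng.uniform(0.1, 1.0, 14)
data = predict_bins(theta_true, design, efficiency, response)
cov_inv = np.eye(40)

res = minimize(chi2, x0=np.ones(14), args=(data, cov_inv, design, efficiency, response))
print(f"chi2 at minimum: {res.fun:.3g}")   # ~0 for noise-free toy data
```

Because the residuals are linear in the parameters (up to the overall normalization), the χ² surface is quadratic with a unique minimum, which is the property emphasized in the text.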
We perform two types of fits with our approach to test the stability of the results: 1. a simple χ² fit, and 2. a fit using pseudo-Monte Carlo techniques, following the procedure described in Ref. [53], both using the full 80 × 80 covariance matrix. In addition, we have applied a correction to the systematic correlations for the d'Agostini bias [54], following the procedure described in Ref. [40]. We find the results of the two fits to be virtually identical. In Ref. [53] the authors observe that in their joint fit of V_cb and form-factor parameters the two procedures produce markedly different results. They conclude that this difference is due to the large correlations present in the experimental data and that the usage of the pseudo-Monte Carlo technique is mandatory for phenomenological analyses. Our findings are in stark contrast to this conclusion and indicate instead that large correlations alone are not the cause of this difference. Our interpretation is that the observed difference is related to the form-factor parameters entering non-linearly in the fit of Ref. [53], while our angular observables and x_i^(ℓ) parameters enter bilinearly. It is worth emphasizing in this context that • our fit results are extremely well described by Gaussian distributions; and that • the correlations between our fit parameters are much smaller than the ones present in the 80 × 80 matrix describing the bin contents. As a consequence, we do not distinguish between the results from the two fit procedures in the following. The fit results for our parameters as defined in Eqs. (22)–(23) are listed in Table III and shown in Figure 1. At the best-fit point we find χ² = 48.9 for 80 − 2 × 14 = 52 degrees of freedom (dof), indicating a good fit. This also suggests that the assumption of a pure P-wave Dπ final state is well justified. In both Table III and Figure 1 we juxtapose the fit results with their corresponding SM predictions. The latter depend on the B̄ → D* form factors; here, we use the form-factor determinations from Refs. [11,12]. All SM predictions are obtained using the EOS software [55]. The EOS code for the computation of B̄ → D*ℓν̄ observables has been independently checked. We also predict the ratio R_e/µ in the SM; this prediction does not include possible structure-dependent QED corrections. (Table III: SM predictions based on the form factors of Ref. [11] — excerpt: 0.1161 ± 0.0020, 0.1164 ± 0.0020, 0.1184 ± 0.0027, 0.1208 ± 0.0029 — together with their values obtained from our fit to the Belle data [3]; for the prediction of the total rate, the value of |V_cb| is left unspecified.) We emphasize that the predictions [11,12] of the B̄ → D* form factors are conservative in the sense that the corresponding uncertainties include higher-order contributions in the heavy-quark expansion. Furthermore, they rely on theory input from various sources only, i.e. no experimental input has been used for their determination. Note that |V_cb| cancels in the predictions for the normalized bins x_i^(ℓ) as well as in the angular observables; only the total decay rate is proportional to |V_cb|². Moreover, theoretical uncertainties of the normalization of the leading hadronic B → D* form factor cancel in the normalized observables. However, we do not include structure-dependent electromagnetic corrections to the angular distribution. Given the expected precision of the experimental data and the impact of muon-mass effects as discussed in this work, we expect that including these effects will become mandatory soon.
Before comparing to our numerical SM predictions, we test the qualitative expectation of approximate lepton-flavour universality, i.e. ∆X ≡ 0, which does not require a specific form-factor parametrization. We find that most quantities in Table III are well compatible with lepton-flavour universality, with the exception of A_FB^(ℓ), which shows a deviation from exact universality at the 3.9σ level, to be discussed below. This strong violation is not readily observable in the 80 bins provided by the Belle collaboration, but becomes obvious in the results of the fit of the non-redundant set of angular observables to the underlying angular distributions, see Figure 1. The violation is further hidden by the fact that the lepton-flavour-averaged data are compatible with the SM expectation. In the comparison of our SM predictions with the fit results we find: 1. As expected, the precision for most normalized quantities is better than that for the total rate, typically at the level of a few percent. This is true for both the SM predictions and the fit results. 2. Overall we find very good agreement of the fit results with our SM predictions, as can be seen in Figure 1, especially when considering the individual lepton species. There are a few smaller differences of roughly 1σ; only A_FB^(µ) shows a tension above the 2σ level. 3. The differences of the lepton-flavour-specific observables, ∆X, are predicted with very small absolute uncertainties due to the muon-mass suppression. Their predictions have relative uncertainties similar to those of the angular observables themselves. Their absolute values are also very small, with ∆X/ΣX = O(‰) in most cases. This can be readily understood, since these observables receive only corrections of O(m_µ²) in the SM. The only sizable central values are those of ∆A_FB and ∆F̃_L, which are slightly enhanced by numerical factors. Most importantly, we find that the latter shifts are still small, but already comparable to the corresponding experimental uncertainties, see Table III. This implies that the muon mass can no longer be neglected in precision analyses. 4. The pattern of the shifts in ∆x_i is surprising at first sight, since |∆x_i|/Σx_i is almost constant over the whole range of w (or q²), while we argued that the effect scales like (m_µ/√q²)². This can be understood from the normalization to the total rate. The shifts in ∆(∆Γ_i)/Σ(∆Γ_i) scale as expected, from significantly less than 1‰ at w ∼ 1 (high q²) to −5‰ in the bin with maximal w (lowest q²). The shift in the total rate is about −3‰, so normalizing moves the shifts in ∆x_i/Σx_i into the range [−3‰, 3‰]. 5. For the LFU observables we still find mostly excellent agreement between experiment and our SM predictions. However, the aforementioned difference between the measurements of A_FB becomes more significant, given the smaller absolute uncertainty in ∆A_FB and the fact that the relatively large SM prediction carries the opposite sign from the one determined in the fit. This quantity therefore differs by approximately 4σ from its SM prediction. (Figure 2: pairwise two-dimensional fit regions of ∆A_FB with ∆F_L, ∆F̃_L, ∆S_3, and ΣA_FB = (A_FB^(e) + A_FB^(µ))/2 (bottom right). Contours correspond to 68%, 95%, 99.7%, and 99.99% probability, respectively. The ragged outermost contours are artefacts due to the limited number of samples in the periphery of the best-fit point. The SM predictions based on the form factors obtained in Refs. [11,12] are shown as blue crosses. The SM uncertainties are much smaller than 10⁻² and hence negligible, with the exception of the last panel. The uncertainty in the ∆A_FB–ΣA_FB plane is shown as a (highly degenerate) ellipse at the 68% probability level.)
In Figure 2 we show the pair-wise 2-dimensional best-fit regions of ∆A_FB with ∆F_L, ∆F̃_L, ∆S_3, and ΣA_FB. The discrepancy with the predictions reaches the 4σ level, compatible with the similar level seen for the 1-dimensional discrepancy for ∆A_FB in Table III. These observations depend only mildly on the covariance matrix used in the fit. As stated above, we consider our construction of the 80 × 80 covariance matrix reliable to the extent that the data in Ref. [3] are correct. To make absolutely sure that our assumption regarding the e–µ correlations is not the reason for the observed discrepancy, we adopt the following alternative procedure: we determine A_FB^{(e)} and A_FB^{(µ)}, with separate statistical and systematic uncertainties, in two separate fits to the lepton-specific data, using the corresponding 40 × 40 covariance matrices, for which we do not have to rely on our assumption. We then minimize the discrepancy with respect to our (strongly correlated) SM predictions by assuming a diagonal 2 × 2 statistical correlation matrix for A_FB^{(e)} and A_FB^{(µ)}, but allowing for an arbitrary correlation ρ ∈ [−1, 1] between the systematic uncertainties. We find that the minimal tension with respect to the SM for the combined A_FB occurs for maximal anticorrelation (ρ = −1), which is not a realistic value; the correlation determined in the fit to the 80 × 80 covariance matrix is actually very small. Adopting nevertheless this most conservative choice of ρ = −1 still leads to a tension of 3.6σ. We emphasize again that this result is not changed by employing the pseudo-Monte Carlo approach with Cholesky decomposition for the fit as done in [53], nor by the d'Agostini effect (the plots shown in Figure 1 include the corresponding shifts). Therefore, even adopting this maximally conservative procedure, our results amount to evidence for µ–e non-universality beyond the SM in charged-current b → cℓν transitions. However, our finding hinges on the approximate validity of the data and specifically the correlation matrices given in Ref. [3]. We also perform a full SM fit to the 2 × 14 observables in Table III, including their correlations given in ancillary files attached to the arXiv preprint of this article. Starting from a fit of the form-factor parameters to theory input only [11,12], the inclusion of the experimental information on these 28 observables increases the minimal χ² by 68.5, while only |V_cb| is introduced as an additional parameter in the fit. This indicates a bad fit, with a p value of 2 × 10⁻⁵, or a tension at the 4.3σ level. The discrepancy remains driven by a ∼ 4σ tension in A_FB; the experimental and theoretical correlations with other observables play a minor role, see also Figure 2. We note in passing that S–P-wave interference cannot affect the numerator of A_FB, and can only decrease the magnitude of A_FB by a coherent contribution to the denominator [56]. We refrain from providing the value of |V_cb| from either lepton mode; it would be compatible with the values obtained from the lepton-flavour average in Refs. [3,53] and would continue to exhibit a substantial tension with respect to the inclusive determination |V_cb|_{B→X_c} = (42.00 ± 0.64) · 10⁻³ [57]. Given the incompatibility of the data with the SM prediction, we consider it misleading to use these data to extract |V_cb|. To summarize, we find in our fits a discrepancy between data and the SM of ∼ 4σ.
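The robustness check just described can be sketched as a one-parameter scan: keep the statistical uncertainties uncorrelated, vary an assumed correlation ρ of the systematic uncertainties between the lepton modes, and recompute the significance of ∆A_FB against the SM. All numbers below are placeholders, not the Belle or EOS values:

```python
# Sketch of the robustness check: scan an assumed correlation rho between the
# systematic uncertainties of A_FB^(e) and A_FB^(mu) and recompute the tension
# of Delta A_FB with the SM. All numbers are placeholders, not Belle values.
import numpy as np

afb_e,  stat_e,  syst_e  = 0.252, 0.012, 0.010   # hypothetical fit results
afb_mu, stat_mu, syst_mu = 0.295, 0.012, 0.010
sm_delta, sm_sigma = -0.004, 0.001               # hypothetical SM Delta A_FB

delta = afb_e - afb_mu
for rho in np.linspace(-1.0, 1.0, 9):
    # statistical parts uncorrelated; rho applies to the systematics only
    var = (stat_e**2 + stat_mu**2
           + syst_e**2 + syst_mu**2 - 2.0 * rho * syst_e * syst_mu)
    tension = abs(delta - sm_delta) / np.sqrt(var + sm_sigma**2)
    print(f"rho = {rho:+.2f}  ->  tension = {tension:.1f} sigma")
# maximal anticorrelation (rho = -1) maximizes the variance of the difference
# and hence minimizes the tension, as described in the text.
```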
This result is stable with respect to the treatment of the d'Agostini bias, the type of fit performed (χ² fit vs. pseudo-Monte Carlo techniques), and, importantly, also the precise treatment of the correlations of the systematic uncertainties between electrons and muons. We reiterate, however, the concerns discussed in Section III A: the statistical correlation matrices given in [3] do not seem to be correct, since they are not singular, as they should be given the performed redistribution of events to obtain the different single-differential rates. Bearing this caveat in mind, we still investigate in the following the possibility that the observed discrepancy is an effect of BSM physics.

B. Possible BSM interpretation

We consider the possibility that the observed discrepancy is due to BSM physics. To that aim, we investigate the Lagrangian Eq. (7) in the limit of lepton-flavour conservation, ℓ′ = ℓ. From our general analysis in Section II we have seen that A_FB^{(ℓ)} is special in that it is determined to O(m_ℓ) only by interference contributions ∼ Re(C_i C_j^*), and is the only observable in the single-differential distributions to which interference terms contribute in the massless limit. Given the size of the observed effect, ∆A_FB/ΣA_FB ∼ O(10%), a muon-mass-suppressed contribution does not seem likely as its source. This suggests that in order to accommodate ∆A_FB, the first options to consider are BSM contributions to right-handed vector operators, to both pseudoscalar and tensor operators, or to left-handed vector operators. Notably, the first two options correspond to second-order BSM contributions: for the interference between pseudoscalar and tensor operators this is obvious, and for the right-handed vector operator the interference term is likewise of second order in the BSM coefficients. For BSM contributions to the left-handed vector operator only, the discussion is more involved. The interference terms involve the combination (1 + C_{V_L}), wherein the 1 stands for the SM contribution. However, if C_{V_L} were the only BSM contribution it would cancel in all normalized observables; a toy illustration of this cancellation is sketched below. This is not true for the contribution from right-handed vector operators, the real parts of which, however, enter linearly in |C_{A,V}|². Given the compatibility of all other observables with the SM, this scenario would therefore require the main contribution to either have a sizable imaginary part, or specific cancellations with other BSM contributions, in order not to upset this agreement. Taking here the Belle data at face value, we perform fits analogous to the ones described above, including different sets of BSM contributions. Note that we keep our description qualitative, since numerical statements are likely to be upset by an eventual correction of the Belle dataset [3]. For the same reason we do not perform a combined fit with other b → cℓν modes, which would of course be required to confirm the viability of potential BSM scenarios that resolve the tension in this dataset. We find that either contributions from right-handed vector operators, or from both pseudoscalar and tensor operators, are necessary to accommodate the observed ∆A_FB, confirming our previous considerations. In order to describe the dataset well with real BSM Wilson coefficients only, LFUV contributions to both the right- and left-handed vector operators are required.
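As referenced above, a pure shift of the left-handed vector coefficient multiplies every helicity amplitude by the same factor (1 + C_{V_L}) and therefore drops out of all normalized observables. A toy numerical illustration with random stand-in amplitudes, not the actual B̄ → D*ℓν̄ helicity amplitudes:

```python
# Toy illustration: a BSM shift of the left-handed coefficient rescales every
# helicity amplitude by (1 + C_VL) and hence cancels in normalized observables.
# The amplitudes are random stand-ins, not the actual B -> D* l nu ones.
import numpy as np

rng = np.random.default_rng(7)
amps = rng.normal(size=4) + 1j * rng.normal(size=4)   # toy "SM" amplitudes

def observables(c_vl):
    a = (1.0 + c_vl) * amps            # overall factor from the V_L shift
    rate = np.sum(np.abs(a) ** 2)      # total rate scales like |1 + C_VL|^2
    afb = np.real(a[0] * np.conj(a[1])) / rate   # normalized interference term
    return rate, afb

for c in (0.0, 0.2, -0.1 + 0.3j):
    rate, afb = observables(c)
    # the rate changes with C_VL, the normalized observable does not
    print(f"C_VL = {c!s:>12}:  rate = {rate:8.3f}   normalized obs = {afb:+.4f}")
```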
The three minimal BSM scenarios that fit the present Belle B̄ → D*ℓν̄ data [3] can be summarized as follows:
1. C_{V_R} ≠ 0: This scenario does require a sizable imaginary part (as anticipated above) and LFU violation. The latter fact is interesting, since it might point to BSM physics beyond SMEFT [41]. The sizable imaginary part implies that the CP-odd angular observables A_{8,9}^{(ℓ)} are sizable as well. We strongly encourage an experimental measurement of these observables.
2. C_{V_R} ≠ 0 together with a BSM contribution to C_{V_L}: This scenario can obviously describe the data well, given that in principle C_{V_R} ≠ 0 alone already suffices. However, to our surprise, it is also compatible with an LFU BSM contribution to C_{V_R}, which is required in a SMEFT scenario. Enforcing this flavour-universal C_{V_R}, i.e., C^e_{V_R} = C^µ_{V_R}, results in significantly different absolute values and a sizable phase difference between C^e_{V_L} and C^µ_{V_L}. Sizable A_{8,9}^{(ℓ)} are also likely in this case, although not strictly necessary. It is possible to have all BSM coefficients real, and hence A_{8,9}^{(ℓ)} = 0, but only with a phase between the left-handed coefficients of φ_L = π. This corresponds to a BSM contribution of about twice the SM one and is therefore highly fine-tuned.
3. C_P ≠ 0 and C_T ≠ 0: This scenario also provides a good fit to the data, both for complex and for real-valued Wilson coefficients. The fact that both C_T and C_P are required means that this scenario can be tested by measuring the corresponding S-type angular observables.
While we do not attempt to include additional datasets, as explained above, and therefore cannot quantitatively test specific BSM scenarios, we still observe a few general features of a possible BSM explanation in the context of the B anomalies, especially in b → cτν transitions:
1. While moderate shifts in one or several Wilson coefficients are required to fit the present Belle data [3], the total rates are not strongly affected. Hence it is not possible to explain the discrepancy in R(D*) with these shifts, i.e. additional new contributions in b → cτν coefficients are required to explain the deviations of LFU ratios involving ℓ = τ from SM predictions.
2. If the observations made here based on the Belle data persist after future updates or corrections, they would have strong implications for scenarios addressing the B anomalies: scenarios that only shift C_{V_L}, currently favoured as simultaneous explanations of the b → cτν and b → sℓ⁺ℓ⁻ anomalies, would be ruled out.
3. Based on the picture provided by the observables, one would naively expect a hierarchy ∆_µ > ∆_e. In light of the more substantial deviations in b → cτν, this could be extended to ∆_τ > ∆_µ, which is quite natural in scenarios addressing both B anomalies. However, we find that ∆_µ > ∆_e is far from being established in our fits at the level of the Wilson coefficients.
There will therefore be far-reaching consequences for the field of particle physics, should this discrepancy be confirmed.

V. CONCLUSIONS

In this article we pave the way for precision analyses of b → cℓν processes beyond the assumption of e–µ universality. This endeavour is important for the determination of V_cb in the Standard Model, for a complete understanding of the weak effective theory (WET) beyond the SM (BSM), and also to gain new insights into the persistent b → cτν anomaly. We focus on the angular distribution in B̄ → D*ℓν̄ with light leptons ℓ = e, µ and highlight strategies for improved experimental analyses. We discuss the complete set of CP-even and CP-odd angular observables that arise from the fully-differential angular distribution of B̄ → D*(→ Dπ)ℓν̄. In particular we discuss the influence of a finite mass of the charged lepton on these observables in and beyond the SM.
We consider in detail the specific case of the single-differential CP-averaged rates that have been experimentally analyzed in Refs. [2,3]. We find that only four flavour-specific angular observables per lepton flavour are sufficient to describe the three single-differential CP-averaged angular distributions including arbitrary BSM contributions: the lepton forward-backward asymmetry A_FB^{(ℓ)}, the longitudinal polarization fractions F_L^{(ℓ)} and F̃_L^{(ℓ)}, and S_3^{(ℓ)}. However, we find that it is in principle not possible to extract the full information on the BSM contributions to the WET Wilson coefficients for the electron mode when using only the single-differential CP-averaged rates. For the muon mode, part of that information enters only muon-mass suppressed, although it can be extracted without that suppression when considering a different presentation of the data. We further emphasize the existence of non-linear relations between the Wilson coefficients that allow testing for lepton-flavour violation (LFV) and right-handed neutrinos. The most precise lepton-flavour-specific analysis to date [3] presents the three CP-averaged single-differential angular distributions for electron and muon flavours separately. Since they depend on only four angular observables per lepton flavour, the chosen number of kinematic bins is much larger than necessary. We show that this redundant presentation accidentally hides tensions between SM predictions and data. We encounter an issue with the statistical correlation matrices that can only be clarified by the Belle collaboration. We describe our approach to the combination of statistical and systematic correlations for the electron and muon datasets and extract the non-redundant lepton-flavour-specific CP-averaged angular observables from the Belle data. For most of the angular observables we find good agreement with our up-to-date SM predictions; the exception is ∆A_FB, for which correlations among the form factors lead to a strong cancellation of uncertainties and the tension reaches the 4σ level. We perform numerous checks that this tension is not a result of our specific treatment of the data. In particular, even when allowing for arbitrary systematic correlations between the electron and muon data, we find that this tension does not drop below 3.6σ. This constitutes evidence for lepton-flavour universality violation. We continue by investigating in a qualitative manner the most economic BSM scenarios that can potentially explain the observed tensions. To this end, we assume lepton-flavour conservation, but allow for lepton-flavour non-universality in the WET description. We find that either right-handed vector operators or both pseudoscalar and tensor operators are necessary to accommodate the observed tension. If only right-handed vector operators are present, large imaginary parts in the Wilson coefficients are necessary; as a consequence, the CP-odd angular observables A_{8,9}^{(ℓ)} would be expected to deviate sizably from their SM predictions. A solution with purely real-valued Wilson coefficients appears only as a highly fine-tuned solution in a combined scenario with left- and right-handed vector operators. For the combination of pseudoscalar and tensor operators, we do not find the necessity of sizable imaginary parts; in this case, the corresponding S-type angular observables are expected to show significant differences relative to their SM predictions. None of these three scenarios coincides with the preferred explanation of the b → cτν anomaly.
Given the far-reaching consequences of our findings, we consider it essential that the Belle collaboration reviews, and if need be corrects, the published dataset from Ref. [3]. Without such scrutiny, we cannot determine the impact of the identified issues on results inferred from the data. We strongly recommend that future measurements separate the two light-lepton flavours in a transparent way. This is also important for the comparison with existing and upcoming LHCb analyses, which focus on the muon mode only.
12,090.4
2021-04-05T00:00:00.000
[ "Physics" ]
A Prediction Method for the Damping Effect of Ring Dampers Applied to Thin-Walled Gears Based on Energy Method In turbomachinery applications, thin-walled gears are cyclic symmetric structures and are often subject to dynamic meshing loads, which may result in high cycle fatigue (HCF) of the gear. To avoid HCF failure, ring dampers are designed for gears to increase damping and reduce resonance amplitude. Ring dampers are installed in a groove and are held in contact with the groove by normal pressure generated by interference or centrifugal force. Vibration energy is attenuated (converted to heat) by the frictional force on the contact interface when relative motion between the ring damper and the gear takes place. In this article, a numerical method for the prediction of friction damping in thin-walled gears with ring dampers is proposed. The nonlinear damping due to friction is expressed as an equivalent mechanical damping that depends on the vibration stress. This method avoids the forced response analysis of nonlinear structures, thereby significantly reducing the calculation time. The validity of this numerical method is examined by a comparison with literature data. The method is applied to a thin-walled gear with a ring damper, and the effect of design parameters on the friction damping is studied. It is shown that the rotating speed, the geometric size of the ring damper, and the friction coefficient significantly influence the damping performance. Introduction Vibrations of gears are mainly caused by dynamic meshing loads. Resonance of the gear may occur if the excitation frequency is close to the resonance frequencies of the gear within its range of operating speeds. To avoid fatigue failure owing to high resonance stresses, the ideal solution is to redesign the gear to move its natural frequencies away from any potential external excitation. This method is called detuning [1]. However, for a thin-walled gear, which is typically lightweight and operates at high rotating speed, detuning may not be feasible because each gear has multiple natural frequencies in coincidence with the mesh frequency within its operating range.
If detuning does not prevent resonance, then damping, as a passive control technique, is a feasible option to avoid high cycle fatigue failures. Friction dampers are an effective means of providing damping in turbomachinery [2,3]. Friction dampers are substructures that remain in contact with the main structure through elastic deformation or centrifugal force. The vibration energy of the system is attenuated (converted to heat) by friction on the interface via the relative motion between the damper and the primary structure [4]. Thin-walled structural components in aircraft gas turbine engines are easily excited to high vibration levels. To reduce the vibrational stress of turbomachinery blades caused by the forced response from aerodynamic excitation sources and by negative aerodynamic damping, i.e., flutter [5-8], many types of friction dampers have been studied and applied in actual structures. Among them, the under-platform damper has been studied extensively and in detail [9-14]. This type of damper is installed under the platform or between neighboring blades. However, gears do not have suitable positions for installing under-platform dampers; therefore, ring dampers are used as damping devices for gears. In contrast with under-platform dampers, limited work has been carried out to investigate ring dampers. Lopez [15,16] used ring dampers on train wheels to reduce the vibration emitted by freight traffic; the results revealed that increasing the mass of the ring damper is beneficial for vibration reduction. Laxalde [17] studied the damping strategy of ring dampers by using the dynamic Lagrangian frequency-time method to derive the forced response of blisks in the presence of ring dampers; the results showed that the size of the alternating stick-slip area determines the damping effectiveness of ring dampers. Laxalde [18] also proposed a nonlinear modal analysis method and applied it to analyze the effect of the design parameters of ring dampers. Zucca [19] studied the effect of the key parameters of ring dampers (for example, mass and friction coefficient) on the vibration amplitude, using contact elements to link the static and dynamic differential equations and calculating the forced response of the coupled system. Tang [20] proposed a novel reduced-order modeling method, based on Craig-Bampton component mode synthesis, to solve the forced responses of blisk-damper systems, and used it to study the effect of the geometric parameters of ring dampers on the blisk forced responses [21].
For ring dampers to be effective, they are typically located on the rim of the gear, where large vibration amplitudes occur, as shown in Figure 1. Otherwise, the energy dissipation due to friction is reduced, possibly to zero, and the ring damper becomes ineffective. Ring dampers are mostly effective only for the fundamental mode shapes of the gear [22]; these modes are characterized by a large amplitude at the rim. For thin-walled gears, friction damping is produced by the relative motion caused by the different extension deformations of the ring damper and the gear along the tangential direction of the contact surface [23]. Note that this circumferential deformation is caused by radial vibration. In other applications, for example train wheels, vibration energy is attenuated by the axial component of the vibration, and friction damping is produced by relative motion in the axial direction [15]. Zucca [22] analyzed the axial and circumferential relative motion of a bevel gear with a ring damper under different response conditions. The results show that although the radial and axial components of the vibration have the same order of magnitude, the ring damper works mainly in the circumferential direction, because the relative displacement along the circumferential direction is much larger than along the axial direction. No relative motion occurs in the radial direction, because the ring damper maintains contact with the primary structure by centrifugal force.
Although all of these papers show that the vibration amplitude decreases when ring dampers are used, little work has investigated the nonlinear friction damping of thin-walled gears with ring dampers. Most previous theoretical analyses have focused on the forced response of main structures in the presence of ring dampers; in contrast, the energy dissipated by ring dampers has seldom been studied. Niemotka [24] proposed a design method for split ring dampers to lower the vibration amplitude of annular air seals in gas turbine engines based on a quasi-static energy dissipation analysis. The primary objective of this work is to construct a numerical model to predict the damping of ring dampers in thin-walled gears. In the model, the nonlinear friction damping is expressed as an equivalent mechanical damping that depends on the vibration stress. Macro-slip is used in the friction model to calculate the energy dissipation. The validity of the proposed method is confirmed by a comparison with forced response analysis results. The secondary objective is to investigate the influence of rotating speed, temperature, ring damper parameters, and friction coefficient on the damping performance by means of the proposed method. The rest of this paper is arranged as follows. The theoretical background, including the equations of motion and modal analysis, is introduced in Section 2. The theoretical derivation of the equivalent damping ratio of the ring damper is given in Section 3. Method validation and parameter analysis are performed on a thin-walled gear in Section 4, followed by conclusions in Section 5.
The Equations of Motion The equations of motion in the time domain of the gear-ring damper system can be written as
M Ẍ + C Ẋ + K X + F_nl(X, Ẋ, t) = F(t),  (1)
where M, C, and K are the mass, damping, and stiffness matrices of the gear, respectively, and X is the vector of displacements. F(t) is the vector of the external excitation force. F_nl(X, Ẋ, t) is the vector of the nonlinear forces generated by the ring damper and depends on the vibration displacement and velocity of the system. F_nl(X, Ẋ, t) can be expressed through equivalent damping and stiffness matrices as [25]
F_nl(X, Ẋ, t) = C_eq Ẋ + K_eq X.  (2)
The equivalent damping matrix C_eq and the equivalent stiffness matrix K_eq depend on the motion of the gear. The displacement vector X is a function of time and can be expressed as a linear combination of the natural modes of the undamped system,
X = Φ q,  (3)
where Φ is the mass-normalized eigenvector matrix of the gear. Substituting Equation (3) into Equation (1) gives Equation (4); premultiplying Equation (4) throughout by Φᵀ yields
q̈ + (Z + Z_eq) q̇ + (Λ + Λ_eq) q = Φᵀ F(t),  (5)
where ΦᵀMΦ = I, with I the unity matrix, and Z = ΦᵀCΦ, Λ = ΦᵀKΦ, Z_eq = ΦᵀC_eqΦ, and Λ_eq = ΦᵀK_eqΦ are all diagonal. In the vicinity of the jth natural frequency, Equation (5) can be rewritten as
q̈_j + 2(ζ_j + ζ_j,eq) ω_j q̇_j + (k_j + k_j,eq) q_j = Φ_jᵀ F(t),  (7)
where ζ_j and ζ_j,eq are the modal damping ratio and the equivalent damping ratio caused by the ring damper for the jth mode, respectively; k_j and k_j,eq are the modal stiffness and equivalent stiffness for the jth mode, respectively; k_j = ω_j², where ω_j is the jth natural frequency of the undamped system. The n equations represented in Equation (7) are uncoupled from one another. Therefore, the forced response of the jth mode can be calculated if the relationship between the equivalent damping and stiffness and the response amplitude is pre-calculated. In general, the mass of the ring damper is much smaller than the mass of the main structure. Let the weight penalty be defined as
β = (mass of the ring damper)/(mass of the gear).  (8)
In this study, the weight penalty is less than 5%. Note that the magnitudes of M and K are much larger than the magnitude of F_nl(X, Ẋ, t); thus k_j,eq is much smaller than k_j. Generally, for the ring damper, k_j,eq is two orders of magnitude lower than k_j. In other words, the ring damper does not affect the shape of the vibration mode; rather, it affects only the vibration amplitude. Moreover, the influence of the damper on the resonance frequency of the primary structure can be neglected. However, the equivalent damping matrix is of the same order of magnitude as the damping matrix, or even larger, because the structural damping is usually small (for steel, the damping ratio is 1-5 × 10⁻⁴). The results of other scholars [2,3,20,25-27] also showed that the influence of the ring damper on the frequency is negligible: with or without ring dampers, the frequency variation is less than 1%. Thus, the damper ring reduces the resonant amplitude of the gear primarily by providing damping, rather than by changing the stiffness of the gear system. Modal Analysis Modal analysis was performed with the FEM software ANSYS 14.5. The gear and ring damper finite element models are shown in Figure 2. The gear is a cyclic symmetric structure comprising z fundamental sectors (Figure 2a). The ring damper is machined to be C-shaped for ease of installation; there is a split in the axial direction, as shown in Figure 2b.
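Equation (7) above reduces the problem to independent single-mode oscillators whose damping depends on the response amplitude. A minimal sketch of how such an amplitude-dependent equivalent damping ratio can be iterated to a self-consistent resonance amplitude; the functional form of zeta_eq below is a placeholder for illustration, not the ring-damper law derived later in the paper:

```python
# Sketch of the uncoupled modal equation (7): one mode with an amplitude-
# dependent equivalent damping ratio. zeta_eq(q) here is a placeholder law;
# it only illustrates the fixed-point iteration to a consistent amplitude.
import numpy as np

omega_j = 2 * np.pi * 3758.0   # natural frequency of the 3 ND mode [rad/s]
zeta_j = 2e-4                  # structural damping ratio (steel, 1-5e-4)
f_j = 1.0                      # modal force amplitude (arbitrary units)

def zeta_eq(q):                # placeholder: grows with q, then saturates
    return 1e-3 * q / (q + 1e-6)

q = f_j / (2 * zeta_j * omega_j**2)   # resonant amplitude without the damper
for _ in range(100):                  # iterate until damping and amplitude agree
    q = f_j / (2 * (zeta_j + zeta_eq(q)) * omega_j**2)
print(f"self-consistent resonance amplitude: {q:.3e}")
```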
Typical gear resonance failure in practice [1] is shown in Figure 3. The mode shapes that lead to gear failure (Figure 4) have the following features:
1. The modal amplitude has an integer number of harmonic distributions along the circumferential direction.
2. The nodal lines pass through the center of rotation, and the vibration amplitude on a nodal line is zero.
3. For thin-walled gears, the gear rim vibrates mainly in the radial direction.
Therefore, in this study, we focus on nodal diameter vibration. For N nodal diameters (ND), the radial displacement of the groove of the gear can be assumed as
w(θ) = B cos(Nθ),  (9)
where B is the maximum amplitude of the groove of the gear, N is the number of nodal diameters, and θ is the circumferential angle. Energy Dissipated by Frictional Force In this paper, the motion of the gear is assumed to consist of small-amplitude vibrations, i.e., only elastic deformation is considered; within the same mode shape, the vibration stress is proportional to the vibration amplitude. The following energy dissipation analysis is based on the method proposed by Alford [28-31] and Niemotka [24]. Generally, deflections of a structure at resonance are very small compared to its size; otherwise, the structure would suffer fatigue failure in a short time. For small deformations, the strain-curvature relation is
ε = κy,  (10)
where ε, κ, and y are the strain, the curvature, and the distance from the neutral line, respectively. This equation shows that the circumferential strains are proportional to the curvature and linearly related to the distance y from the neutral line. Here tensile strain is defined as positive and compressive strain as negative. The curvature can be expressed through the bending moment:
κ = M/(EI),  (11)
where M, E, and I are the bending moment, Young's modulus, and the moment of inertia of the ring damper, respectively. Equation (11) is known as the moment-curvature equation. When the radius of curvature of a ring is sufficiently large compared to its radial height, the relationship between the bending moment M and the radial displacement w can be expressed as [32]
M = (EI/R²)(w + d²w/dθ²).  (12)
By substituting Equation (9) into Equation (12), the following relationship is obtained:
M = (EI/R²)(1 − N²) B cos(Nθ).  (13)
At a distance y from the mean radius R, the bending strain is
ε = (y/R²)(1 − N²) B cos(Nθ).  (14)
For the gear, the strain on the contact surface is tensile on the groove interface; in contrast, it is compressive for the ring damper on the contact surface, and vice versa, as shown in Figure 5.
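As a quick consistency check of the moment-curvature chain reconstructed above (Equations (9), (12), and (13)), the thin-ring combination w + d²w/dθ² evaluated on the assumed mode shape indeed produces the (1 − N²) cos(Nθ) dependence:

```python
# Symbolic check: for w = B*cos(N*theta), the thin-ring bending kernel
# w + d^2w/dtheta^2 reduces to (1 - N^2)*B*cos(N*theta).
import sympy as sp

B, N, theta = sp.symbols("B N theta", positive=True)
w = B * sp.cos(N * theta)
kernel = sp.simplify(w + sp.diff(w, theta, 2))
print(kernel)   # B*(1 - N**2)*cos(N*theta), possibly with signs rearranged
```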
Evaluating Equation (14) at the two contact surfaces gives the bending strains of the gear and of the damper (Equations (15) and (16)), where c and R are the half radial thickness and the radius, and the subscripts g and d denote the gear and the damper, respectively. When there is no relative motion on the contact surface, the contact state is the stick state; in this state, the strain caused by friction in the ring damper is fixed by the bending-strain mismatch of the two surfaces (Equation (17)). The strain caused by friction in the ring damper can also be calculated by dividing the frictional force by the product of the damper cross-sectional area and its Young's modulus (Equation (18)); here F_f is defined as the frictional force per unit length and is a function of the circumferential angle θ. By substituting Equation (17) into Equation (18), F_f can be written in closed form (Equation (19)). When no slipping occurs on the entire contact surface, the maximum frictional force F_f,max appears at θ = π/2N. When the tangential force exceeds the maximum static friction, slipping occurs at θ < π/2N, and over the zone θ₀ < θ < π/2N the frictional force saturates at F_f,max = µP, where µ is the friction coefficient and P is the normal pressure on the contact surface; θ₀ denotes the angle at which slippage starts, called the critical slip angle (Equation (21)). When the normal pressure P is constant, the condition for no slipping anywhere on the contact surface over a vibration cycle is that the maximum vibration amplitude of the gear B is less than the critical vibration amplitude B_c. In the sliding zone, the frictional force equals the sliding frictional force µP. The strain caused by friction can then be written accordingly (Equation (22)), where R_d and A_d are the radius and the cross-sectional area of the ring damper, respectively. The relative displacement on the contact surface, s(θ), is obtained by integrating the strain, noting that the displacement is zero at the beginning of the sliding zone. The energy dissipated by the ring damper in a complete vibration cycle, ∆W, is obtained by integrating the product of the frictional force F_f and the relative displacement s(θ) over the slip region. Note that ∆W depends on the critical slip angle θ₀; according to Equation (21), θ₀ is a nonlinear function of B, and therefore ∆W is a function of B.
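The quantity ∆W is a one-dimensional integral over the slip zone and is straightforward to evaluate numerically. The sketch below uses a placeholder slip profile s(θ) and assumed counts of symmetric slip sectors and slip passes per cycle; it illustrates the bookkeeping only, not the paper's exact equations:

```python
# Sketch of the Delta W bookkeeping: integrate F_f * |s| over the slip zone
# theta0 < theta < pi/(2N). The slip profile s(theta) and the sector/pass
# counts are assumptions for illustration, not the paper's equations.
import numpy as np
from scipy.integrate import quad

N = 3                       # nodal diameters
mu, P = 0.3, 50.0           # friction coefficient, normal load [N/m]
R = 0.08                    # contact radius [m]
theta0 = 0.25 * np.pi / (2 * N)   # assumed critical slip angle

def s(theta):               # placeholder slip profile, zero at theta0
    return 1e-6 * (np.sin(N * theta) - np.sin(N * theta0))

dW_sector, _ = quad(lambda th: mu * P * abs(s(th)) * R, theta0, np.pi / (2 * N))
n_sectors, n_passes = 2 * N, 4    # assumed symmetric sectors, passes per cycle
delta_W = n_sectors * n_passes * dW_sector
print(f"energy dissipated per cycle: {delta_W:.3e} J")
```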
Equivalent Damping Ratio The loss coefficient η, or the damping ratio ζ, is commonly used to indicate the damping capacity of engineering structures. The loss coefficient η is defined as the ratio of the energy dissipated per radian to the total vibration energy [33]:
η = ∆W/(2πW).
For small damping, the total vibration energy of the system W is approximately equal to the maximum kinetic energy [33]. Thus, the total vibration energy for the jth normal mode can be expressed as
W = (1/2) ω_j² q_j².
Thus, the equivalent damping ratio ζ_j,eq in Equation (7) can be rewritten as
ζ_j,eq = ∆W/(4πW).
Application and Discussion To validate the method presented in this article, the numerical simulation is applied to a real thin-walled gear made of 4310 steel (Young's modulus E = 207 GPa, density ρ = 7.84 × 10³ kg/m³). The mass of the gear is 425 g. Figure 4 shows the mode shape of the model with 3 ND; the corresponding natural frequency is 3758 Hz. For reasons of confidentiality, some results are given in normalized form. Method Validation The influence of the normal pressure on the damping effect is compared with the results from the forced response analysis based on the harmonic balance method in [34], as shown in Figure 6. The results obtained by the two analysis methods are highly consistent. However, the method presented in this article does not need to solve the equations of motion in the frequency or time domain, so its calculation is faster. Since the numerical method presented here is independent of the excitation and the inherent mechanical damping, the excitation and mechanical damping are taken in accordance with [34]. Since the normal pressure is not directly given in [34], the normal pressure in this section is a relative value (defined as the normalized normal pressure P′). At P′ = 0, the frictional force at the contact surfaces is zero, the ring damper can slide freely relative to the gear, the energy dissipated by friction is zero, and the ring damper is ineffective. An increment of P′ causes the vibration to decrease down to a minimum value, corresponding to the optimum normalized normal pressure (about 0.45). A further increment of P′ causes the vibration to increase again. When P′ is large enough (about 1.65), the vibration amplitude increases back to its value at P′ = 0; in this case, no relative motion takes place on the contact surface of the two structures, and the ring damper ceases to be effective. It is worth mentioning that both analysis methods show that when the normal pressure is greater than about 3.7 times the optimal normal pressure, the ring damper ceases to be effective, which will be further explained in the following parameter sensitivity analyses.
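The conversion from dissipated energy to the equivalent damping ratio used in Equation (7) is a two-line computation; here is a sketch assuming mass-normalized modes, so that the maximum kinetic energy of the jth mode is ω_j²q_j²/2, together with the small-damping relation ζ = η/2:

```python
# Sketch: convert the energy dissipated per cycle into the equivalent damping
# ratio of Equation (7), assuming mass-normalized modes so that the maximum
# kinetic energy of the jth mode is W = omega_j^2 * q_j^2 / 2.
import numpy as np

def zeta_equivalent(delta_W, omega_j, q_j):
    W = 0.5 * omega_j**2 * q_j**2      # total vibration energy ~ max kinetic
    eta = delta_W / (2 * np.pi * W)    # loss coefficient, as in Ref. [33]
    return eta / 2.0                   # small-damping relation zeta = eta / 2

print(zeta_equivalent(delta_W=1e-4, omega_j=2 * np.pi * 3758.0, q_j=1e-5))
```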
For a given normal pressure (or rotating speed), when the vibration amplitude B is small, the ring damper is in full stick and there is no slip, as shown in Figure 7. When B increases to the critical vibration amplitude B_c, sliding appears at θ₀ = π/2N. As B increases further, the critical slip angle decreases and the slip area increases. When the vibration amplitude is large enough, the critical slip angle approaches 0 and the ring damper is approximately in full slip; in this case, the energy dissipated by the ring damper is approximately linear in the vibration amplitude, as shown in Figure 8. The normalized frictional force and the contact state between the gear and the ring damper along the circumferential direction are shown in Figure 9. In Figure 9a, when the vibration amplitude B is less than B_c, the contact state is stick, and no relative motion occurs on the contact surface. The frictional force is a function of θ, and the maximum frictional force appears at the position of the nodal line. When B = B_c, slip appears at the position of the nodal line, as shown in Figure 9b. When B > B_c, the slip region expands to both sides as B increases, as shown in Figure 9c. When B ≫ B_c, the slip region increases only slowly with the vibration amplitude; in this case, the contact state is approximately full slip, as shown in Figure 9d.
This observation is highly consistent with other studies [9,20]: when the excitation frequency is far from the natural frequency, the response amplitude is small and the contact state is stick; when the excitation frequency gradually approaches the natural frequency, relative slip appears on the contact surface and the slip region gradually increases. Effect of Rotating Speed or Normal Pressure The normal pressure on the contact surface depends on the rotating speed of the system and is proportional to the square of the rotating speed. Thus, only the effect of rotating speed is shown in this article. Figure 10 shows the effect of rotating speed on the equivalent damping performance. In Figure 10a, a decrement of the rotating speed allows the contact surface to slide more easily at a given resonance stress, resulting in a lower critical vibration stress. Also, the vibration stress corresponding to the maximum damping ratio decreases as the rotating speed decreases. In Figure 10b, for a given resonance stress, when the rotating speed is greater than about 1.9 times the optimal rotating speed, the contact surface is in full stick, where the optimum rotating speed is defined as the rotating speed corresponding to the maximum damping ratio. Effect of Temperature Figure 11 shows the effect of temperature on the damping performance. The effect of temperature is negligible, because the change in Young's modulus E is small over the operating temperature range, so the stiffness of the ring damper is almost unchanged. This also indicates that the ring damper can work at high temperatures, with good temperature adaptability.
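The existence of an optimum pressure, and the loss of effectiveness at both extremes, can be illustrated with the classic bilinear (elastic Coulomb) hysteresis, for which the energy dissipated per cycle is ∆W = 4F(X − F/k) up to the full-stick limit F = kX. This is a generic friction-damper toy model, not the ring-damper equations of this paper:

```python
# Toy illustration of the optimum-pressure behaviour with a bilinear
# (elastic Coulomb) hysteresis: per cycle DeltaW = 4*F*(X - F/k) while the
# slip force F = mu*P stays below k*X, and zero dissipation once fully stuck.
# Generic friction-damper model, not the ring-damper equations of this paper.
import numpy as np

k, X, mu = 1e6, 1e-4, 0.3                 # tangential stiffness, amplitude, mu
P = np.linspace(0.0, k * X / mu, 201)     # normal load up to full-stick limit
F = mu * P
dW = np.where(F < k * X, 4 * F * (X - F / k), 0.0)

i = int(np.argmax(dW))
print(f"optimum normal load P_opt = {P[i]:.1f}  (F_opt = kX/2 = {k * X / 2:.1f})")
print(f"full stick (damper ineffective) at P = {k * X / mu:.1f}")
```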
Effect of the Ring Damper Density The effect of the density is investigated through its effect on the normal pressure acting on the contact surface, where the normal direction is defined along the radial direction of the gear. The effect of the ring damper density on the damping performance is shown in Figure 12. The critical vibration stress increases with the ring damper density. If the density is too large, the ring damper ceases to be effective, because the contact surface tends to stick; in this case, no energy is dissipated by the frictional force.
Effect of the Friction Coefficient As shown in Figure 13a, the effect of the friction coefficient µ on the damping performance is similar to that of the density. Increasing µ increases the critical vibration stress and drives the contact surface towards full stick. In contrast, a decrease in µ drives the contact surface towards full slip; however, since F_f,max = µP, the maximum frictional force on the contact surface decreases with decreasing µ. For a given vibration stress, there is an optimum friction coefficient that maximizes the frictional damping; when the friction coefficient exceeds about 3.7 times its optimal value, the ring damper ceases to be effective again, as shown in Figure 13b. Effect of the Cross-Sectional Area of the Ring Damper The cross-sectional area is equal to the product of the radial thickness and the axial thickness of the ring damper. The effect of the radial thickness is shown in Figure 14: the critical vibration stress decreases and the peak damping ratio increases with increasing radial thickness, so increasing the radial thickness can significantly improve the damping performance. The effect of the axial thickness is shown in Figure 15: the critical vibration stress is not affected by the axial thickness, but the peak damping ratio increases with it. Under the premise that the mass of the ring damper is much smaller than the mass of the gear, the equivalent damping ratio is approximately linear in the axial thickness. Therefore, for a given cross-sectional area, a ring damper with a large ratio of radial thickness to axial thickness has a better damping effect. Conclusions In this article, a theoretical study of ring dampers for thin-walled gears has been presented, and a numerical method to predict the damping performance of ring dampers is proposed. In the proposed method, the energy dissipated by the ring damper is calculated through a quasi-static process and then expressed as an equivalent mechanical damping function that depends on the vibration stress. The validity of the model is confirmed by a comparison with forced response analysis results. Compared with forced response analysis, the proposed method requires only a single modal analysis of the primary structure and avoids computing the periodic response of the nonlinear structure. Therefore, minimal computation is required to obtain the damping performance, which greatly improves the efficiency of ring damper design.
The damping performance of the ring damper depends on the vibration amplitude of the gear B and on the damper parameters. When B is less than the critical vibration amplitude B_c, the ring damper is ineffective. When B is greater than B_c, the ring damper provides friction damping. As B increases, slip first appears at the position of the nodal line, and the slip region then expands to both sides. At approximately 3.7 times the critical vibration amplitude, the efficiency of the damper is theoretically maximized. For a given amplitude, there are optimum values of speed, density, and friction coefficient that maximize the damping. Excessively increasing or decreasing the rotating speed, the ring damper density, or the friction coefficient causes the contact surface to become full-stick or full-slip; in both cases, the ring damper provides no frictional damping. For a given ring damper mass, different damping performances may be observed if the density and the ratio of radial thickness to axial thickness differ. The proposed method works well when the mass of the ring damper is significantly less than the mass of the primary structure. The ring damper can then provide substantial damping while only weakly affecting the mode shapes of the system. This methodology is suitable for specific applications such as gears or blisks with ring dampers.
Frictional force is a function of θ, and the maximum frictional force appears at the position of the nodal line. When B = B_c, slip appears at the position of the nodal line, as shown in Figure 9b. When B > B_c, the slip region expands to both sides as B increases, as shown in Figure 9c. When B ≫ B_c, the slip region increases further, as shown in Figure 9d.

Figure 7. The critical angle versus the normalized amplitude.
Figure 8. Energy dissipated per cycle by the ring damper and maximum kinetic energy of the system versus normalized amplitude.
Figure 9. Normalized frictional force and contact state: (a) B < B_c; (b) B = B_c; (c) B > B_c; (d) B ≫ B_c.
Figure 10. Effect of the rotating speed: (a) friction damping at various rotating speeds; (b) friction damping versus normalized rotating speed (for a given vibration stress).
Figure 12. Effect of the ring damper density.
Figure 13. Effect of the friction coefficient: (a) friction damping at various friction coefficients; (b) friction damping versus normalized friction coefficient (for a given vibration stress).
Figure 14. Effect of the radial thickness of the ring damper.
Figure 15. Effect of the axial thickness of the ring damper.
10,474.8
2018-11-30T00:00:00.000
[ "Engineering" ]
Quantification of local dislocation density using 3D synchrotron monochromatic X-ray microdiffraction

ABSTRACT A novel approach evolved from the classical Wilkens' method has been developed to quantify the local dislocation density based on X-ray radial profiles obtained by 3D synchrotron monochromatic X-ray microdiffraction. A deformed Ni-based superalloy consisting of a γ matrix and γ′ precipitates has been employed as the model material. The quantitative results show that the local dislocation densities vary with depth along the incident X-ray beam in both phases and are consistently higher in the γ matrix than in the γ′ precipitates. The results from X-ray microdiffraction are in general agreement with transmission electron microscopy observations.

GRAPHICAL ABSTRACT

IMPACT STATEMENT A new approach based on 3D synchrotron microdiffraction, with broad application potential in heterogeneous materials, was developed and applied to quantify local dislocation densities in a fatigued two-phase Ni-based superalloy.

Introduction

Dislocations are present in all crystalline materials. A quantitative description of the dislocation content, including their type, density and spatial distribution, is essential for understanding their origin, dynamics and contribution to a material's physical and mechanical properties [1,2]. Transmission electron microscopy (TEM) is one of the most frequently employed characterization techniques for such studies [3]. However, when the dislocation density exceeds 10^14 m^−2, it becomes challenging to count the dislocations precisely. Features of their spatial arrangement, such as the mutual screening of their strain fields [4], are hard to obtain in conventional TEM and are only accessible in high-resolution mode [5]. X-ray (and neutron) diffraction is able to provide useful information about the characteristics of dislocations. Based on X-ray diffraction radial line profiles, Wilkens described the screening of the strain fields of dislocations by introducing two parameters, namely the effective outer cut-off radius R_e and the dislocation screening factor M (M = R_e√ρ) [6,7]. A small or large M value implies a strong or weak screening of the strain fields of dislocations, respectively. His method has been applied successfully to quantify the dislocation density in a number of material systems [8-10]. The development of synchrotron sources enables measuring diffraction profiles with high accuracy. With monochromatic synchrotron X-rays and the high-resolution reciprocal space mapping technique, for example, dislocation structures with dislocation-free regions separated by dislocation walls can be discerned in individual crystalline grains, and the local dislocation density within individual subgrains can be revealed [11,12]. However, the spatial distribution of dislocations and their densities in real space cannot be obtained using this technique; the effects of local microstructural heterogeneities on the plastic deformation behavior are thus difficult to study. Another synchrotron technique, 3D Laue microdiffraction, utilizes focused polychromatic X-rays and a differential aperture to resolve the diffraction signals from local micrometer-sized voxels [13,14]. Employing this technique, the dislocation content has been linked with the shape of the Laue peaks [15]. The method has been successfully used to identify the slip systems in deformed materials [16,17] and to quantify the local geometrically necessary dislocation density [18-20].
Most properties, such as strength, are controlled by all dislocations, including the redundant dislocations (which cause no net geometrical consequences). Determination of the total dislocation density by polychromatic X-rays is, however, difficult [21]. Alternatively, the 3D intensity distribution of a diffraction peak in reciprocal space from a certain volume within a specimen can be obtained by tuning the monochromatic X-ray energy using a specially designed monochromator [14,22]. From the 3D intensity distribution, radial line profiles can be obtained. So far, such radial line profiles have not been utilized thoroughly for investigations of the dislocation content. The present study aims to accomplish such a quantitative characterization by extending the classical Wilkens' method to radial profiles obtained by microdiffraction with tuned monochromatic X-ray energies.

Material and methods

A deformed, directionally solidified, two-phase Ni-based superalloy, DZ17G, was used as the model material to demonstrate the broad applicability of the method. During solidification, dendrites grow with one of their ⟨001⟩ directions along the temperature gradient. Perpendicular to the growth direction, the dendrites are ∼200 μm in width. Boundaries between two adjacent dendrites may be either low- or high-angle boundaries, depending on their mutual misorientation. The resulting grain width (defined by boundary misorientation angles of 15° and above) ranges from 200 μm to 2 mm. This alloy has a structure consisting of coherently oriented cuboidal precipitates of an ordered L1₂ γ′-Ni₃(Al,Ti) phase in a matrix of the face-centered cubic (FCC) γ phase [23,24]. The cuboidal γ′ precipitates have an average size of ∼360 nm, with their edges aligned with the three crystallographic ⟨001⟩ directions. The cuboids are distributed uniformly in the γ matrix, which appears as interconnected 3D channels separating the cuboids. The average width of the γ channels is ∼35 nm. The volume fraction of the γ′ precipitates is about 70%. An as-cast sample was vibration-fatigued using a D-300 vibrating machine of Suzhou SuShi Testing Group Co., Ltd. in China, following the standard of the Ministry of the Aviation Industry of the P.R.C. (HB5277-84). More specimen details are given in section A in the supplementary material. Synchrotron microdiffraction was conducted at beamline 34-ID-E of the Advanced Photon Source in the USA. A polychromatic X-ray beam was focused to a size of ∼0.3 μm using non-dispersive Kirkpatrick-Baez mirrors. The sample was mounted on a holder at an inclination of 45° to the incoming beam. Laue diffraction patterns were recorded using a panel detector mounted in 90° reflection geometry, 513.2 mm above the specimen. The detector position with respect to the incident beam was calibrated using a strain-free silicon single crystal. A sketch of the experimental set-up and an indexed Laue diffraction pattern from the investigated region can be found in section A in the supplementary material. A monochromatic beam was used for mapping the 3D intensity distribution around the 800 diffraction spot in reciprocal space by scanning the X-ray energy in steps of 5 eV from 21.348 keV to 21.688 keV. For each energy step, a Pt knife-edge scanning along the sample surface at a distance of 250 μm was used as a differential aperture to resolve the diffraction signal from different depths illuminated by the microbeam, with a resolution of 5 μm.
The energy and depth step sizes were chosen to balance the accuracy of the resulting radial line profile against measurement time. Four examples of depth-resolved diffraction patterns, obtained at different X-ray energies from a small volume along the beam at a depth of 5-10 μm below the sample surface, are shown in Figure 1(a). Based on the diffraction geometry and the energy of the X-rays, the diffraction vector, Q_i, for each pixel in the diffraction patterns was determined. The intensity distribution as a function of the length of the diffraction vector, Q = 4π sin θ/λ (where λ is the X-ray wavelength and θ is the Bragg angle), was determined for each energy and each depth. By collecting the intensity distributions of all individual energy steps for each voxel, an X-ray radial line profile from a local volume at each depth was determined.
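Constructing a radial profile in this way amounts to histogramming pixel intensities by |Q| across all energy steps. The sketch below is a minimal, schematic version of that data reduction; the inputs (images, two_theta_maps) are hypothetical placeholders for the calibrated detector data, not the beamline's actual data format:

```python
import numpy as np

# Hypothetical inputs: for each energy step, a 2D detector image and a map of
# the scattering angle 2*theta for every pixel (from the calibrated geometry).
energies_keV = np.arange(21.348, 21.688, 0.005)   # 5 eV steps, as in the text

def q_magnitude(two_theta_rad, e_keV):
    """|Q| = 4*pi*sin(theta)/lambda, with lambda = 12.398/E_keV in Angstrom."""
    lam = 12.398 / e_keV
    return 4.0 * np.pi * np.sin(two_theta_rad / 2.0) / lam

def radial_profile(images, two_theta_maps, energies, n_bins=200):
    """Histogram all pixel intensities over all energy steps into |Q| bins."""
    q_all, i_all = [], []
    for img, tth, e in zip(images, two_theta_maps, energies):
        q_all.append(q_magnitude(tth, e).ravel())
        i_all.append(img.ravel())
    q_all, i_all = np.concatenate(q_all), np.concatenate(i_all)
    edges = np.linspace(q_all.min(), q_all.max(), n_bins + 1)
    sums, _ = np.histogram(q_all, bins=edges, weights=i_all)
    counts, _ = np.histogram(q_all, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, sums / np.maximum(counts, 1)   # mean intensity per Q bin
```

In the real experiment this binning is repeated for every depth voxel resolved by the differential aperture, yielding one radial profile per 5 μm depth step.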
Results and discussion

An example of the radial line profile for a depth of 5-10 μm is shown as black circles in Figure 1(b). The radial profile is asymmetrical, with a longer tail at lower Q than at higher Q. This is mainly due to the presence of the γ and γ′ phases, which have slightly different lattice constants. To reveal the diffraction signal originating from each phase, the radial line profile is separated into two subprofiles using a mirroring method described in section B in the supplementary material. For this separation, the ratio of the integrated intensity between the γ and γ′ subprofiles at each depth is assumed equal to the ratio of the macroscopic volume fractions of the γ and γ′ phases, i.e. 30:70. Considering that the size of the probed volume is much larger than the sizes of the γ channels and γ′ cuboids, this is a reasonable assumption. Also, section E in the supplementary material shows that the influence of a different integrated intensity ratio is insignificant. An example of separated subprofiles is shown in Figure 1(b), where the orange and olive curves are from the γ and γ′ phases, respectively.

The full width at half maximum (FWHM) of the separated radial subprofiles, δ_E, for the local volumes at different depths is shown in Figure 2(a). δ_E varies with depth for both phases, and the values for the γ′ phase are in general smaller than those for the γ phase. The lattice constant a determined from the maximum intensity of each subprofile is smaller for the γ′ precipitates than for the γ phase (Figure 2(b)), leading to a negative γ′/γ lattice misfit, δ_γ′/γ = 2(a_γ′ − a_γ)/(a_γ′ + a_γ), in the range between −0.05% and −0.11% (Figure 2(c)).

Based on the separated line profiles, the dislocation screening factor M and the dislocation density ρ are determined for each phase using Wilkens' method [6,7]. A brief summary of Wilkens' method of describing the X-ray line profile of restrictedly random distributed dislocations (with equal amounts of dislocations having opposite signs of their Burgers vector) is given here; more details can be found in section C in the supplementary material. Series of radial profiles normalized by the dislocation density were determined numerically for screening parameters M* in the range 0.5-10, based on Wilkens' theory [6,7]. The asterisk indicates that the normalized profiles were calculated for restrictedly random distributions of solely screw dislocations. The series of calculated radial profiles was then compared to the experimental radial profile by comparing the ratios between the full widths at several different intensity levels and the FWHM. The calculated profile best matching the shape of the experimental one (in terms of these ratios) is identified, and its apparent dislocation screening factor M* and its FWHM, denoted δ_M*, are determined. The apparent dislocation density ρ* is then given by

ρ* = (δ_E/δ_M*)²  (1)

The actual dislocation density ρ, which also accounts for edge dislocations, can be determined using Equation (2):

ρ = ρ* (C*/C̄)  (2)

where C̄ is a geometrical contrast factor depending on the angles between Q, b (the Burgers vector) and l (the line vector) of the involved dislocations. The contrast factor is 0.1667 for screw dislocations (i.e. C*) and 0.1889 for edge dislocations [6]; the average value C̄ lies between these two values. The small difference between C* and C̄ results in only small differences between ρ* and ρ. For simplicity, and to avoid further assumptions about the involved dislocations, the apparent parameters ρ* and M* are used directly in this article. Last but not least, for the present analysis, peak broadening from size and instrumental effects has little influence on the dislocation densities (about 10%, see section D in the supplementary material) and is therefore omitted from the calculations.

The apparent parameters M* and ρ* for each phase and each depth are displayed in Figure 2(d,e), respectively. The apparent dislocation screening factors M* for the γ channels are in general smaller than those for the γ′ precipitates (Figure 2(d)), which implies that δ_M* is also smaller for the γ phase (see Fig. S3b). According to Equation (1), a smaller δ_M* and a larger δ_E lead to a larger ρ* for the γ phase (Figure 2(e)). ρ* in general decreases from the surface to the interior for the γ phase, while no clear pattern is seen for the γ′ phase. The volume-weighted average of ρ* over the two phases at different depths is shown in Figure 2(f). The average ρ* is generally higher in the region close to the surface (depths below 20 μm) than in the deeper region. The average apparent dislocation density over the entire characterized volume is ∼12.1 × 10^14 m^−2 in the γ phase and ∼5.7 × 10^14 m^−2 in the γ′ phase. Considering the volume ratio between the two phases, this result suggests that the numbers of dislocations within the two phases are similar. The apparent effective outer cut-off radius of dislocations, R_e* = M*/√ρ* (not shown here), varies from 12 nm to 78 nm with an average of 35 nm in the γ phase, and from 53 nm to 141 nm with an average of 76 nm in the γ′ phase.
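A compact numerical sketch of the density estimate implied by Equations (1) and (2) above. The widths are hypothetical stand-ins (the real δ_M* comes from the library of precomputed normalized profiles); only the contrast factors 0.1667 (screw) and 0.1889 (edge) are taken from the text, and the edge/screw mix used for C̄ is an assumption:

```python
C_SCREW, C_EDGE = 0.1667, 0.1889   # contrast factors quoted in the text

def apparent_density(delta_E, delta_M_star):
    """Equation (1): rho* = (delta_E / delta_M*)^2, where delta_M* is the FWHM
    of the best-matching profile normalized to unit dislocation density."""
    return (delta_E / delta_M_star) ** 2

def actual_density(rho_star, edge_fraction=0.5):
    """Equation (2): rho = rho* * C* / C_bar. The 50/50 edge/screw mix for
    C_bar is an assumption; the paper deliberately reports rho* instead."""
    c_bar = (1.0 - edge_fraction) * C_SCREW + edge_fraction * C_EDGE
    return rho_star * C_SCREW / c_bar

# Hypothetical widths whose ratio is 4.0e7 (units chosen so that the squared
# ratio comes out directly in m^-2; illustrative only):
rho_star = apparent_density(4.0e7, 1.0)
print(f"rho* = {rho_star:.2e} m^-2,  rho = {actual_density(rho_star):.2e} m^-2")
```

The small rescaling from ρ* to ρ (a few percent here) reflects the closeness of the two contrast factors, which is exactly why the paper works with the apparent parameters directly.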
To confirm the quantitative results, the microstructure of the sample was characterized using TEM (see section F in the supplementary material). An example TEM micrograph is shown in Figure 3. The dislocations are heterogeneously distributed between the two phases. Darker regions correspond to the majority of the γ channels, suggesting a high dislocation density there, while no dislocations are seen in some of the γ channels (see, e.g., the one marked by the orange arrows in Figure 3). The average dislocation density in the γ phase is determined to be about 8.5 × 10^14 m^−2 (for details see section F). A large number of dislocations are also apparent in some γ′ cuboids, while no dislocations are seen in others. The majority of these dislocations are inclined approximately 45° to the cuboid axes and appear in pairs. It remains uncertain whether these dislocations reside within the γ′ cuboids; they may actually lie in γ channels parallel to the TEM foil [25]. The dislocation density in the γ′ phase can therefore not be quantified, but it is obviously much smaller than in the γ phase. Nevertheless, the level of the average dislocation density in the γ phase determined from TEM is comparable with that determined from the radial line profiles, suggesting that the calculation based on the local microdiffraction data is reliable. The fact that dislocations in the γ phase are confined to the channels, and that the γ′ precipitates are isolated by the channels, is likely the reason for the small values of R_e* (between 12 and 141 nm) determined from the microdiffraction data.

To understand the variation seen in Figure 2, pseudo white-beam diffraction patterns were obtained by adding all diffraction patterns collected through the series of energies for each depth. The result is shown in Figure 4. Peak splitting is seen in several of the patterns, indicating the presence of a subgrain boundary in the corresponding local volume, i.e. between depths of 15 and 35 μm. Hence, the region closer to the surface belongs to a different subgrain than the region probed at larger depth. The higher average ρ* (Figure 2(f)) for the subgrain close to the surface than for that deeper in the volume suggests that the crystallographic orientation of the subgrains plays a role in their plastic deformation and dislocation accumulation. (The presence of a subgrain boundary at a depth of 20-25 μm is likely also the reason for the small lattice misfit seen in Figure 2(c).) The misorientation angle between the two subgrains is at most around 0.9° (seen between the depths of 15-20 μm and 20-25 μm). The (geometrically necessary) dislocation density estimated from the misorientation angle θ between subgrains using the Read-Shockley formula ρ = θ/(bx) [26] is ∼2.5 × 10^13 m^−2, assuming a spacing x of 5 μm. This density is about one order of magnitude less than the total dislocation density determined from the radial line profiles. The overwhelming contribution to the dislocation density in the sample thus comes from redundant dislocations (with opposing signs of their Burgers vectors), generated as a result of the fatigue deformation of the Ni-based superalloy [27,28].
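The Read-Shockley estimate above is a one-line calculation, reproduced in the sketch below. The Burgers vector magnitude is an assumption (a typical value for an FCC Ni-based alloy, ∼0.25 nm, is used; the paper does not state the value it assumed):

```python
import math

theta = math.radians(0.9)   # subgrain misorientation (deg -> rad)
b = 0.25e-9                 # Burgers vector in m (assumed, not from the paper)
x = 5e-6                    # dislocation spacing parameter in m (from the text)

rho_gnd = theta / (b * x)   # Read-Shockley: rho = theta / (b * x)
print(f"GND density ~ {rho_gnd:.1e} m^-2")
# ~1.3e13 m^-2 with these inputs: the same order of magnitude as the paper's
# ~2.5e13 m^-2; the residual factor reflects the assumed Burgers vector.
```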
Conclusions

In the present study, intragranular dislocation densities in a vibration-fatigued Ni-based superalloy, DZ17G, have been quantified from 3D synchrotron monochromatic microdiffraction data using the classical Wilkens' radial line profile method. In this manner, the redundant dislocation density is resolved locally on a length scale of 5 μm, which has not been accessible before by X-ray diffraction: conventional line profile analysis cannot capture heterogeneities on micrometer length scales, and local polychromatic X-ray investigations reveal only the geometrically necessary dislocation content. Our results show that a large amount of redundant dislocations (on the order of 10^14 m^−2) is generated during the vibration fatigue test. The dislocation densities resolved in the γ channels of the superalloy are about twice those in the cuboidal γ′ precipitates. Local variations in dislocation density are seen for the γ and γ′ phases at different depths along the incident X-ray beam. Significant differences are detected between two subgrains with a misorientation of ∼1°. With the continued upgrades and development of synchrotrons, achieving orders of magnitude higher brilliance and smaller beam sizes, the powerful approach introduced in this article will allow resolving intragranular dislocation structures with a spatial resolution better than 100 nm in a broad range of materials.
3,953.6
2021-04-03T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Insights on Structure and Function of a Late Embryogenesis Abundant Protein from Amaranthus cruentus: An Intrinsically Disordered Protein Involved in Protection against Desiccation, Oxidant Conditions, and Osmotic Stress

Late embryogenesis abundant (LEA) proteins are part of a large protein family that protects other proteins from aggregation due to desiccation or osmotic stresses. Recently, the Amaranthus cruentus seed proteome was characterized by 2D-PAGE, and one highly accumulated protein spot was identified as a LEA protein and named AcLEA. In this work, the AcLEA cDNA was cloned into an expression vector and the recombinant protein was purified and characterized. AcLEA encodes a 172-amino-acid polypeptide with a predicted molecular mass of 18.34 kDa and an estimated pI of 8.58. Phylogenetic analysis revealed that AcLEA is evolutionarily close to the LEA3 group. Structural characteristics were revealed by nuclear magnetic resonance and circular dichroism methods. We have shown that recombinant AcLEA is an intrinsically disordered protein in solution, even at high salinity and osmotic pressure, but it has a strong tendency to acquire secondary structure, mainly folded as α-helix, when an inductive additive is present. Recombinant AcLEA function was evaluated using Escherichia coli as an in vivo model, showing an important protective role against desiccation, oxidant conditions, and osmotic stress. The AcLEA recombinant protein was localized in the cytoplasm of Nicotiana benthamiana protoplasts, and orthologs were detected in seeds of wild and domesticated amaranth species. Interestingly, AcLEA was detected in leaves, stems, and roots only in plants subjected to salt stress. This fact could indicate an important protective role of AcLEA during stress in all the amaranth species studied.
INTRODUCTION

Seeds can withstand the loss of cellular water during the maturation phase of their development through the accumulation of high levels of ubiquitous proteins named late embryogenesis abundant (LEA) proteins (Ali-Benali et al., 2005; Dalal et al., 2009; Liu et al., 2013; Avelange-Macherel et al., 2015). LEA proteins were originally discovered in cotton (Gossypium hirsutum) seeds (Dure, 1989), but their accumulation is not only related to the development of desiccation tolerance in orthodox (desiccation-tolerant) seeds. LEA proteins are also induced upon water-related stress in plant vegetative tissues and in other anhydrobiotic organisms such as eubacteria, rotifers, nematodes, tardigrades, and arthropods (Ingram and Bartels, 1996; Browne et al., 2002; Hundertmark and Hincha, 2008; Campos et al., 2013; Hatanaka et al., 2014; van Leeuwen et al., 2016). In some microorganisms, LEA proteins are reported in response to water limitation, which suggests that they have an important role in desiccation tolerance (Tunnacliffe and Wise, 2007; Tunnacliffe et al., 2010; Hand et al., 2011). In spite of their widely recognized importance for desiccation tolerance, the molecular function of LEA proteins is only starting to emerge, with a variety of functions in agreement with their diversity (Battaglia and Covarrubias, 2013). The distinctive features of LEA proteins are their high hydrophilicity, due to a high percentage of charged amino acids as well as alanine and serine/threonine, and the absence or very low content of the non-polar amino acids tryptophan and cysteine. The presence of repeated motifs, which tend to form secondary structures, has been detected in LEA proteins (Dure, 1989; Garay-Arroyo et al., 2000; Tunnacliffe and Wise, 2007). Although LEA proteins are intrinsically disordered proteins (IDPs) in aqueous solution (Wolkers et al., 2001; Goyal et al., 2003; Boucher et al., 2010; Tompa and Kovacs, 2010; Popova et al., 2011), they may acquire some structure, folding into α-helical conformations, during partial or complete dehydration (Shih et al., 2004; Tolleter et al., 2007; Hincha and Thalhammer, 2012). Several hundred LEA protein sequences have been gathered in a dedicated database (http://forge.info.univ-angers.fr/~gh/Leadb), and bioinformatics analyses have shown that each LEA class can be clearly characterized by a unique set of physico-chemical properties. This has led to the classification of LEA proteins into 12 non-overlapping classes with distinct properties (Battaglia et al., 2008; Hunault and Jaspard, 2010; Jaspard et al., 2012). Although quite a few LEAs have been characterized, the functions of most members of the LEA family remain unknown (Cao and Li, 2015). Transgenic Arabidopsis thaliana plants overexpressing the Nicotiana tabacum NtLEA7-3 gene are much more resistant to cold, drought, and salt stresses (Gai et al., 2011). Tomato LEA25 increases salt and chilling stress tolerance when overexpressed in yeast (Imai et al., 1996). Wheat and rice overexpressing the HVA1 gene (encoding an LEA protein from barley) are more tolerant to drought and salt stress (Xu et al., 1996; Sivamani et al., 2000). Olvera-Carrillo et al. (2010) reported that in A. thaliana the accumulation of the AtLEA4 protein leads to a drought-tolerant phenotype. Overexpression of BnLEA4-1 from Brassica napus in Escherichia coli can enhance bacterial cellular tolerance to temperature and salt stresses (Dalal et al., 2009).
On the other hand, LEA proteins have a broad subcellular distribution; they are present in the cytosol, mitochondria, chloroplasts, endoplasmic reticulum, and nucleus (Candat et al., 2014), and their specific modes of action could be related to their intracellular location. The biological activity of these proteins seems to be associated with the stabilization of membranes during cell drying (Tolleter et al., 2010) and with assisting protein transport during stress conditions (Chakrabortee et al., 2010).

Amaranth, a member of the Amaranthaceae family, is a plant that has been cultivated and used since ancient times by Mexican and Central American civilizations. In recent decades, the nutritional role of amaranth seeds from different species has been revalued, particularly for A. hypochondriacus and A. cruentus, not only because of their high protein content and their contribution of essential amino acids such as lysine and methionine (compared to other grains), but also for their antioxidant compounds (Becker et al., 1981; Rastogi and Shukla, 2013) and bioactive peptides (Silva-Sánchez et al., 2008). Current interest in amaranth plants is also related to their extraordinary adaptability to adverse weather conditions (Brenner et al., 2000). Amaranth is resistant to several types of stresses, such as pests (Valdes-Rodríguez et al., 2007), heat (Maughan et al., 2009), drought (Huerta-Ocampo et al., 2011), and salinity (Aguilar-Hernández et al., 2011; Huerta-Ocampo et al., 2014). The recent report of the Amaranthus cruentus seed proteome by 2D-PAGE revealed the over-accumulation of one spot identified as a LEA protein (Maldonado-Cervantes et al., 2014). In the present study, we have cloned the corresponding LEA cDNA from A. cruentus (AcLEA, GenBank accession no. KX852451), and the recombinant protein was expressed in E. coli. Nuclear magnetic resonance (NMR) and circular dichroism (CD) were used as tools to study the structural characteristics of this particular AcLEA protein. Its functional activity was evaluated in vivo using E. coli as a model. According to its amino acid sequence, the AcLEA protein belongs to Group 3; its hydrophilic nature and spectroscopic characteristics are consistent with IDP molecules, but it exhibits a high content of α-helix in the presence of trifluoroethanol (TFE). Overexpression of AcLEA in E. coli conferred resistance to desiccation, osmotic and oxidative stress on the bacterial cells. When accumulated in a heterologous system (Nicotiana benthamiana protoplasts), the amaranth protein was found to be distributed in the cytoplasm of the protoplasts. Western blot analyses disclosed that the AcLEA protein accumulates in seeds of wild and domesticated amaranth species. Accumulation of AcLEA in leaves, stems, and roots was observed only in plants subjected to salinity stress.

RNA Extraction and Cloning of the cDNA Encoding AcLEA

Immature seeds (15 days after anthesis) of Amaranthus cruentus were used to extract total RNA with TRIzol Reagent (Invitrogen, Carlsbad, CA, USA), and cDNA was synthesized as previously reported (Maldonado-Cervantes et al., 2014). The AcLEA cDNA was amplified using specific primers containing NdeI (5′-CATATGGCATCACATGGTCAGAGT-3′) and XhoI (5′-CTCGAGCTAGGGCCTAGTAGTCTTAATTGGATC-3′) restriction sites. cDNA amplification was performed using Platinum Taq DNA polymerase (Invitrogen) under standard reaction conditions. The amplified PCR product was cloned into the plasmid pGEM-T-Easy (Promega Corp., Madison, WI, USA).
The AcLEA cDNA was excised from pGEM using the NdeI and XhoI restriction enzymes (New England Biolabs, Ipswich, MA, USA). Digested fragments were purified and subcloned into the pET28 expression vector digested with NdeI-XhoI (Novagen-Merck, Darmstadt, Germany), which encodes an N-terminal His-tag. The vector was modified to contain the cleavage recognition site LeuGluValLeuPheGln/GlyPro, specific for the human rhinovirus 3C protease (PreScission Protease, PSP), yielding the pET28mod vector. The resulting plasmid, pET28mod-AcLEA, was sequenced in both directions to confirm the identity of the AcLEA cDNA. Alternatively, the AcLEA cDNA was flanked by PCR with attB1 and attB2 recombination sites for generation of an entry clone using the Gateway entry vector pDONR-Zeo (Karimi et al., 2007), which was later used to generate the expression vector pEarlyGate103-AcLEA.

Physicochemical Properties and Phylogenetic Analyses

Protein hydrophilicity analysis was performed to obtain hydropathy plots with the Kyte and Doolittle (1982) values using the Expasy ProtScale tool (Gasteiger et al., 2005). The grand average of hydropathicity (GRAVY) and the instability index were calculated using the ProtParam software. Sequence similarities were determined using the BLAST program and the GenBank database on the NCBI web server. MUSCLE 3.8.31 (Edgar, 2004) was used to perform multiple sequence alignments of full-length protein sequences. Phylogenetic analyses of the LEA proteins based on amino acid sequences were carried out using the neighbor-joining method (Saitou and Nei, 1987). AcLEA protein classification was done by comparing its sequence to those available in the LEA Proteins Database (Hunault and Jaspard, 2010) and the Pfam server (Finn et al., 2015).
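GRAVY is simply the mean Kyte-Doolittle hydropathy over all residues, so the value reported below for AcLEA (−1.23) can be checked in a few lines. A minimal sketch; the peptide shown is a made-up placeholder, not the AcLEA sequence:

```python
# Kyte-Doolittle hydropathy values (Kyte & Doolittle, 1982)
KD = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def gravy(seq: str) -> float:
    """Grand average of hydropathicity: mean KD value over the sequence."""
    return sum(KD[aa] for aa in seq) / len(seq)

# Placeholder peptide only; substituting the real AcLEA sequence should
# reproduce the reported GRAVY of -1.23.
print(round(gravy("MASHGQSTAKEKAAEAKDKTSE"), 2))
```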
Expression and Purification of the Recombinant AcLEA Protein

Recombinant AcLEA protein (rAcLEA) was overproduced in BL21 (DE3) E. coli cells (Novagen) transformed with the expression vector pET28mod-AcLEA. LB medium supplemented with kanamycin was used to grow cells at 37 °C. Overnight cultures were diluted 100-fold in fresh LB medium, and incubation was continued until the optical density (OD600) reached 0.5-0.6. At this point, 0.1 mM isopropyl thio-β-D-galactopyranoside (IPTG, Sigma-Aldrich, St. Louis, MO, USA) was added to induce protein expression. After a further 4 h of incubation at 28 °C, cells were harvested by centrifugation at 3,000 × g for 15 min at 4 °C. For structural studies, cell pellets were resuspended in native buffer (150 mM NaCl, 50 mM Tris-HCl, pH 8), and for antibody production, cell pellets were resuspended in denaturing lysis buffer (500 mM NaCl, 6 M guanidine hydrochloride, 20 mM sodium phosphate, pH 7.8). Resuspended pellets were sonicated for 45 s (Misonix Sonicator 3000, Cole-Parmer, Vernon Hills, IL, USA) in an ice bath. Antibodies were obtained as described in the Supplementary Information. The soluble fraction was separated by centrifugation at 20,000 × g for 30 min at 4 °C. Recombinant six-His-tagged AcLEA (rHis-AcLEA, 20.7 kDa) was purified by metal-chelate affinity chromatography (IMAC) using the Ni-NTA agarose purification system (Novex, Thermo Fisher Scientific Inc., Waltham, MA, USA) and eluted with five volumes of native (150 mM NaCl, 50 mM Tris-HCl, pH 8.0) or denaturing elution buffer (500 mM NaCl, 8 M urea, 20 mM sodium phosphate, pH 4.0).

In both native and denaturing purifications, buffer exchange to 150 mM NaCl, 50 mM Tris-HCl, pH 8.0, was performed by dialysis using a 5 kDa cut-off membrane (Merck Millipore, Billerica, MA, USA), and cleavage of the His-tag was carried out overnight at 4 °C. After cleavage, a second IMAC purification step was carried out under native conditions (150 mM NaCl, 50 mM Tris-HCl, pH 8.0) in order to obtain native rAcLEA. Since rAcLEA was found to be weakly bound to the resin, a native buffer containing 20 mM imidazole was used for protein elution. Finally, PD10 desalting columns (GE Healthcare, Piscataway, NJ, USA) were used to remove buffer components. For NMR spectroscopic and CD analyses, an additional purification step was performed by FPLC using a Sephacryl S-100 column (GE Healthcare) with a mobile phase of 10 mM sodium phosphate, pH 7.0 (Sigma-Aldrich). All rAcLEA purification steps were followed on 12% SDS-PAGE gels stained with Coomassie Blue. Recombinant proteins, excised from the gel and/or in solution after chromatographic purification, were reduced with 10 mM DTT, alkylated with 55 mM iodoacetamide, and finally digested with trypsin (Promega, Madison, WI, USA) in an overnight reaction at 37 °C. MS was carried out with a SYNAPT-HDMS instrument (Waters Corp.) coupled to a nano-ACQUITY UPLC system, as described in the Supplementary Information.

NMR and CD Analyses

Lyophilized rAcLEA purified under native conditions was dissolved in H₂O/D₂O (95:5) to prepare a solution at a final concentration of 1 mM and transferred to a 3 mm tube. For ¹H-NMR, water signal suppression was performed using the double-pulsed field gradient spin echo (DPFGSE) sequence. Fourier transformation was applied to the FID data, and the data were analyzed with the NUTS Data Processing Software (Acorn NMR Inc., Livermore, CA, USA). Proton nuclear magnetic resonance (¹H-NMR) spectra were acquired on a 500 MHz Varian Innova spectrometer (Varian, Palo Alto, CA, USA) at 298 K. Circular dichroism spectra were recorded on a Chirascan Circular Dichroism Spectrometer (Applied Photophysics, Leatherhead, UK) equipped with a Peltier cell holder for temperature control. A stock solution of rAcLEA protein (0.4 mg/ml) was prepared in 10 mM phosphate buffer, pH 8.0. Far-UV CD spectra were obtained using a quartz cell with a light path of 1 mm in the 200-260 nm range, with a bandwidth of 1.0 nm and a digital resolution of 0.5 s per point. Temperature-induced conformational changes were simultaneously recorded at 210, 222 and 230 nm using a heating rate of 1 °C/min over the 20 to 70 °C range. After the heating ramp, the sample was cooled to 20 °C and a far-UV CD spectrum was taken to determine the reversibility of the conformational changes. CD spectra in the near-UV region, covering the 250-350 nm range, were recorded using a quartz cell with a 10 mm path length, a bandwidth of 2.0 nm, and 1.0 s per point. Molar ellipticity values, [θ], were calculated from the measured ellipticity using the equation

[θ] = (100 · θ · M)/(c · l),

where θ is the measured ellipticity in degrees, M is the protein molecular weight, c is the protein concentration in mg/ml, and l is the path length in cm. Estimation of secondary structure was performed with the CDNN algorithm (Bohm et al., 1992) using spectral data from 200 to 260 nm. Five spectra were recorded for each experimental condition.
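A small sketch of the molar ellipticity conversion above, using the protein parameters stated in the text (M = 18.34 kDa, c = 0.4 mg/ml, far-UV path length 1 mm); the ellipticity reading itself is a made-up placeholder:

```python
def molar_ellipticity(theta_deg: float, mw: float, c_mg_ml: float, path_cm: float) -> float:
    """[theta] = 100 * theta * M / (c * l), in deg*cm^2/dmol,
    with theta in degrees, c in mg/ml and l in cm."""
    return 100.0 * theta_deg * mw / (c_mg_ml * path_cm)

# Hypothetical reading of -0.010 deg at 222 nm for a 1 mm (0.1 cm) cell:
print(molar_ellipticity(-0.010, 18340.0, 0.4, 0.1))   # about -4.6e5 deg*cm^2/dmol
```

As a sanity check, dividing this molar value by the 172 residues of AcLEA gives a mean-residue ellipticity of roughly −2,700 deg·cm²/dmol, a magnitude typical of far-UV CD measurements.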
Assay of Protective Role of AcLEA in E. coli

Transformed E. coli BL21 cells carrying the plasmid pET28mod-AcLEA or the empty plasmid pET28mod (control) were grown overnight at 37 °C in LB liquid medium supplemented with 37 µg/ml kanamycin. For both bacterial cultures, an aliquot was diluted 100-fold in fresh liquid LB with antibiotic and allowed to grow for 2-3 h at 37 °C. When the OD600 reached 0.5-0.6, IPTG was added to a final concentration of 0.1 mM and the cultures were kept at 28 °C for 2 h to induce the rAcLEA protein. At this point, the stress treatments were applied. To test the ability of the AcLEA protein to prevent desiccation stress, E. coli cells were dried at 40 °C for 2 h in flat plates under a laminar flow hood. After drying, cells were rehydrated in 200 µl of liquid LB medium. Resuspended cells were spread on Petri dishes containing LB, antibiotic, and IPTG, and incubated overnight at 37 °C. The number of colony-forming units (CFU) was used to compare viability (Dalal et al., 2009; He et al., 2012). Salinity stress was assessed with different concentrations of NaCl (0.4, 0.6, and 0.8 M). Sorbitol (0.6, 0.8, and 1.0 M) and PEG 4000 (5, 10, and 20% w/v) were used to decrease the osmotic potential and mimic dehydration, and H₂O₂ (0.1, 0.5, and 1.0 M) was tested to promote oxidant conditions. In all experiments, absorbance at 600 nm (OD600) was used to measure bacterial growth in liquid media (Liu and Zheng, 2005; Wu et al., 2014; Hu et al., 2016). All experiments were carried out with three biological replicates, and each replicate was repeated at least three times.

Localization In vivo Using Nicotiana benthamiana Protoplasts

The expression vector pEarlyGate103-AcLEA was transferred to Agrobacterium tumefaciens C58C1 by electroporation in 0.1 cm gap cuvettes. A single colony was used to inoculate LB broth supplemented with ampicillin (100 µg/ml), rifampicin (100 µg/ml), and kanamycin (50 µg/ml). The inoculated broth was cultivated at 30 °C overnight. To ensure high expression levels of the recombinant protein, A. tumefaciens cells containing the expression clone were used along with the helper strain p19 (Voinnet et al., 2003). A. tumefaciens cells were harvested by centrifugation at 1,400 × g at room temperature and resuspended in an aqueous solution of 10 mM MgCl₂. Dilutions were made to adjust the infiltration solution to a final OD600 of 1.0 for both the p19 helper strain and the experimental strain (carrying the pEarlyGate103-AcLEA expression vector). Acetosyringone (50 µg/ml) was then added to the infiltration solution, which was incubated at room temperature for 3 h. This bacterial solution was used to infiltrate N. benthamiana leaves, and the treated plants were incubated for 96 h under regular growth conditions (26 °C and a 16/8 h light/dark cycle) prior to protein extraction or protoplast preparation. Total protein was extracted from infiltrated leaves by a 10 min incubation in extraction buffer (70 mM Tris-HCl, pH 8.0, 1 mM MgCl₂, 25 mM KCl, 5 mM NaEDTA·2H₂O, 0.25 mM sucrose, 7.5 mM DTT, 0.1% v/v Triton X-100) followed by centrifugation at 16,000 × g for 10 min at 4 °C. The protein extracts were analyzed by Western blot using anti-GFP (Invitrogen) and anti-AcLEA specific antibodies. Protoplasts were released from the leaf tissue by incubation for 3 h, under constant agitation (1,400 × g), in an enzyme solution composed of 0.5 M mannitol, 1% w/v cellulase R10 (KARLAN Research Products Corp., Cottonwood, AZ, USA) and 0.05% w/v macerozyme R10 (KARLAN Research).
Confocal microscopy images were obtained with an Olympus FV1000 microscope (Olympus, Center Valley, PA, USA) using excitation lasers of 633 nm for chlorophyll and 514 nm for GFP.

Detection of AcLEA in Seeds, Leaves, Stems, and Roots from Different Amaranth Species

Proteins from seeds, leaves, stems, and roots were extracted from wild (A. hybridus and A. powellii) and domesticated (A. cruentus and A. hypochondriacus) amaranth species. Seeds were milled under liquid nitrogen to obtain a fine powder, and proteins were extracted according to their solubility properties. Water-soluble proteins were extracted with a buffer containing 10% glycerol, 0.1 M Tris-HCl, pH 8.0, at a ratio of 1:20 (flour/buffer). The suspension was vortexed for 15 min at 4 °C and centrifuged at 17,000 × g at 4 °C; the supernatant was recovered as the hydrophilic fraction. The resulting pellet was resuspended in 7 M urea, 2 M thiourea, 2% CHAPS (w/v), 2% Triton X-100, 0.05 M DTT and mixed as indicated above. The solubilized proteins (hydrophobic fraction) were recovered by centrifugation for 15 min at 17,000 × g at 4 °C. Proteins from leaves, stems, and roots were extracted from plants grown under normal and salt-stress conditions. Seeds were germinated and seedlings were transferred to plastic pots with soil (Peat Moss Tourbe, Premier Horticulture, Rivière-du-Loup, QC, Canada). Amaranth seedlings were divided into two groups, the control and the salt-stressed groups, which were watered with water or with water containing 150 mM NaCl (EC 16.9-17.2 dS/m), respectively. Samples from control and salt-stressed plants were collected the day after salt-stress imposition. Tissues were collected from three biological replicates containing three plants each. Samples were immediately frozen in liquid nitrogen and milled to a fine powder as reported before. The powder was suspended in extraction buffer (1:10 w/v) containing 7 M urea, 2 M thiourea, 2% Triton X-100, and 0.1 M 2-mercaptoethanol. The mixture was sonicated (GE-505 Ultrasonic Processor, Sonics & Materials, Inc., Newtown, CT, USA) for 15 min at 4 °C and centrifuged as above. Proteins (10 µg) were separated on a 12% SDS-PAGE gel, resolved at 75/150 V for 90 min, and then transferred to a PVDF membrane using a Trans-Blot SD semi-dry transfer cell (Bio-Rad, Hercules, CA, USA) for 45 min at 15 V in transfer buffer (25 mM Tris, 192 mM glycine). Membranes were blocked for 2 h with 5% defatted milk in TBS containing 0.1% Tween-20 (TBST), washed three times for 10 min with TBST, and incubated for 2 h with anti-AcLEA rabbit polyclonal IgG at a 1:1,000 dilution in TBST. Membranes were washed three times for 10 min each with TBST and incubated for 90 min with anti-rabbit IgG-alkaline phosphatase antibody (Sigma-Aldrich) at a 1:10,000 dilution in TBST. Membranes were then washed three times for 10 min with TBST. Western blots were revealed with alkaline phosphatase buffer (0.1 M Tris, pH 9.5, 0.1 M NaCl, 5 mM MgCl₂) containing 0.5 mM BCIP and 0.4 mM NBT for 10-20 min at 37 °C.

AcLEA Cloning and Recombinant Protein Expression in the E. coli System

Bioinformatics analyses, using LC-MS/MS information (Maldonado-Cervantes et al., 2014) and the A. hypochondriacus transcriptome (Delano-Frier et al., 2011), allowed us to design specific primers for cloning the full-length AcLEA cDNA. The amplified AcLEA fragment was ligated into the pET28mod vector (Supplementary Figures S1A,B).
The AcLEA cDNA contains an ORF of 516 bp that encodes a 172-amino-acid protein with a calculated molecular mass of 18.34 kDa and a theoretical pI of 8.58, values that correspond to the experimental data previously reported (Maldonado-Cervantes et al., 2014). The sequence (Supplementary Figure S2A) was deposited in GenBank under accession code KX852451. In order to identify proteins similar to AcLEA and consensus sequences, a search was performed using the protein BLAST algorithm, and multiple alignment was carried out with the sequences of the highest-similarity matches (Figure 1A). A search of related sequences in the LEAPdb database (Hunault and Jaspard, 2010) confirmed that all these sequences are grouped in the LEA_4 Pfam family (PF02987). According to the classification proposed by Battaglia et al. (2008), this family includes LEA proteins from Group 3, such as the cotton protein D-7 (Dure, 1993). Group 3 LEA proteins are characterized by a repetitive motif of 11 amino acids, TAQAAKDKTSE (motif 3), in the middle of the sequence, which is preceded or followed by ATEAAKQKASE (motif 5); the motif SYKAGETKGRKT (motif 4) is usually conserved in the N-terminal region, whereas GGVLQQTGEQV (motif 1) and AADAVKHTLGM (motif 2) are frequently observed at the C-terminus. In many proteins, motifs 3 and 5 are present more than once. Motifs 1 to 5 were detected in AcLEA, with the motif arrangement M4-M5-M3-M1-M2 and only one complete motif of each type (Figure 1A). By comparison, the motif arrangement for LEA group 6 is M3-M1-M2-M4 (Rivera-Najera et al., 2014). AcLEA shares a similar amino acid composition with other LEA proteins, being rich in alanine (19.2%), lysine (14.0%), glutamic acid (9.9%), glutamine (9.3%), threonine (9.3%), and glycine (8.1%) (Battaglia et al., 2008; Denekamp et al., 2010). The total number of negatively charged residues (Asp and Glu) is 27, while that of positively charged residues (Arg and Lys) is 29. Another characteristic of AcLEA is the lack of Trp and Cys residues. AcLEA has an aliphatic index of 29.36 and a grand average of hydropathicity (GRAVY) of −1.23, indicating a high abundance of hydrophilic amino acids. Based on the AcLEA amino acid sequence, the hydropathic profile was calculated using the Kyte and Doolittle (1982) values; the results clearly exhibit the hydrophilic character of this protein (Supplementary Figure S2B), and AcLEA was predicted to have a disordered structure (Supplementary Figure S2C). The term hydrophilins was coined for the group of proteins with an average hydrophilicity index >1 and at least 6% Gly. Since its hydrophilicity index is 1.23 and its Gly content is 8.1%, AcLEA fits the definition of a hydrophilin (Garay-Arroyo et al., 2000).

Protein Expression and Purification

Two distinctive bands putatively corresponding to recombinant AcLEA were detected by SDS-PAGE: one at 21.4 kDa, consistent with the molecular weight expected for the His-tagged recombinant protein, and a second at 16.0 kDa (Figure 2). The identities of these two bands were confirmed by LC-MS/MS and bioinformatics analysis using the A. hypochondriacus database (Supplementary Figure S3). Sequences of the matched peptides, as well as MASCOT scores, are shown in Table 1. The data confirm that both the 21.4 and 16.0 kDa bands correspond to AcLEA.
Nevertheless, peptides from the N-terminal region were not detected in the 16.0 kDa product, indicating that this shorter protein is a truncated fragment lacking the N-terminal region. (Table 1 footnotes: a, bands as in Figure 2A and Supplementary Figure S3; b, best homology from BLAST and MUSCLE/Clustal analysis, Figure 1.)

The 21.4 kDa His-rAcLEA was retained on the Ni²⁺ column and eluted continuously during successive washes at low imidazole concentration (50 mM), but a high imidazole concentration (300 mM) was required to completely recover the rAcLEA (Figure 2A and Supplementary Figure S4). The 15.4 kDa rAcLEA was not retained by the Ni²⁺ column, confirming that this protein is a truncated fragment lacking the N-terminal His-tag, which was confirmed by MS/MS analysis (Table 1 and Supplementary Figure S3). After buffer exchange by dialysis, the His-tag was removed by PSP protease cleavage and rAcLEA purification was carried out again using the Ni²⁺-NTA resin (Figure 2B). Retention of the cleaved rAcLEA on this stationary phase can be explained by the residues added to the N-terminus after proteolytic cleavage, which include Gly-Pro-His; since AcLEA possesses a His at position 4 (Met-Ala-Ser-His), this combination of two histidine residues at relative positions 1-4 seems to be responsible for rAcLEA binding to the Ni²⁺-NTA resin. For spectroscopic analysis it is desirable to have a protein purity greater than 98% (Acton et al., 2005). To ensure this experimental condition, a final chromatographic purification step based on molecular exclusion was necessary. rAcLEA purified under both native and denaturing conditions was eluted from a Sephacryl S-100 column with 10 mM sodium phosphate buffer at pH 7.0 as the mobile phase; no difference in retention time was detected between them. The typical chromatographic profile shows only one well-defined peak, and rAcLEA showed high purity (Supplementary Figures S5A,B).

Nuclear Magnetic Resonance Spectroscopy

rAcLEA obtained under native conditions was used to evaluate the structural conformation of the recombinant protein by proton nuclear magnetic resonance. A one-dimensional ¹H-NMR spectrum provides a general overview of protein structure because chemical shift values are strongly related to the presence of different elements of secondary structure (Wishart et al., 1991; Mielke and Krishnan, 2009). In particular, HN amide protons are spread from 6 to 11 ppm in proteins with a well-defined three-dimensional fold and a high content of α-helix and β-strand. In contrast, the HN resonances of unfolded proteins with a random coil conformation collapse into a narrow region around 7-8 ppm (Singh et al., 2005). Figure 3 shows the ¹H-NMR spectrum of native rAcLEA; as can be observed, the amide and aromatic protons are distributed between 6.8 and 8.6 ppm, suggesting a random conformation. Moreover, the Hα resonances around 4.1 ppm also have a compact distribution, which is consistent with a random coil, as is the absence of splitting due to coupling in the aliphatic signals in the 0.8-2.0 ppm range. These spectroscopic patterns indicate that the methyl and methylene groups of the aliphatic amino acid side chains rotate freely, without steric hindrance, suggesting that rAcLEA, under the experimental conditions tested, lacks secondary and tertiary structure. In fact, rAcLEA shows the typical NMR profile of an IDP, previously observed in a LEA protein of T. aestivum (Sasaki et al., 2014) and a dehydrin of A. thaliana (Agoston et al., 2011).
FIGURE 3 | ¹H-NMR spectrum of His-AcLEA purified under native conditions. The narrow signal distribution in the amide region, between 6.5 and 8.5 ppm, strongly suggests the lack of a well-defined three-dimensional structure, distinctive of intrinsically disordered proteins.

Circular Dichroism Spectroscopy

The amino acid composition of AcLEA is rich in α-helix promoters such as Ala (19.0%), Met (5.2%), Glu (9.8%), Gln (9.2%), Thr (9.2%), and Lys (13.8%); nevertheless, the Gly content is high (8.1%), and this amino acid does not have a high propensity for secondary structure formation (Serrano et al., 1992; Creighton, 1993). As observed for other LEA proteins, secondary structure prediction indicates the formation of extensive helical segments, reaching up to 80% α-helix content. Interestingly, the NMR data (Figure 3) showed that, under the experimental conditions tested, rAcLEA has the spectral profile of an IDP. Therefore, in order to further explore the conformational properties of rAcLEA, CD spectra were recorded in the far-UV region. rAcLEA was dissolved in 10 mM phosphate buffer, pH 8.0, at different NaCl or sorbitol concentrations (Furuki et al., 2011; Wu et al., 2014; Warner et al., 2016). As shown in Figure 4A, the AcLEA spectra were not modified by the presence of NaCl or sorbitol. All these CD spectra show a negative signal near 200 nm and weak bands in the 210-220 nm region, suggesting a low secondary structure content. In agreement, deconvolution of the spectra using the CDNN program (Bohm et al., 1992) indicates a limited content of secondary structure (Supplementary Table S1). Because temperature can induce conformational changes (Soulages et al., 2002), ellipticity curves as a function of temperature were followed at different wavelengths (210, 222, and 230 nm). For all samples, at all the wavelengths tested, the ellipticity signal barely changed with temperature (Figure 4B). In agreement, the spectra obtained at 20 °C before and after the heating cycle, as well as that obtained at 75 °C, were very similar (Supplementary Figure S6). The lack of a temperature-induced transition strongly suggests that, if secondary structure segments are formed, they are fluctuating and do not participate in a compact core structure. It is well established that TFE can induce α-helix folding in peptides (Buck, 1998; Boswell et al., 2014), as well as in unstructured proteins with a predisposition to form secondary structure, such as LEA proteins (Shih et al., 2004; Rivera-Najera et al., 2014). Therefore, the effect of TFE on the rAcLEA conformation was evaluated. Far-UV CD spectra clearly show the tendency of rAcLEA to adopt helical structure as the TFE concentration increases (Figure 4C). At TFE concentrations higher than 25%, the CD spectra of rAcLEA show the distinctive minima at 208 and 222 nm characteristic of α-helix structures (Muller et al., 2008). As the TFE concentration increased up to 66%, a gain of helical structure of up to 70.7% and a decrease in all other types of secondary structure were observed (Figure 5 and Supplementary Table S1), this result being quantitatively confirmed using the CDNN program (Supplementary Table S1). In order to determine whether this increase in helical content was accompanied by the formation of a structured core, the effect of temperature on rAcLEA dissolved in 50% TFE was assayed. The ellipticity signal at 208 and 222 nm was lost in a non-cooperative way (Figure 4D), and the changes in CD signal were fully reversible at 25 and 50% TFE (Figure 4E).
This strongly suggests that the helical segments induced by the addition of TFE are not arranged in a well-folded tertiary structure. To further explore the formation of tertiary structure, the CD spectra of rAcLEA in the aromatic region were also determined. In the absence of TFE, rAcLEA showed a weak signal in the region corresponding to Tyr and Phe residues, and the intensity of the 270 nm band decreased further in the presence of TFE (Figure 4F), thus confirming the absence of TFE-induced tertiary structure formation.

Biological Properties of AcLEA In vivo Using E. coli as a Model

It has been demonstrated that the E. coli expression system is a simple, convenient, and effective model to determine the function of recombinant proteins (Liu and Zheng, 2005). We therefore used transformed E. coli DE3 cells to evaluate their tolerance to diverse types of abiotic stress conditions (desiccation, NaCl, H₂O₂, sorbitol, and PEG). Figure 6A shows the growth kinetics of E. coli cells transformed with the empty plasmid (control) and with the pET28mod-AcLEA plasmid. It has been reported that expression of plant LEA (group 1) genes has no effect on the growth kinetics of transformed E. coli or yeast cells (Lan et al., 2005; Campos et al., 2006; Dang et al., 2014), in agreement with our results; however, Warner et al. (2016) reported that induction of AfrLEA-1 (Artemia franciscana LEA group 1) was associated with growth inhibition of Top10F E. coli on account of the basic pI of AfrLEA-1. Curiously, AcLEA also has a basic pI, but we did not observe such growth inhibition.

It has been suggested that hydrophilic and heat-stable proteins may modify the structure of other proteins and bind water directly to attenuate the damage caused by desiccation (Houde et al., 1992). Figure 6B shows a clear difference in the number of viable E. coli cells before and after desiccation stress. Before the drying process, very similar CFU values (expressed in ×10⁶ units) were obtained for control and AcLEA-expressing cells, but after desiccation, although only a very small fraction of cells survived, the CFU count of AcLEA-expressing cells was three times higher than that of control cells. This result suggests that AcLEA expression in E. coli improved its survival capacity after desiccation.

On the other hand, it is well known that the E. coli growth rate is strongly influenced by the salt content of the growth medium (Gowrishankar, 1985). Lan et al. (2005) and Reddy et al. (2012) showed that overexpression of plant LEA group 1 proteins in E. coli provides increased tolerance to the harmful effects of high-salinity environments. Liu and Zheng (2005) indicated that expression of PM2, a LEA group 3 protein from soybean, enhances the salt tolerance of E. coli cells and that the 22-mer repeat region is an important functional region of this protein. As shown in Figure 6C, E. coli growth was inhibited by the addition of NaCl and, contrary to other reports, the expression of AcLEA did not change this behavior. Because AcLEA has been classified as a LEA Group 3 protein, it was expected to participate in the protection of cells against salt stress; however, the differences in amino acid sequence detected in AcLEA (Figure 1A) could be responsible for this observed difference. Low ROS concentrations can act as messengers to regulate biological processes, while high ROS concentrations can have very harmful effects, and dehydration disrupts seed metabolism, leading to high ROS production (Bailly et al., 2008).
Figure 6D shows that, even at a high H₂O₂ concentration, AcLEA conferred a significant tolerance to E. coli cells. On the other hand, Warner et al. (2016) reported that, in general, E. coli strains tolerate low sorbitol concentrations. Our results showed that AcLEA was able to overcome the negative effect of sorbitol on E. coli growth even at a 1 M concentration (Figure 6E). E. coli growth was also tested in the presence of PEG, a compound that decreases the osmotic potential of the cells. As shown in Figure 6F, the accumulation of AcLEA improved cell growth, supporting an osmoprotective function.

In situ Localization of AcLEA

To decipher the subcellular localization of the AcLEA protein, the corresponding coding sequence was fused with green fluorescent protein (AcLEA-GFP) in vectors designed for transient transgene expression in N. benthamiana leaf protoplasts. Confocal microscopy images (Figure 7A) of protoplasts from leaves infiltrated with the expression vector pEarlyGate103-AcLEA clearly show that the AcLEA protein exhibits a cytosolic localization under these conditions. The accumulation of the AcLEA and GFP proteins in infiltrated leaves was confirmed by immunodetection analysis (Figure 7B). It is noted that cytosolic LEA proteins could be involved in stress protection not only within the cytosol itself but also at the level of the membranes delimiting organelles such as mitochondria, chloroplasts, the endoplasmic reticulum, and the nucleus (Candat et al., 2014).

AcLEA Localization in Amaranth Seeds and Plant Tissues

Anti-AcLEA antibodies were sensitive enough to detect the corresponding polypeptides in seed protein extracts from different wild and domesticated amaranth species. Among all species analyzed, no differences in abundance were observed in seeds (Figures 8A,B and Supplementary Figure S7). This could indicate that AcLEA plays an important function, most likely during the seed drying process. To identify all sequences related to LEA proteins, we carried out a search in the Phytozome database (https://phytozome.jgi.doe.gov/pz/portal.html). Sixty matches were retrieved, but only one of those sequences (AHYPO_005092) was identical to AcLEA (Supplementary Figure S8), which correlates with the Western blot analysis, where only one reactive band was observed (Figure 8B). The abundance of AcLEA was also tested in leaves, stems, and roots of wild and domesticated amaranth species. Under normal watering conditions, AcLEA was not detected (Figures 8C,D). Very interestingly, when plants were subjected to salinity stress, we observed the accumulation of AcLEA (Figures 8E,F). As shown in Figure 8F, AcLEA accumulation was observed in A. hypochondriacus leaves at the expected size (19 kDa, Supplementary Figure S7), but two more bands around 25 and 30 kDa were also observed. In leaves of the wild species, the 19 kDa band was barely observed, but in stems a strong band was observed in the wild species A. hybridus and in the domesticated A. cruentus and A. hypochondriacus. Meanwhile, in roots the 19 kDa band was detected in all species, but at much lower accumulation. These results show that AcLEA is conserved in seeds among amaranth species, that AcLEA plays an important function in response to plant stress, and that its accumulation is tissue specific.

FIGURE 8 | (A) SDS-PAGE profile of amaranth seed storage proteins: Lane M = molecular weight marker; Lanes 1-4 = hydrophilic proteins from A. hybridus, A. powellii, A. cruentus, and A. hypochondriacus, respectively; Lanes 5-8 = hydrophobic proteins from the same species, respectively. (B) Western blot analysis with anti-AcLEA. (C) SDS-PAGE profile of proteins from amaranth leaves, stems, and roots of plants growing under normal conditions: Lane M = molecular weight marker; Lanes 1-4 = leaf proteins from A. hybridus, A. powellii, A. cruentus, and A. hypochondriacus, respectively; Lanes 5-8 = stem proteins from the same species; Lanes 9-12 = root proteins from the same species. (D) Western blot analysis with anti-AcLEA. (E) SDS-PAGE profile of proteins from amaranth leaves, stems, and roots of plants subjected to salinity stress; lane assignments as in (C). (F) Western blot analysis with anti-AcLEA.

CONCLUSION

We present the isolation, cloning, and structural and functional characterization of the first LEA protein from Amaranthus species (AcLEA). The deduced amino acid sequence of this gene showed that AcLEA belongs to LEA protein group 3, and structural analysis in solution showed that it belongs to the IDPs, lacking a well-defined secondary or tertiary structure but having a strong tendency to adopt a helical conformation. Using E. coli as an in vivo model to evaluate AcLEA function, it was shown that this protein displays a protective effect against desiccation, osmotic, and oxidative stresses. In N. benthamiana leaf protoplasts, AcLEA was observed to localize to the cytosol. Moreover, AcLEA was detected in different tissues from wild and domesticated amaranth species, suggesting an important function of the AcLEA protein as an osmoprotectant during seed desiccation. Interestingly, AcLEA also accumulated in leaves and stems in response to salt stress. These results highlight AcLEA as an important protein for stress protection in amaranth species.
9,571.8
2017-04-07T00:00:00.000
[ "Biology", "Environmental Science" ]
Camera Space Particle Filter for the Robust and Precise Indoor Localization of a Wheelchair This paper presents the theoretical development and experimental implementation of a sensing technique for the robust and precise localization of a robotic wheelchair. Estimates of the vehicle's position and orientation are obtained, based on camera observations of visual markers located at discrete positions within the environment. A novel implementation of a particle filter on camera sensor space (Camera-Space Particle Filter) is used to combine visual observations with sensed wheel rotations mapped onto a camera space through an observation function. The camera space particle filter fuses the odometry and vision sensor information within camera space, resulting in a precise update of the wheelchair's pose. Using this approach, an inexpensive implementation on an electric wheelchair is presented. Experimental results within three structured scenarios and the comparative performance of Extended Kalman Filter (EKF) and Camera-Space Particle Filter (CSPF) implementations are discussed. The CSPF was found to be more precise in estimating the pose of the wheelchair than the EKF, since the former does not require the assumption of a linear system affected by zero-mean Gaussian noise. Furthermore, the computational processing times of the two implementations are of the same order of magnitude.

Introduction

Recently, the use of diverse types of sensors and different strategies for information fusion has allowed important developments in key areas of robotics and artificial intelligence. Within these disciplines, a specific area of investigation is mobile robotics, where the sensor-based localization problem is an important research topic. Localization of an autonomous mobile robot is the main concern of a navigation strategy, since it is necessary to know precisely the actual position of the mobile robot in order to apply a control law or execute a desired task. In general, a navigation system requires a set of sensors and a fusion algorithm that integrates the sensor information to reliably estimate the pose of the mobile robot. One of the most commonly used sensors in wheeled mobile robots is the odometer (dead reckoning). Unfortunately, these sensors are subject to accumulated errors introduced by wheel slippage or other uncertainties that may perturb the course of the robot. Therefore, odometric estimations need to be corrected by a complementary type of sensor. Reported works on autonomous robots present approaches where the odometry sensor information is complemented with different types of sensors such as ultrasonic sensors [1-3], LIDAR (Light Detection and Ranging) [4-8], digital cameras [9-12], magnetic field sensors [13], the global positioning system (GPS) [7, 8, 14, 15], and Inertial Measurement Units (IMUs) [7, 15].
Among the different types of sensors there exist advantages and drawbacks depending on the general application of the mobile robots considered. GPS systems are low-cost and relatively easy to implement, but they have low accuracy and their use is not convenient for indoor environments. IMUs are relatively inexpensive, easy to implement, and efficient in outdoor and indoor conditions, but they are very sensitive to vibration-like noise and are not convenient for precise applications. LIDAR sensors have high accuracy and are robust for indoor and outdoor applications, with acceptable performance in variable light conditions; however, LIDAR sensors are expensive and the data processing is complex and time consuming. Camera sensors are inexpensive and easy to implement, and a large set of tools is available for image processing and analysis. Although vision sensors are sensitive to light and weather conditions, their use in indoor-structured environments with controlled light conditions is very reliable.

When several sensors are implemented in a single intelligent system (e.g., a mobile robot), it becomes necessary to implement a strategy to fuse the data from every sensor in order to optimize the information. Combining sensor data in this way is usually called sensor fusion. With respect to the localization problem, the specialized literature reports several fusion strategies [16]. These techniques can be classified into heuristic algorithms (e.g., genetic algorithms and fuzzy logic) [3], optimal algorithms (Kalman Filter and grid-based estimations) [15], and suboptimal algorithms. Real-world problems normally utilize suboptimal Bayesian filtering, such as approximated grid-based estimations, Extended or Unscented Kalman Filters [9, 12, 17-19], and particle filtering methods [1, 20]. Due to their ability to perform real-time processing and their reliability, Kalman-based fusion techniques are implemented in many cases, under the assumption that the noise affecting the system is zero-mean and Gaussian. However, for the case of a robotic wheelchair such an assumption is rather strict and is not always satisfied [21].

Wheelchairs, unlike specialized mobile robots, show many uncertainties related to their inexpensive construction, foldable structure, and the nonholonomic characteristics of the wheels. Hence, nonlinear and non-Gaussian assumptions become important for these low-end vehicles, where pose uncertainty can be a consequence of differing wheel diameters, as well as of gross misalignment, dynamic unbalances, or other problems due to daily use and factory defects. Thus, the filtering strategy is crucial in order to minimize all the uncertainties that are not considered in an ideal mathematical model.

Considering the special case where the mobile robot is a wheelchair intended to be used by a severely disabled person, the literature offers some good examples of fusion between different sensors using nonoptimal algorithms. In [22], the wheelchair is controlled by the user through special devices, and a brain-machine interface control is proposed for semiautonomous driving. In [13], a line-follower-like wheelchair moves autonomously, updating its position through metal artificial markers and RFID tags. In [5, 23], a vision-based autonomous navigation control using an EKF and artificial markers is described. In [1], an autonomous wheelchair is developed using odometry and ultrasonic sensor measurements fused using a PF.
In this work, a vision-based particle filter (PF) fusion algorithm is proposed and compared with a Kalman-based algorithm. Particle filters are able to work under the assumption of nonlinear systems affected by non-Gaussian noise, but they can be computationally intensive. A novel implementation of a PF on sensor space, called the camera space particle filter (CSPF), is used to combine visual observations with sensed wheel rotations mapped onto a camera space through an observation function. A main contribution of this project is the novel strategy of the CSPF implementation. The CSPF fuses the data from odometry and vision sensors in camera space, resulting in a precise update of the wheelchair's pose. The particles used by the CSPF are a set of odometry estimations, each with random initial conditions. Every estimated position is mapped into camera space through an observation function. In this work, the PF is performed in the sensor space with every visual measurement. Using this strategy, the computational demand is reduced considerably, since the filtering is applied to a set of horizontal pixel positions and a single marker observation (point of interest), avoiding the need for exhaustive image processing. The computational processing time of the implementation presented here is shown to be of the same order of magnitude as that of a typical Kalman Filter implementation.

Methods and Materials

In this work, only the nominal kinematic model is required to estimate the position of the wheelchair. Because the kinematic model considers only nonholonomic constraints, real-world disturbances such as slipping and sliding of the wheels with respect to the ground (or errors from any other source) are not taken into account and therefore must be corrected. To update the wheelchair positions, the kinematic model is coupled with a pinhole camera model. Applying the solution proposed here, observations of passive artificial markers at known positions are used to estimate the wheelchair's physical position through an observation function that maps from the planar physical space to camera space. In what follows, the kinematic model and the approach used to set up the observation function are first reviewed, followed by a description of the camera space particle filter algorithm.

Navigation Strategy. In this approach, a metric map is built during a training stage. Here, a person drives the wheelchair through a desired path, where different estimated positions of the wheelchair are recorded. Based on the acquired metric map, the wheelchair moves autonomously, following the instructions recorded in the training stage. In both stages, either when the wheelchair builds the metric map or when it tracks the reference path, the wheelchair estimates its position based on the CSPF proposed here. This strategy is convenient for disabled users, since it is not necessary to visit every place inside the work area, as proposed in other approaches used in mobile robotics, typically for exploration, such as Simultaneous Localization and Mapping (SLAM) [6].

Kinematic Model. The kinematic model used for controlling the wheelchair is a unicycle. The reference point X(x, y, φ) used to identify the position of the wheelchair in the physical space is assumed to be at the middle of the traction wheel axis; see Figure 1. The x and y coordinates are considered with respect to an inertial fixed reference frame X₀-Y₀.
r is the radius of the traction wheels and 2b is the distance between the wheels. The wheelchair's orientation angle is represented by φ = φ(θ), where θ defines the average of the right and left driving wheels' rotation angles, θ_R and θ_L, respectively:

θ = (θ_R + θ_L) / 2    (1)

The position of X(x, y, φ) is obtained by integrating the following kinematic system of equations:

dx/dt = r (dθ/dt) cos φ,  dy/dt = r (dθ/dt) sin φ    (2)

The variable φ can be defined as a function of the differential forward rotations of the two driving wheels:

φ = (r / 2b)(θ_R − θ_L)    (3)

The state of the system is defined by the wheelchair's pose X ≡ [x, y, φ]ᵀ. In general, (2) can be expressed compactly as

dX/dt = f(X, θ_R, θ_L)    (4)

thus, (4) can be solved by odometry integration.

Observation Function. A vision function based on the pinhole camera model was defined using four vision parameters (see Figure 2), where C₁ represents the distance in pixels from the image plane to the optical center along the focal axis, C₂ is the orthogonal distance in mm between the focal axis and point X, C₃ is the distance in mm between the optical center and point X parallel to the focal axis, and C₄ is the fixed angle in radians between the wheels' axis and the focal axis.

The reference frame Xc-Yc is defined as the camera space coordinate system, which has its origin at the principal point of the image plane (Figure 2). Based on Figure 2, (5) is obtained, where xc is negative when the marker projection lies at the left side of the principal point:

xc = −C₁ tan α    (5)

The angle α is formed between the focal axis and the projected axis of the observed marker, β is the angle between the X₀ axis and the projected axis of the observed marker, and φ is the orientation angle of the wheelchair with respect to the global coordinate frame X₀; see Figure 2. These angles are related to C₄ as shown in

α = β − (φ + C₄)    (6)

Substituting (6) in (5) yields

xc = −C₁ tan(β − φ − C₄)    (7)

After some trigonometric manipulation, the following equation is obtained:

xc = −C₁ (tan β − tan(φ + C₄)) / (1 + tan β tan(φ + C₄))    (8)

From Figure 2, the following equation can be verified:

tan β = (y_v − y_f) / (x_v − x_f)    (9)

Thus, substituting (9) into (8) produces

xc = −C₁ [(y_v − y_f) − (x_v − x_f) tan(φ + C₄)] / [(x_v − x_f) + (y_v − y_f) tan(φ + C₄)]    (10)

where (x_v, y_v) and (x_f, y_f) are, respectively, the coordinates of the observed marker and the position of the focus of the camera with respect to the fixed frame of reference (X₀, Y₀). The coordinates (x_f, y_f) are described by

x_f = x + C₃ cos(φ + C₄) − C₂ sin(φ + C₄),  y_f = y + C₃ sin(φ + C₄) + C₂ cos(φ + C₄)    (11)

Using these values in (10) yields the observation function

xc = h(x, y, φ; x_v, y_v)    (12)

where x_v, y_v are the known coordinates of a given visual marker v. This function maps the physical space coordinates into camera space. The variable xc is defined as the projected distance along Xc between the focal axis and the visual marker's centroid in the camera space coordinate system. Thus, (12) defines the observation function h:

xc = h(X)    (13)

This observation function is henceforth used to map the planar coordinates (x, y, φ) of the wheelchair in the physical space to the (xc, yc) camera space coordinate system. It is noteworthy that xc, defined within the image plane, is the only variable used in this study, since the vertical location yc of the marker's centroid on the image plane is set to be constant. Thus, yc is irrelevant for the wheelchair position estimation, as the system (including the cameras) is constrained to travel on a plane. To estimate the values of the vision parameters C₁, C₂, C₃, and C₄, the wheelchair is taken to different positions across the planar physical space where visual markers can be observed. Whenever a visual marker is detected, both the visual marker position in camera space and its position in physical space are associated with the wheelchair position in physical space; this information is saved in a "training database."
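To make the two models above concrete, here is a minimal Python sketch of the odometry propagation (Eqs. (1)-(4)) and the observation function (Eq. (12)), assuming the symbol reconstruction used above (pose (x, y, phi), wheel radius r, half wheelbase b, vision parameters C1-C4). It is an illustration, not the authors' implementation, and all function and parameter names are ours.

```python
import math

# Hedged sketch of the kinematic and observation models, using the
# reconstructed symbols above. Numeric values would come from calibration;
# none are taken from the paper.

def propagate_odometry(x, y, phi, d_theta_R, d_theta_L, r, b):
    """One odometry step of the unicycle model (Eqs. (1)-(4))."""
    d_theta = 0.5 * (d_theta_R + d_theta_L)      # average wheel rotation
    x += r * d_theta * math.cos(phi)
    y += r * d_theta * math.sin(phi)
    phi += r * (d_theta_R - d_theta_L) / (2.0 * b)
    return x, y, phi

def observe(x, y, phi, xv, yv, C1, C2, C3, C4):
    """Observation function h: pose -> horizontal pixel coordinate xc (Eq. (12))."""
    a = phi + C4                                  # absolute focal-axis direction
    xf = x + C3 * math.cos(a) - C2 * math.sin(a)  # camera focus position (Eq. (11))
    yf = y + C3 * math.sin(a) + C2 * math.cos(a)
    beta = math.atan2(yv - yf, xv - xf)           # bearing to the marker (Eq. (9))
    return -C1 * math.tan(beta - a)               # projected pixel offset (Eq. (7))
```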
Based on the information stored in the training database and using (12), a least squares minimization via the Marquardt method is performed to compute the vision parameters.

Camera Space Particle Filter. A particle filter is a statistical, exhaustive-search approach to estimation that often works well for problems that are difficult for a conventional Extended Kalman Filter (i.e., systems that are highly nonlinear or affected by non-Gaussian noise) [24]. The main idea of the PF is to represent a posterior density function by a set of random samples with associated weights and to compute estimates based on these samples and weights [16].

To define the tracking problem, consider the evolution of the state sequence of a target given by the system and measurement equations:

X_k = f(X_{k−1}, v_{k−1}),  z_k = h(X_k, n_k)    (14)

where f is a nonlinear state-transition function, h is a nonlinear function related to the measurement process, and k is the step index. Furthermore, v_{k−1} and n_k are the system and measurement noise vectors, respectively; both are independent and identically distributed (iid). The algorithm proposed in this work is based on the PF approach described by [24], which in turn is based on the Monte Carlo method; this algorithm is summarized as follows.

(1) Assuming that the probability density function (pdf) of the initial state p(X₀) is known, N initial states are randomly generated from p(X₀); these states are denoted by {X⁺_{0,i}, i = 1, ..., N} and are uniformly distributed over an initial range W, that is,

X⁺_{0,i} ∈ U[X₀ − W, X₀ + W]    (15)

where W is tuned experimentally, U[a, b] is a closed set of random numbers uniformly distributed with values from a up to b, and the parameter N (number of particles) is chosen as a trade-off between computational effort and estimation accuracy.

(2) For each state update, the algorithm is as follows.

(a) Perform the propagation step to obtain an a priori state X⁻_{k,i} through

X⁻_{k,i} = f(X⁺_{k−1,i}, v_{k−1})    (16)

(b) A set of particles in camera space is determined using the observation equation: x⁻_{k,i} = h(X⁻_{k,i}).

(c) The weight {w_{k,i}, i = 1, ..., N} of each particle x⁻_{k,i} is conditioned on the measurement z_k through (17). For every particle, the weight w_{k,i} is maximal at the central pixel of the visual marker and decreases following the tail of a Gaussian function as the particle lies farther away from the central pixel of the visual marker position observed in camera space; see Figure 3:

w_{k,i} = exp(−(z_k − x⁻_{k,i})² / (2σ²))    (17)

where σ² in (17) is the covariance of the measurement noise, tuned experimentally.

(d) The weights obtained in the previous step are normalized so that the sum of all relative weights is equal to one.

(e) Next, a resampling step is needed to generate a set of posterior particles x⁺_{k,i} on the basis of the relative weights w_{k,i}, as follows. For i = 1, ..., N the following three steps are performed. (i) A random number u ∼ U[0, 1] is generated. (ii) The weights w_{k,j} are accumulated into a sum, one at a time, until the cumulative sum is greater than u, that is, Σ_j w_{k,j} > u; the new particle x⁺_{k,i} is then set equal to the old particle x⁻_{k,j} and takes the old state value X⁻_{k,j}; the average value of the set {x⁺_{k,i}, i = 1, 2, ..., N} becomes the state estimate X_k. (iii) After the PF update, a new set of random states is generated, so that sample impoverishment is avoided; the new states are uniformly distributed in the window [X_k − W, X_k + W] about X_k and become the X⁺_{k−1,i} of the next iteration.

Each camera-space particle x⁺_{k,i} is related to its state X⁺_{k,i}. Filtering in camera space implies using a PF on a one-dimensional set; this set is formed by the centroid of the observed visual markers (z_k) and all the virtual markers coming from observation equation (12), evaluated at all the different possible positions estimated through the odometer data before the PF correction.
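The following sketch strings steps (a)-(e) together into one CSPF update, reusing propagate_odometry() and observe() from the previous sketch. It is a simplified illustration of the algorithm summarized above, not the code run on the wheelchair; the noise value and all inputs are illustrative tuning choices.

```python
import math
import random

# Hedged sketch of one CSPF update. Particles are (x, y, phi) tuples;
# params bundles the assumed model constants (r, b, C1..C4); marker is
# the known (xv, yv) position of the observed visual marker; z_meas is
# the marker centroid's measured pixel coordinate. sigma is illustrative.

def cspf_update(particles, d_theta_R, d_theta_L, z_meas, marker, params, sigma=5.0):
    r, b, C1, C2, C3, C4 = params
    xv, yv = marker

    # (a) propagation: a priori states from odometry
    prior = [propagate_odometry(x, y, phi, d_theta_R, d_theta_L, r, b)
             for (x, y, phi) in particles]

    # (b) map each state into camera space through the observation function
    cam = [observe(x, y, phi, xv, yv, C1, C2, C3, C4) for (x, y, phi) in prior]

    # (c)-(d) Gaussian weights against the measured marker pixel, normalized
    w = [math.exp(-(z_meas - xc) ** 2 / (2.0 * sigma ** 2)) for xc in cam]
    total = sum(w) or 1e-12
    w = [wi / total for wi in w]

    # (e) resampling by the cumulative-sum method
    posterior = []
    for _ in prior:
        u, acc, j = random.random(), 0.0, 0
        while acc + w[j] < u and j < len(w) - 1:
            acc += w[j]
            j += 1
        posterior.append(prior[j])

    # state estimate: average of the posterior set
    n = len(posterior)
    estimate = tuple(sum(p[i] for p in posterior) / n for i in range(3))
    return posterior, estimate
```

After each update, a fresh set of states uniformly spread in [X_k − W, X_k + W] would replace the posterior set, as described in step (iii), to avoid sample impoverishment.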
Testing Platform Description. A low-cost foldable electric wheelchair, model P9000 XDT from INVACARE, was used for this experiment. This wheelchair is usually controlled through a joystick, which allows displacement and velocity control.

The P9000 XDT wheelchair has two DC brush motors, which were fitted with encoders and connected to a Galil Motion Control board DMC-4040. The encoders used for this implementation were two-channel incremental encoders from Avago Technologies, model HEDS-5600 #B06. As vision sensors, two uEye cameras, 640 × 480 pixels, were installed on the wheelchair. An emergency stop button was added. This equipment is depicted in Figure 4. A computer program was developed using Microsoft Visual C++ 2010 and the open access libraries of OpenCV 2.3, running under the Windows 7 Home Premium operating system on an on-board Toshiba laptop with a Core i5 processor at 2.5 GHz and 4 GB of RAM.

Experiment Description. To test the algorithm described in Section 2.3, a set of experiments was developed. For these experiments, the path to be followed by the wheelchair and its initial position were defined. Along this path, visual markers (concentric circles) were placed at known positions (x_v, y_v), as shown in Figure 5.

After the markers' positions were established and a desired path was chosen, a training stage was performed in order to later track the desired path automatically. The wheelchair is trained by a human guide who drives it along a path through each position where visual markers can be detected by the cameras. Using odometry, the wheelchair position is estimated and mapped into camera space. The differences between the odometric estimation in camera space and the observed marker position in camera space are then used to update the wheelchair's actual position according to the fusion filter applied (e.g., CSPF or EKF). This position (i.e., the state of the system) is saved in a database.

Based on the training information, the wheelchair moves to each of the saved states. At such locations, the vision system scans for a marker related to the current state. Based on the filtering strategy, the acquisition of the marker allows the updating of the wheelchair's pose. After the pose is updated, a control strategy can be performed to arrive at the next position.

Two different experiments were implemented to validate the system. The first test consisted of a "Straight line" segment 7.5 m long (Figure 6); the second experiment was an "L-path" trajectory (Figure 7). In these two experiments, the wheelchair followed a trained trajectory (i.e., a reference path) using both the CSPF and the EKF. Finally, a third experiment to test the CSPF in complex maneuvers was implemented; see Figure 8. This experiment is called the "Office-path," where the wheelchair goes from a desk located in one office to a second desk in a different office, passing through a narrow door while following a reference path.

Experimental Results. The physical position automatically tracked by the wheelchair was measured.

"Straight Line" Experiment. For the "Straight line" experiment, the wheelchair moves through a 7.5 m long segment (Figure 6). During the training stage, the wheelchair was driven following a straight line painted on the floor, allowing the human guide to drive the system precisely.
Results from the "Straight line" experiment are shown in Figure 9. This graph shows four series of data; the first one is the reference path recorded in the training stage, and the rest of the series are the measurements of the automatically tracked positions during the task, performed using the different types of filters. First, in the "No-filter" series, the wheelchair follows the reference path based only on the recorded information and the odometry sensors, without filtering updates (i.e., corrections). This task produced a maximum error at the end of the path equal to 0.3 m. This large error arises because the initial conditions cannot be set accurately enough and because other uncertainties cannot be accounted for in the kinematic model. Thus, it is necessary to have additional information (i.e., camera observations) to detect deviations from the reference path. The task where an EKF fuses odometry and camera information shows deviations of less than 0.05 m in the transversal direction from the reference path. Finally, the task where the CSPF is used as the filter shows a deviation of about 0.03 m in the transversal direction. Figure 9 shows that our implementation of the CSPF yields better performance in terms of transversal deviation than the EKF. It is noteworthy that the CSPF is a filter that minimizes non-Gaussian noise, surely present in a foldable wheelchair system due to wheel differences, unbalances, and so forth. These sources of noise skew the probability density function, so it is harder for the EKF to correct the errors affecting the wheelchair pose.

"L-Path" Experiment. For the "L-path" experiment, the reference path is an L-like path consisting of a straight segment of approximately 8 m connected through a curve to a second, perpendicular segment of 2.5 m; this trajectory is about 11 m long; see Figure 7.

The results for the "L-path" experiment are shown in Figure 10. Here, the series with no filter clearly shows a large deviation from the reference path. In the first segment, the wheelchair deviated about 0.4 m from the reference path in the transversal direction. When approaching the 90-degree turn, the wheelchair did not turn sharply. In this case, the transversal error is compensated by an increase in the longitudinal deviation. In the second segment, although the error in the transversal direction does not seem significant, the wheelchair overreaches the final position by about 0.5 m in the longitudinal direction. During the first segment, the task performed via the EKF appears very similar to the task performed via the CSPF. At the beginning of the second segment, the EKF shows a deviation from the reference path of about 0.25 m. From the point where the curve connects with the second segment, the deviation from the reference path obtained using the CSPF-based tracking is always smaller than that of the task performed with the EKF-based corrections.

"Office-Path" Experiment. Finally, the CSPF was tested on a new trajectory where the wheelchair is tasked to go from one office to another. The wheelchair starts at an office desk and is required to drive automatically to a different desk. In this experiment, the wheelchair needs to perform various complex maneuvers such as moving backward, turning, and moving forward. This trajectory includes passing through a doorway (0.9 m wide), making a left turn followed by a right turn, and finally docking the wheelchair at a second desk. This trip is approximately 10 m long, with a higher complexity level than the previous two experiments.
Due to the backward movement of the wheelchair needed at the beginning of the trajectory, tracking the "Office-path" via the EKF was not considered. Indeed, during backward movements, the EKF requires a different algorithm in which it is necessary to subtract the error covariance matrix instead of adding it [23]. This difference does not apply to the CSPF, since the variation is considered directly from the odometry information that produces the PDF filtered by the CSPF, making it suitable for both forward and backward maneuvers.

Figure 11 shows the results from the "Office-path" experiment, where the wheelchair is able to follow the reference path with no collisions. It is noteworthy that when the experiment was performed without any filter, the wheelchair was not able to exit the office due to repeated impacts with the doorway. This trajectory is not shown.

RMS Analysis. In order to facilitate the performance comparison between the CSPF, the EKF, and no filter, the root mean square (RMS) error from the reference path is proposed. The RMS error is defined as the square root of the mean squared difference between the position reached using a filter technique and the reference path:

RMS_x = sqrt( (1/n) Σᵢ (xᵢ − x_ref,i)² ),  RMS_y = sqrt( (1/n) Σᵢ (yᵢ − y_ref,i)² )

where n is the number of corrections made during a trip, (x_ref,i, y_ref,i) is the physical measurement of the x and y coordinates of the reference path at the moment at which the i-th correction is performed, and (xᵢ, yᵢ) is the physical measurement of the x and y coordinates at the tracked position when the i-th correction is performed using a certain filter technique. Table 1 shows the RMS error analysis for both the x and y coordinates. For the "Straight line" and "L-path," the deviation increases significantly for both the x and y coordinates when no filter is used, compared to the EKF and the CSPF. This deviation is expected because the control is open-loop and there is no feedback pathway to measure deviations from the reference path. Therefore, the system is very sensitive to variations in initial conditions and to uncertainties. In Table 1, the "Office-path" experiment does not show values for either the no-filter or the EKF condition. In the first case, it was not possible to complete the task because the wheelchair repeatedly impacted the doorway. In the second case, a change in the test algorithm would have been necessary in order to perform the task. When both the EKF and CSPF implementations were possible, both filters performed well, with better tracking when the CSPF was used.

In Figure 12 the values of the total error ( sqrt(RMS_x² + RMS_y²) ) are shown. The largest total error is obtained along the "L-path" experiment when the no-filter technique is utilized. The error is 0.272 m, which is about 50% of the width of the wheelchair (i.e., 0.6 m). In Figure 12, the EKF and CSPF can be compared for the "Line" and "L-path" tasks. During the "Line" task, the EKF yields a total error of 0.037 m (6.0% of the wheelchair width), while the CSPF implementation yields 0.016 m (2.8% of the wheelchair width). Along the "L-path" series depicted in Figure 12, due to the increased complexity, the total error also increases, to 0.086 m for the EKF and 0.058 m for the CSPF (14.5% and 9% of the wheelchair width, respectively). These are safe values considering halls with more than 2 m of clearance. Even when passing a doorway, these values can be tolerable, since the difference between the wheelchair width and the door clearance is about 0.3 m (i.e., 50% of the wheelchair width).
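A minimal sketch of the RMS computation defined above follows; the tracked and reference coordinates are hypothetical placeholders standing in for the physical measurements behind Table 1.

```python
import math

# Minimal sketch of the RMS-error analysis described above. All
# coordinates are hypothetical placeholders, not the measured values.

def rms(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def path_errors(tracked, reference):
    """Per-correction coordinate errors between tracked and reference paths."""
    ex = [xt - xr for (xt, _), (xr, _) in zip(tracked, reference)]
    ey = [yt - yr for (_, yt), (_, yr) in zip(tracked, reference)]
    return rms(ex), rms(ey)

tracked   = [(0.00, 0.01), (1.00, 0.03), (2.01, 0.02)]  # filter output (m)
reference = [(0.00, 0.00), (1.00, 0.00), (2.00, 0.00)]  # trained path (m)

rms_x, rms_y = path_errors(tracked, reference)
total = math.hypot(rms_x, rms_y)  # total error, as plotted in Figure 12
print(f"RMS_x = {rms_x:.3f} m, RMS_y = {rms_y:.3f} m, total = {total:.3f} m")
```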
The "Office-path" task shows an error of 0.085 m (14.2% of the wheelchair width). This error still guarantees a good safety margin against collision, considering that the reference path passes close to the middle of the doorway (±0.1 m about the center); see Figure 13.

Processing Time. In this approach, the increase in computational cost between the EKF and CSPF implementations is not very critical. This is a consequence of implementing the particle filter algorithm in sensor space. Furthermore, the observation function implemented in this work maps a three-dimensional space to a one-dimensional space, saving an important amount of computational effort. Figure 14 depicts the processing time of four CSPF implementations with different numbers of particles, alongside the processing time of the EKF implementation. For the CSPF to have a reliable performance, 500 particles were necessary, which increases the processing time by only 10 ms with respect to the EKF implementation. For this application, increasing the processing time by 10 ms per cycle does not compromise the real-time performance. On the other hand, even though the CSPF implementation requires slightly longer processing, an improvement in accuracy is achieved. Furthermore, the CSPF algorithm does not need special considerations for forward or backward displacements, giving a simpler implementation. Finally, the CSPF is limited by neither linearity nor Gaussianity assumptions, as is the case for EKF implementations.

Conclusions

The theoretical development and experimental implementation of a vision-based sensing technique for the robust and precise localization of an automatically guided wheelchair were presented. This system is designed to support individuals with severe disabilities. Such an application requires accurate localization and control.

The developed algorithm consists of an original implementation of particle filtering in camera space (CSPF). This work takes advantage of the intrinsic characteristics of the particle filter (i.e., its ability to deal with non-zero-mean and non-Gaussian uncertainties, where the probability density function is not bound to a Gaussian type). Moreover, the implemented sensor-based method avoids time-consuming state estimations and inherent model errors.

Successful experimental testing showed the feasibility of the proposed approach. The system was capable of following complex trajectories in indoor-structured environments where tight tolerances are required, such as when passing through a doorway or docking at a desk. The proposed algorithm, based on experiments, proved to be robust to the uncertainties of the system.

The system performs a precise position estimation based on odometry and vision information through a novel CSPF implementation that fuses the sensor information. In the fusion process, the inaccurate odometry information (dead reckoning) is taken to camera space through an observation function where, using the Monte Carlo method, a set of most likely positions is produced. Finally, an estimate of the system's position is obtained through the average of the resulting set of positions.

The CSPF is not limited to linear or Gaussian assumptions, unlike the common case of Kalman-based filters. This is an advantage when using the filter on systems subject to biased uncertainties, as in the case of a foldable wheelchair.
The robotic wheelchair system implemented in this work is able to follow a variety of reference paths. Three experiments, called "Straight line," "L-path," and "Office-path," were performed. In order to validate the system, the fusion of the sensor information using the CSPF was compared with no-filter and EKF implementations in two experiments: "Straight line" and "L-path." For both experiments, when no filter was used the performance decreased significantly, rendering the system useless for the application. When the CSPF and EKF were compared, results in both experiments showed that the CSPF had better performance. This may be due to the capability of the CSPF to perform well when the distribution of uncertainties has a skewed PDF. On the other hand, the EKF does not perform as well when the distribution of uncertainty is not Gaussian. After the system was validated against the EKF implementation, it was tested on a significantly more complex trajectory, the "Office-path." For this path the RMS error was slightly larger than for the other cases, due to the added complexity of the trajectory, but the results were still within acceptable and safe values for the users.

As a means of comparison with other studies, [25] describes a study where, using fixed cameras, several filtering techniques are applied to the localization of mobile robots playing soccer. Using an EKF, the localization RMS error was about 0.061 m. On the other hand, using a PF-based algorithm this error decreased to 0.030 m, this being the most precise result reported. The robots utilized in [25] are 3-wheeled robots that move in an area of 3 × 5 m. These robots have, in general, few sources of noise, unlike the wheelchair used in this work. The shortest computational time was obtained using an EKF, with an average processing time of about 1.3 ms. The time used in their PF implementation was 287.3 ms, which is 221 times the EKF processing time. Considering these results, the present work makes an important contribution, since it is able to localize a noisy system precisely, as is the case for a foldable wheelchair, while still maintaining a processing time of the same order of magnitude as an EKF implementation, without linear or Gaussian limitations.

It is noteworthy to reiterate that in the CSPF the process that fuses the odometry information (dead reckoning) with the information from the digital cameras is implemented within sensor space (i.e., camera space). The observation function is one-dimensional, reducing the computational burden and allowing a real-time implementation. The CSPF is a Monte Carlo-based algorithm that is not limited to linear or Gaussian assumptions, unlike common implementations based upon Kalman Filters. This advantage is especially important in the case of a foldable wheelchair, where the system is noisy and uncertainties are difficult to model.
To summarize, the advantages of the developed approach are numerous. The system developed in this work performs a precise and robust position estimation based on the combination of odometry and camera sensor space. An innovative aspect of the technique is avoiding the assumption of a linear model corrupted by Gaussian noise, assumptions that are instead common practice for Kalman-based filters. Our method reduces the computational burden typically associated with particle filter approaches, allowing real-time implementations and providing the capability to follow complex paths. Additionally, new paths to be followed by the wheelchair can easily be set up during a training stage, which is convenient for impaired users.

Figure 2: Description of the vision parameters used in the observations.
Figure 3: Scheme of the PF in camera space.
Figure 5: Artificial visual markers and wheelchair at the initial position.
Figure 10: Measured positions of the reference path and of the wheelchair tracked positions using different filters for the "L-path" experiment.
Figure 11: Measured positions of the wheelchair for the "Office-path" experiment.
Figure 12: Total error in m for the different experiments, by type of filtering technique.
Figure 13: Total deviation compared with the wheelchair width.
Figure 14: Comparison of processing time between the EKF and PFs with different numbers of particles.
Table 1: RMS error in m of positions reached using the different filter techniques against the reference.
7,637
2016-01-01T00:00:00.000
[ "Engineering" ]
Navy Fiber Optic Standards and Specifications Fiber optics is a new, rapidly developing technology that has the potential for improving the survivability and combat effectiveness of the Navy's ships and aircraft. Fiber optics is quickly finding potential application in almost every type of military weapon system, and the number of applications is growing daily. The Navy's requirements for fiber optics cover a wide range of applications, including data transfer, guidance and control, machinery control, damage control, communication, and sensing. This paper will discuss why fiber optics is important to the Navy, how the technology is being used, the status of fiber optic standards, and the mission of the Navy's new Fiber Optics Standardization Office.

INTRODUCTION

Optical fibers have unique characteristics and capabilities that are extremely useful in military applications. First, the data-carrying capacity of the hair-thin fibers is thousands of times greater than that of coaxial cable. This huge capacity is being considered for several of our new weapon systems. Second, optical fibers are essentially immune to electromagnetic interference from lightning and radio or radar transmitters, and they can survive electromagnetic pulses from nuclear explosions. Third, fiber cables can be made very secure. In the military, we spend a lot of time and money ensuring that our classified data links are secure. A standard copper cable can be tapped by wrapping a coil of wire around it. To tap an optical cable, you must cut into the cladding and remove some of the light, causing a power loss that is easy to detect. Fourth, optical fibers are made from either plastic or glass and thus do not conduct electricity. Because there is no electricity, and therefore no danger that a short circuit will cause sparks or high temperatures, we can safely use fiber cables in highly explosive environments such as fuel and munition storage areas. In addition, the nonconductive properties of fiber optic cables isolate the optical transmitters and receivers in the system. The isolation eliminates the need for a common electrical ground, with its attendant ground-loop and line-balancing problems. The isolation also decreases the noise in the electronic part of the system. Fifth, glass fibers are rugged. The hair-thin fibers are being deployed from airplanes and missiles. They are being proof tested at tensile strengths of more than 1.4 gigapascals (200,000 pounds per square inch) and have survived accelerations greater than 2,000 g's.

MILITARY APPLICATIONS

The introduction of fiber optics into military systems is proving to be very cost effective. For example, installing coaxial cables in an aircraft carrier for one type of radar costs $130 per meter. The total installation cost is about $1,300,000. Comparable installation costs for a fiber optic cable would be about $30,000. In undersea fiber optic cables, repeaters can be spaced at 30- to 50-kilometer intervals, rather than the 2- to 3-kilometer intervals required by metallic cables. In land use, the Army's Fiber Optic Transmission System (FOTS) realized an 80% reduction in the number of repeaters and a 60% reduction in the number of cable reels in comparison to what is needed in a conventional system. Military applications of fiber optics continue to increase. One projection is for over 125 applications of the technology in military systems by 1989. The Army and Air Force are conducting a variety of programs.
The goal of a joint effort, the Tactical Generic Cable Replacement (TGCR) program, is to develop an optical modem that will permit the use of optical fibers in a variety of 26-pair cable applications. In a separate program, the Army has demonstrated the size and weight advantages of fiber optics by deploying fiber optic cable from helicopters at speeds of up to 130 miles per hour. The rapid deployment system virtually eliminated reel weight and reduced cable weight to 25 pounds per kilometer. The Navy's requirements for fiber optics cover a wide range of applications, including data transfer, guidance and control, machinery control, damage control, communication, and sensing. In sensor technology, the Navy is conducting research in the use of fiber optic acoustic and magnetic sensors, as well as multiplexed communication aboard AEGIS-class cruisers. A fiber optic data link connecting the AEGIS Computer Center, Program Assurance Facility, and System Control Laboratory is also being considered.

STANDARDS AND SPECIFICATIONS

If the benefits of the unique characteristics of fiber optics are to be realized in military systems, standards and specifications must be written. In many fields, the need for standards has been recognized throughout history. In a recent (March 1985) Smithsonian magazine article titled "A Long, Arduous March Toward Standardization," author Achsah Nesmith presents many examples of man's attempts at standardization--and the difficulties encountered in those attempts. For example, the cubit--a measure widely used in the ancient world--was based on the length of a man's forearm, but the exact measurement varied greatly. Egypt used both a man's cubit, 17.72 inches, and a king's cubit, 20.62 inches. Yet, measurements for the Great Pyramid of Giza and other such structures were remarkable for their accuracy. The Roman mile equaled 1,000 paces, hardly a specific unit of measurement. The English Saxon yard was ostensibly based on a man's girth, but the measure varied so much that Henry I decreed that a yard would equal the length of his arm. Closer to home, in 1789 the Constitution charged Congress with fixing standard weights and measures, and even George Washington urged action. Thirty years later, in 1819, Congress ordered a study, and John Quincy Adams (then Secretary of State) was asked to conduct it. Two years later, Adams produced a book-length report documenting the discrepancies in America's weights and measures. Almost a decade later, nothing had been done, but another study was commissioned. Some progress was made during the next 25 years, and by then industrialization had created even greater needs for standards. The Civil War also created new needs, and in 1863 the Secretary of the Navy established a standard gauge for screw threads and diameters of bolts and nuts used in Navy yards. Examples of other standardization problems abounded in the United States through the following decades, perhaps the most well known being the attempts to standardize rail gauges so the country's railroads could interconnect. Loss of life, unfortunately, was also a result of the lack of standardization--in 1894 in Pennsylvania, 27 boilers exploded simultaneously, killing thousands of people. In 1910, with boiler explosions occurring at a rate of 1,400 a year, the American Society of Mechanical Engineers wrote a comprehensive boiler code that virtually ended explosions.
Of course, what these historical examples tell us is that the need for standards and the difficulties in establishing them are not new. In fiber optics, we are trying to address the same kinds of needs for the same reasons--efficiency, effectiveness, cost reduction, and safety. The development of standards and specifications has a direct bearing on increasing the acceptance and use of fiber optics in the Navy and the other services. Navy standards development must be conducted in coordination with other DOD organizations and with industry to maximize standardization of fiber optic components and systems where standards are appropriate and do not restrict the technology's progress. If the military is to reap the full benefits of using fiber optics, we need a full range of standards and specifications. The situation in connectors is an illustration of the disorder caused by lack of standardization. NATO has a written specification for single-fiber connectors corresponding to the Amphenol 905 SMA style. In current US programs, the connector that the military has qualified for tactical communication systems is made by Hughes in formats from two to eight fibers. The use of the Hughes connector, however, is not universal--Magnavox's AN-GRC-206 tactical communication system uses the FOMC connector made by ITT. For the long-haul FOTS, the connector is made by ITT/STC, an English company. The problems caused by this kind of situation are obvious. A commonly held view is that market forces will contribute to settling the fiber optic standards question ("Fiber Optic Trends: What's Happened to Standards," Photonics Spectra, February 1985). As they have in other technologies, those forces undoubtedly will play an important role. We cannot, however, rely on the market to solve all the problems--the process would take too long. We must move ahead carefully but quickly to increase standardization everywhere that it is feasible and sensible to do so. Standards are not specific formulas for designs. They are instruments for eliminating unnecessary inconsistency and stimulating potential users to take advantage of technology. They must be broad enough to accommodate the changing technology, specific enough to be of use, and they must not be mere revisions of electrical standards. Neither the military nor industry wants to be restricted by rigid standards, but they also do not want the chaos created by lack of standards or the confusion caused by using standards from other technologies as a basis for developing fiber optic standards. Because both groups want to see fiber optic standards development move more quickly, they are frustrated at the slowness of the pace. Each group can see in the other the causes of the slow pace, and each has valid points. If the objective--the development of useful standards--is kept in view, the two groups can work together to reach it.

NAVY FIBER OPTICS STANDARDIZATION OFFICE

Part of the Navy's effort to reach the goal, predicated on increased recognition of fiber optics' advantages, is the establishment of the Navy Fiber Optics Standardization Office. The head of this new office is the Fiber Optic Standards Manager (FOSM), the position I assumed in December 1984. The FOSM is assigned functional responsibility for developing fiber optic standards and related specifications for the Navy.
Specific responsibilities include (1) establishing and maintaining a Navy data base of fiber optic standards and specifications, (2) supporting Navy program and acquisition managers in developing fiber optic standardization documents, (3) establishing and funding technical panels to draft fiber optic standards and specifications, (4) conducting technical reviews of draft fiber optic standards and providing comments to the cognizant Command Standardization Office, (5) providing validation testing, and (6) developing a user feedback system to evaluate and correct deficiencies. To meet these responsibilities, my office will work in coordination with other organizations involved in fiber optic standards development. These organizations include the Defense Materiel Specifications and Standards Office (DMSSO), the Defense Electronics Supply Center (DESC), the Tri-Service Fiber Optic Coordinating Structure, the other services, and industry groups such as the Electronic Industries Association (EIA), Society of Automotive Engineers (SAE), American National Standards Institute (ANSI), Institute of Electrical and Electronic Engineers (IEEE), International Electrotechnical Commission (IEC), and International Telegraph and Telephone Consultative Committee (CCITT). We are reviewing published fiber optic standards and those being developed. The review includes work being done by DOD and other Federal Government agencies and by both American and international industry organizations. The result of this review will be a data base of standards and specifications organized by category (for example, fiber, cable, connector), originating organization, and development status. The data base will be used to determine what is needed to meet emerging requirements. This computerized data base will have versatile search capabilities and be readily accessible to Navy users, to help them develop acquisition documents or specifications that meet their fiber optic requirements. The Fiber Optics Standardization Office will also provide a variety of other services, including tutorials to familiarize program managers and engineers with fiber optics, briefings for military and industry groups on standards and specifications, and workshops for program managers on system design and engineering. As the Navy expands its use of fiber optics, the office will assist acquisition managers in identifying the integrated logistic support requirements for implementing the new technology, and it will help identify the needs for training personnel in the use of the technology and assist training organizations in meeting those needs.

CONCLUSION

The Fiber Optics Standardization Office will play a major role in expediting the Navy's acceptance and use of fiber optics. By stimulating and coordinating the development of Navy fiber optic standards and specifications and by encouraging and enabling Navy acquisition managers to take full advantage of the technology, the Fiber Optics Standardization Office will help bring the benefits of fiber optics to the Navy. The "long, arduous march toward standardization" continues, but we in the Navy intend to shorten that march. The march may be no less arduous, but it will result in a more organized approach to solving the old problems of standards and specifications.
2,888.2
1985-10-01T00:00:00.000
[ "Engineering", "Physics" ]
Validation of reference genes for quantitative real-time PCR during leaf and flower development in Petunia hybrida Background Identification of genes with invariant levels of gene expression is a prerequisite for validating transcriptomic changes accompanying development. Ideally, expression of these genes should be independent of the morphogenetic process or environmental condition tested, as well as of the methods used for RNA purification and analysis. Results In an effort to identify endogenous genes meeting these criteria, nine reference genes (RG) were tested in two Petunia lines (Mitchell and V30). Growth conditions differed between Mitchell and V30, and different methods were used for RNA isolation and analysis. Four different software tools were employed to analyze the data. We merged the four outputs by means of a non-weighted unsupervised rank aggregation method. The genes identified as optimal for transcriptomic analysis were EF1α in Mitchell and CYP in V30, whereas the least suitable gene was GAPDH in both lines. Conclusions The least adequate gene turned out to be GAPDH, indicating that it should be rejected as a reference gene in Petunia. The lack of correspondence between the best-suited genes suggests that assessing reference gene stability is needed when performing normalization of data from transcriptomic analyses of flower and leaf development.

Background

The general aims of transcriptomic analysis are the identification of differentially expressed genes and the measurement of the relative levels of their transcripts. Transcriptomic analysis, like that relying on microarray techniques, reveals an underlying expression dynamic that changes between tissues and over time [1]. Results must then be validated by other means in order to obtain robust data that will support working hypotheses directed at a better understanding of development or environmental responsiveness. Since the advent of quantitative PCR, it has become the method of choice to validate gene expression data. However, data obtained by qPCR can be strongly affected by the properties of the starting material, RNA extraction procedures, and cDNA synthesis. Therefore, relative quantification procedures require comparison of the gene of interest to an internal control, based on a normalization factor derived from one or more genes that can be argued to be equally active in the relevant cell types. This requires the previous identification of such genes, which can then be reliably used to normalize the relative expression of genes of interest. Identification of candidate genes useful for normalization has become a major task, as it has been shown that normalization errors are probably the most common mistake, resulting in significant artefacts that can lead to erroneous conclusions [2]. Several software tools have been developed to compute relative levels of specific transcripts (commonly referred to as 'gene expression', although obviously transcript stability is also an important factor contributing to transcript levels) based on group-wise comparisons between a gene of interest and another endogenous gene [3]. However, identification of genes with stable patterns of gene expression requires pairwise testing of several genes against each other. Among the software programs developed toward this end are geNorm [4], BestKeeper [5], NormFinder [6] and qBasePlus [7]. The programs geNorm and qBasePlus use pairwise comparisons and geometric averaging across a matrix of reference genes.
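As a concrete illustration of the pairwise-variation idea behind geNorm's M value, the sketch below computes M from a matrix of relative quantities. It is a from-scratch approximation of the published measure, not the geNorm or qBasePlus code used in this study, and the RQ values are hypothetical placeholders.

```python
import math

# Hedged sketch of the geNorm stability measure M, computed from relative
# quantities (RQ). Lower M = more stable expression. The RQ matrix below
# is a hypothetical placeholder, not data from the paper.

def genorm_m(rq):
    """rq: dict gene -> list of RQ values over the same samples."""
    genes = list(rq)
    m = {}
    for g in genes:
        variations = []
        for h in genes:
            if h == g:
                continue
            # log2 ratios of gene g to gene h across samples
            ratios = [math.log2(a / b) for a, b in zip(rq[g], rq[h])]
            mean = sum(ratios) / len(ratios)
            sd = math.sqrt(sum((x - mean) ** 2 for x in ratios) / (len(ratios) - 1))
            variations.append(sd)
        m[g] = sum(variations) / len(variations)  # average pairwise variation
    return m

rq = {  # hypothetical RQ values for three candidate genes over four samples
    "EF1a":  [1.00, 0.92, 1.05, 0.97],
    "CYP":   [1.00, 0.85, 1.10, 0.90],
    "GAPDH": [1.00, 0.40, 2.10, 0.70],
}
for gene, m in sorted(genorm_m(rq).items(), key=lambda kv: kv[1]):
    print(f"{gene}: M = {m:.3f}")  # the unstable GAPDH-like profile ranks last
```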
qBasePlus also calculates a coefficient of variation (CV) for each gene as a stability measurement. BestKeeper uses pairwise correlation analysis of each candidate gene against an optimal normalization factor that merges data from all of them. Finally, NormFinder fits the data to a mathematical model, which allows comparison of intra- and inter-group variation and calculation of expression stability. Using the programs described above, researchers have identified genes suitable for use as normalization controls in Arabidopsis [8], rice [9], potato leaves [10], the parasitic plant Orobanche ramosa [11], Brachypodium distachyon [12] and grape [13]. In the Solanaceae, candidate genes for normalization have been determined based on EST abundance [14], and qPCR followed by statistical analysis using the tools described above has been reported [15]. A feature shared amongst these studies, and a large number of additional publications describing human, animal and plant systems, is the identification of genes specific for a certain tissue, developmental stage or environmental condition. This is a logical experimental design, as individual research programs tend to be focused, and the number of appropriate genes can be expected to be inversely related to the number of cell types or conditions under investigation. Recent studies that included different cultivars of soybean [16] underscore how the characteristics of the plant and the types of organs studied must drive the experimental approach to transcriptomic analysis. The garden Petunia (Petunia hybrida) has been extensively used as a model for developmental biology [17,18]. Amongst the inbred Petunia lines used in research, the white-flowered Mitchell [19], also known as W115, is routinely exploited for transformation and scent studies [20][21][22]. The genetics of flower pigmentation has been intensively studied in lines such as V30 [23]. Mitchell and V30 are genetically dissimilar, as demonstrated in mapping studies, and vary in a number of other ways, including growth habit and amenability to propagation in culture. Here we have used multiple developmental stages of flowers and leaves of these two Petunia lines to identify genes that show reliable robustness as candidates for use in normalization of relative transcript abundance. The experiments were carried out in two different laboratories, with different PCR machines and different purification and amplification conditions. We found that the final shortlist of valuable genes differed between lines, suggesting the necessity of performing reference gene stability measurements as part of the experimental design whenever differences in gene expression in Petunia are tested. Petunia lines, developmental stages and selection of genes for normalization Two very different Petunia lines were used for the analyses. Mitchell, also known as W115, is a doubled haploid line obtained from anther culture of an interspecific Petunia hybrid [19]; it is characterized by vigorous growth, exceptional fertility, strong fragrance and white flowers. V30 is an inbred line of modest growth habit and fertility featuring deep purple petals and pollen. From each line we harvested flowers representing four developmental stages, from young flower buds to open flowers shortly before anthesis, and two leaf developmental stages, young and full-sized (Figure 1).
Potentially useful RG were selected based on review of the relevant literature, from which we identified genes previously used for normalization or routinely used as controls for northern blots or RT-PCR. From the original list we developed a short list of nine, including genes encoding Actin-11 (ACT), Cyclophilin-2 (CYP) [10], Elongation factor 1a (EF1a), Ubiquitin (UBQ), Glyceraldehyde-3-phosphate dehydrogenase (GAPDH), GTP-binding protein RAN1 (RAN1), SAND protein (SAND) [8,24,25], Ribosomal protein S13 (RPS13) [6] and β-Tubulin 6 (TUB) [26] (Table 1). The products of these genes are associated with a wide variety of biological functions. Moreover, these genes are described as not being co-regulated, a prerequisite for reliably using one of the algorithms for identifying stably expressed genes (geNorm) [4]. Strategy for data mining and statistical analysis The genes described above were selected to test for stability of transcript levels through leaf and flower development in two Petunia lines, Mitchell and V30. As the aim of the present work was to find whether we could obtain a similar ranking of genes irrespective of the Petunia line, growth conditions or sample processing, we carried out all the data mining procedures separately for each line. Cycle threshold (CT) values were determined and expression stability, i.e., the constancy of transcript levels, was ranked. As a strategy for calculating relative expression quantities (RQ) we applied the qBasePlus software, taking into account for each reaction its specific PCR efficiency. Rescaling of normalized quantities employed the sample with the lowest CT value (see materials and methods and Figure 2). With qBasePlus we measured expression stability (M values) and coefficients of variation (CV values). Relative quantities were transferred to geNorm for computing M stability values. It is worth noting that the procedure for computing M values differs between geNorm and qBasePlus. Finally, we used the combined stability measurements produced by geNorm, NormFinder, BestKeeper and qBasePlus to establish a consensus rank of genes by applying RankAggreg [27]. The input to this statistical package was a matrix of genes rank-ordered according to the different stability measurements previously computed. RankAggreg calculated Spearman footrule distances, and the software reformatted this distance matrix into an ordered list that matched each initial order as closely as possible. This consensus rank list was obtained by means of the Cross-Entropy Monte Carlo algorithm implemented in the software. CT values and variability between organs and developmental stages in Mitchell and V30 Real-time PCR reactions were performed on the six cDNA samples obtained from each Petunia line with the nine primer pairs representing the candidate RG. In order to assess run reliability, non-template controls were added and three technical repetitions were included for each biological replicate. CT values were defined as the number of cycles required for normalized fluorescence to reach a manually set threshold of 20% of total fluorescence. Product melting analysis and/or gel electrophoresis allowed for the discarding of non-specific products. Moreover, we considered only CT technical repetitions differing by less than one cycle. The CT values obtained for all the genes under study differed between the two Petunia lines (Figure 3). The range of values was consistently narrower in Mitchell than in V30.
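To make the rescaling and quality rules above concrete, the following minimal Python sketch illustrates efficiency-corrected relative quantities rescaled to the sample with the lowest CT, together with the one-cycle rule for technical replicates; the CT triplicates and the efficiency value are hypothetical, not data from this study.

```python
import numpy as np

def relative_quantities(ct_values, efficiency=2.0):
    """Efficiency-corrected relative quantities, rescaled so that the
    sample with the lowest CT (highest expression) equals 1:
        RQ = E ** (CT_min - CT)
    efficiency=2.0 is the ideal doubling per cycle; qBasePlus uses a
    gene-specific efficiency estimated for each assay."""
    ct = np.asarray(ct_values, dtype=float)
    return efficiency ** (ct.min() - ct)

def filter_technical_reps(reps, max_spread=1.0):
    """Keep a triplicate only if its CT spread is below one cycle,
    mirroring the quality rule applied to the runs; returns the mean."""
    reps = np.asarray(reps, dtype=float)
    if reps.max() - reps.min() >= max_spread:
        return None  # discard: technical replicates disagree
    return reps.mean()

# Hypothetical CT triplicates for one gene across four samples:
raw = [[18.1, 18.2, 18.3], [20.0, 20.1, 19.9],
       [22.4, 22.5, 22.3], [19.0, 19.2, 21.5]]   # the last one fails QC
ct_means = [filter_technical_reps(r) for r in raw]
kept = [c for c in ct_means if c is not None]
print(relative_quantities(kept, efficiency=1.9))
```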
This narrower range could indicate that gene expression in general is less variable in Mitchell than in V30; however, these data correspond to averages derived from all the samples, and further analysis showed that in fact V30 exhibited more constant levels of the tested transcripts at the level of single organs or developmental stages (see below). For Mitchell samples, UBQ was the most highly expressed gene overall, with a CT of 14.8, while SAND was the least expressed. CT data were checked for normality (Shapiro-Wilk test) and, owing to non-normality, were analysed by the non-parametric Kruskal-Wallis test; since CT values showed unequal distributions according to the organ from which the RNA was extracted, they were further compared using pairwise Wilcoxon tests with Bonferroni's correction and a significance cut-off of 0.05. In Mitchell, the genes RAN1, RPS13 and UBQ showed significant differences in transcript levels between developmental stages (Additional file 1). RAN1 transcript levels differed significantly between leaf A and flowers C and D, RPS13 differed in flower D from the rest of the floral stages analysed, and UBQ transcript levels differed significantly between leaf A and flower D. For V30, the overall CT variability was higher than that seen in Mitchell; in fact, expression of all the genes analysed showed significant differences between one or more sets of organs and/or developmental stages. Expression of the genes GAPDH and TUB differed between leaves A and C, while levels of the other measured transcripts were essentially the same in the two leaf stages. In contrast, during flower development, we could distinguish genes that showed two levels of significantly different CT values (GAPDH and TUB), those that showed three (ACT, CYP, EF1a and RPS13) and others that differed at each developmental stage analysed (RAN1, SAND and UBQ). Stability of gene expression in Mitchell and V30 Data from each of the two chosen Petunia lines were analyzed separately. As a first approach, we treated the data as a unique population and transferred it to NormFinder, BestKeeper, geNorm and qBasePlus according to the flowchart plotted in Figure 2. In a second approach, we subdivided the data into several subpopulations corresponding to unique developmental stages (i.e., flower C or leaf A), and then piped these data into the qBasePlus and geNorm tools. The results of both sets of analyses are presented in Tables 2 and 3 and Additional files 2, 3 and 4. CT values were log-transformed and used as input for the NormFinder tool, which fitted these data into a mathematical model based on six independent groups corresponding to single developmental stages. (Abbreviations used in Tables 2 and 3: PV, pairwise variation; M, classical stability value; stab, NormFinder stability value; CV, variation coefficient; r2, coefficient of determination from the regression to the BestKeeper index; RQ, relative quantities.) Estimates of the stability of gene expression are based on the comparison between inter- and intra-group variability. In the Mitchell line, the gene exhibiting the most stable level of expression was EF1a (stability value of 0.018), and CYP and EF1a represented the best combination (0.017). In V30, NormFinder estimated UBQ (0.053) as the most stably expressed gene, and RAN1 and UBQ (0.069) as the best combination of two genes. CT values and one efficiency value for each primer pair served as input for the BestKeeper package.
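The nonparametric testing procedure described above can be sketched as follows; the CT values and group labels are illustrative, and scipy's rank-sum test is used as a stand-in for the pairwise Wilcoxon comparison, so this is a sketch of the analysis logic rather than the study's actual pipeline.

```python
from scipy import stats

# Hypothetical CT values (three biological replicates per stage) for one
# gene across organs/stages; the real data come from the qPCR runs.
ct = {
    "leaf_A":   [18.2, 18.5, 18.3],
    "flower_C": [19.6, 19.9, 19.7],
    "flower_D": [21.0, 21.3, 21.1],
}

# Normality check per group (Shapiro-Wilk), then Kruskal-Wallis across groups
for name, vals in ct.items():
    print(name, "Shapiro p =", stats.shapiro(vals).pvalue)
H, p_kw = stats.kruskal(*ct.values())
print("Kruskal-Wallis p =", p_kw)

# Pairwise rank-sum tests with a Bonferroni-corrected threshold
groups = list(ct)
pairs = [(a, b) for i, a in enumerate(groups) for b in groups[i + 1:]]
alpha = 0.05 / len(pairs)            # Bonferroni adjustment
for a, b in pairs:
    p = stats.ranksums(ct[a], ct[b]).pvalue
    print(f"{a} vs {b}: p = {p:.4f} ({'sig' if p < alpha else 'ns'})")
```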
The BestKeeper program was intended to establish the best-suited standards out of the nine RG candidates and to merge them into a normalization factor called the BestKeeper index. Because the BestKeeper software is designed to determine a reliable normalization factor but not to compute the goodness of each RG independently, we took as the stability-of-expression value the coefficient of determination of each gene with respect to the BestKeeper index. BestKeeper calculated the highest reliability for CYP in both Mitchell and V30, finding GAPDH to be the least suitable gene in Mitchell and TUB in V30. qBasePlus and geNorm calculate M stability values by slightly different procedures. This parameter is defined as the average pairwise variation in the level of transcripts from one gene with that of all other reference genes in a given group of samples; it is inversely related to expression stability. However, because the inclusion of a gene with highly variable expression can alter the M values of the remaining genes, geNorm sequentially excludes the least stable gene (Figure 4). [Table 2 lists the optimal genes for quantification of individual and mixed organs in each Petunia line.] It is noteworthy that the stability of transcript levels between reproductive and vegetative modules differed in the two lines. In general, M values calculated with qBasePlus were higher in flower stages C and D than in leaves from Mitchell, whereas V30 showed the opposite trend. A remarkable case was GAPDH, with an M value four times higher in Mitchell than in V30 at leaf stage C, whereas it was three times lower in Mitchell compared to V30 at flower stage A (see Table 2). The mean CV value, a measurement of the variation in relative quantities of RNA for a normalized reference gene, showed little difference between lines, with a value of 0.42 in Mitchell and 0.44 in V30 for the data analysed as a whole. Determination of the number of genes for normalization Quantification of gene expression relative to multiple reference genes implies the calculation of a normalization factor (NF) that merges data from several internal genes. The minimal number of its components is estimated by computing the pairwise variation (PV) of two sequential NFs (Vn/n+1) as the standard deviation of the logarithmically transformed NFn/NFn+1 ratios, reflecting the effect of including an additional gene [4]. If the pairwise variation value for n genes is below a cut-off of 0.15, additional genes are considered not to improve normalization. The number of genes required for normalization was determined to be two for both Mitchell and V30, except when either different floral developmental stages or vegetative and reproductive stages were mixed (see Table 2). The PV values showed the same trend as that seen for the stability measurements, i.e., the developmental stage with the lowest average PV was flower stage D, both in Mitchell and V30. In contrast, gene expression in leaves of Mitchell showed more variability, with higher PV values, than that of V30 (Figure 5). Consensus list of similarities between lines The different software programs used to determine gene suitability for normalization of gene expression give slightly different results and statistical stability values for each gene. We arranged the internal genes in five lists according to the rank positions generated by each of the five statistical approaches: M values by geNorm and qBasePlus, the NormFinder stability value, the coefficient of determination with respect to the BestKeeper index, and the CV from qBasePlus.
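A minimal sketch of the two stability measurements just described, the geNorm-style M value and the pairwise variation V(n/n+1), is given below; the relative-quantity matrix is synthetic, and the implementation is a simplified reading of the published algorithms rather than the tools themselves.

```python
import numpy as np

def m_values(rq):
    """geNorm-style M value: for each gene, the average standard deviation
    of the log2 ratios to every other candidate gene (lower = more stable).
    rq: (samples x genes) array of relative quantities."""
    log_rq = np.log2(rq)
    n_genes = rq.shape[1]
    M = np.empty(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_rq[:, j] - log_rq[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        M[j] = np.mean(sds)
    return M

def pairwise_variation(rq, order):
    """V(n/n+1): SD of the log ratios of normalization factors built from
    the n and n+1 most stable genes; below 0.15, adding a gene is deemed
    not to improve normalization."""
    log_rq = np.log2(rq)
    v = []
    for n in range(2, len(order)):
        nf_n  = log_rq[:, order[:n]].mean(axis=1)      # log geometric mean
        nf_n1 = log_rq[:, order[:n + 1]].mean(axis=1)
        v.append(np.std(nf_n - nf_n1, ddof=1))
    return v

rng = np.random.default_rng(0)
rq = 2.0 ** rng.normal(0.0, 0.3, size=(12, 5))   # hypothetical RQ matrix
M = m_values(rq)
order = np.argsort(M)                             # most stable first
print("M values:", np.round(M, 3))
print("V(n/n+1):", np.round(pairwise_variation(rq, order), 3))
```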
The five lists described above were used to create an aggregate order, with the aim of obtaining an optimal list of genes for each Petunia line. The results of the merged data revealed that the most adequate of the genes tested for normalization in Mitchell are EF1a, SAND and RPS13; the three showing the lowest reliability are TUB, ACT and GAPDH (Figure 6A and 6B). For V30, the best candidate genes are CYP, RAN1 and ACT, while the three lowest ranking are EF1a, SAND and GAPDH. Thus, none of the genes found to be highly reliable coincide between the lines. Despite that, GAPDH was highly unstable in both lines. Identification of robust normalization genes for Petunia We have attempted to identify a set of genes suitable for normalization of transcript levels in P. hybrida. Since several Petunia lines are used for research, we based this work on two that are extensively used for different purposes. In an effort to reflect the different growth environments typical of distinct lab setups, plants of each line were grown in a set of conditions differing in photoperiod, thermoperiod and growth substrate between lines (see methods). RNA was isolated using different RNA extraction kits, and amplifications were carried out using different reagents and PCR machines. The experimental design aimed to maximize the potential variability in transcript abundance for the putative RG under study. Highly contrasting results would suggest that every laboratory do a pilot experiment to identify genes suitable for use in normalization; similar results between the two systems would point to a set of genes reliable for broad application, minimally for the lines and developmental stages described. Our findings in terms of line-associated variability were not in accordance with the results from a soybean study comparing different cultivars. Results of that study suggested no highly relevant cultivar influence on RG suitability [16]. A similar study has been reported in coffee, for which average M stability values for leaves from different cultivars were lower than those for different organs of a single cultivar. Our result suggests that there are differences in gene expression between the same tissues from different lines as well as between different tissues from the same line. Noise in gene expression patterns Development of petals, like that of many tissues and organs in Petunia, is characterized by a spatial and temporal gradient of cell division that is eventually replaced by cell expansion [28]. However, the experiments described here used whole flower tissues, including full petals along with sepals, stamens and carpels. This imposes a general requirement that any gene emerging as robust must not be strongly differentially regulated either among the various tissues analyzed together or within these tissues at different stages of maturation. One interesting aspect of our findings was the identification of flower stage C as a particularly noisy developmental stage compared to early or fully developed flowers. The transition between cell division and expansion in petals, or other flower tissues, during this developmental stage might explain the increased noise. An alternative, non-exclusive explanation is that the intermediate stages of flower development are generally less tightly defined than the open flower stage. Leaf development similarly consists of cell growth followed by cell expansion [29].
However, an important difference between floral and leaf development is that leaves perform their essential function, e.g., photosynthesis, from a very early stage, such that developing leaf tissue is always a mixture of at least three processes: growth, cell morphogenesis and differentiated cell function. This combination of processes might account for the increased gene expression noise observed. Number of genes required for normalization of gene expression in Petunia Gathering data from several RG into a normalization factor is currently an accepted method for accurate relative quantification of gene expression [30]. Moreover, this method has been statistically and empirically validated [13,31]. Ideally, the number of genes required should be low enough to make experimental procedures affordable, and high enough to merit confidence in the conclusions. The PV value obtained for both Mitchell and V30 was very low. Although the value tended to be higher in Mitchell, the number of genes deemed necessary for normalization was the same for both lines: using the proposed cut-off of 0.15 and comparing single developmental stages, the required number was two for Mitchell and V30. The requirement for only two genes is low compared to the results reported for other phylogenetically related species [10,15,32] and will require significantly less work than the previously suggested minimum of three genes [4]. Data mining strategies and consensus list of genes for normalization The present research aims to identify the control genes best suited for use in gene expression studies in several organs of two Petunia lines. The candidate RG combined classical and recently identified genes. Since each software package can introduce bias, we employed several tools in our analysis. As discussed by other authors, geNorm bases its stability measurement on pairwise comparisons of relative expression quantities across the whole panel of genes in the material of interest, requiring a suite of non-coregulated RG [6]. BestKeeper and NormFinder examine primarily CT values, whereas qBasePlus and geNorm evaluate RQ, a consequence of which is that PCR efficiency dissimilarities can affect the stability measurements [16]. Nevertheless, some of these algorithms are intrinsically biased because they assume that data are normally distributed. For instance, BestKeeper is based on Pearson correlation analysis, which requires normally distributed and variance-homogeneous data. The author described this problem and suggested future versions of the software in which Spearman and Kendall Tau correlations should be used [5]. However, those versions are currently not available. The two lines diverged in the variability of their statistical outputs: V30 showed high variability in terms of raw expression data (CT values) and low variability in terms of expression stability measurements, whereas Mitchell showed the opposite behaviour. Our global analysis merged different statistics, some of which are CT-based and others RQ-based, with the aim of counteracting this biasing influence. Summarizing the results of our entire dataset analysis, geNorm recommended the use of the RAN1 and SAND genes for Mitchell and of RPS13 and UBQ for V30, and discouraged the use of GAPDH for both lines. The non-suitability of GAPDH has been described by several authors [33,34]. Regarding the Solanaceae, its unsuitability has been confirmed in tomato [15], but it was selected as a stable RG in coffee [35].
Due to its sequential exclusion of the least stable gene in the M value calculation algorithm, geNorm M values can differ from those of qBasePlus. qBasePlus corresponded with geNorm in evaluating EF1a as the most reliable gene in line Mitchell, but differed in line V30, recommending ACT as the best candidate. EF1a suitability has been confirmed in potato during biotic and abiotic stress [10], in Atlantic salmon [36] and in several developmental stages of Xenopus laevis [37]. Expression of ACT genes differs depending on the family member: ACT2/7 has been reported as a stably expressed gene, whereas ACT11 was reported as unstable [38,39]. It is worth noting that the ACT gene used in this study corresponds to an ACT11. Conclusions Altogether, there were strong similarities between the different programs, but the coincidence in assigning the best and worst genes was not absolute. The fact that each program identified slightly different genes as best suited for normalization prompted us to merge the data in an unsupervised way, giving identical weight to the output of the different programs. We used the RankAggreg program for this purpose. Our results show that GAPDH was the worst gene to use for normalization in both lines. In contrast, the best-suited genes did not coincide: they were EF1a and SAND in Mitchell, whilst CYP and RAN1 were the genes of choice in V30. In conclusion, we provide a list of genes in discrete developmental stages that show M values below 0.5 (Table 2) [4]. A normalization factor including two genes should be enough for reliable quantification. Nevertheless, we propose a reference gene stability test when performing gene expression studies in Petunia. Plant material Petunia hybrida lines Mitchell and V30 were grown in growth chambers. Mitchell plants were grown on ED73 + Optifer (Patzer) under a 10 h light/14 h dark cycle, with a constant temperature of 22°C (60% humidity). V30 plants were germinated in vermiculite and grown in a vermiculite-perlite-turf-coconut fiber mixture (2:1:2:2). Plants were kept under a long-day photoperiod (16 h light : 8 h dark) at 25°C in the light and 18°C in the dark. Flowers were classified into four developmental stages: flower buds (stage A, 1-1.5 cm), elongated buds (stage B, 2.5-3 cm), pre-anthesis (stage C, 3.5-4.5 cm) and fully opened flowers shortly before anthesis (stage D), according to Cnudde et al. [40]. Leaves were harvested at two different stages: stage A corresponded to young, small leaves and stage C to fully expanded ones. Three independent samples of each of the developmental stages of flowers and leaves were taken. RNA isolation and cDNA synthesis Mitchell material Total RNA was isolated from 100 mg of homogenized plant material using an RNeasy Mini Kit (Qiagen, Hilden, Germany). Putative genomic DNA contamination was eliminated by treatment with recombinant DNase I (Qiagen) as recommended by the vendor. RNA concentration and purity were estimated from the ratio of absorbance readings at 260 and 280 nm, and RNA integrity was tested by gel electrophoresis. cDNA synthesis was performed using M-MLV reverse transcriptase (Promega, Mannheim, Germany) starting with 1 μg of total RNA in a volume of 20 μL with an oligo(dT)19 primer at 42°C for 50 min. V30 material Samples were homogenized in liquid nitrogen with a mortar and pestle. Total RNA was isolated using the NucleoSpin® RNA Plant kit (Macherey-Nagel, Düren, Germany) according to the manufacturer's protocol.
This RNA isolation kit includes a DNase I treatment: the enzyme is added to the column once the RNA is bound to the spin column. RNA was quantified by photometry at 260 nm and quality-controlled on denaturing agarose gels. Total RNA (0.8 μg) was transcribed using SuperScript® III (Invitrogen Corp., Carlsbad, CA) and an oligo(dT)20 primer, employing 10 μL of 2× RT reaction mix, 2 μL of RT enzyme mix and 8 μL of RNA. Reverse transcription was performed on a GeneAmp Perkin-Elmer 9700 thermocycler (Perkin Elmer, Norwalk, CT, USA) using the following programme: 10 min at 25°C, 30 min at 50°C and 5 min at 85°C, followed by addition of 1 U of Escherichia coli RNase H and incubation for 2 h at 15°C. Real-time PCR Mitchell Real-time PCR was performed in an Mx 3005P QPCR system (Stratagene, La Jolla, CA) using a SYBR Green based PCR assay (with ROX as the optional reference dye; Power SYBR Green PCR Mastermix, Applied Biosystems, Foster City, CA). A master mix containing enzymes and primers was added individually per well. Each reaction, containing a 15 ng RNA equivalent of cDNA and 1 pM gene-specific primers (Table 3), was subjected to the following protocol: 95°C for 10 min, followed by 50 cycles of 95°C for 30 sec, 60°C for 1 min and 72°C for 30 sec, and a subsequent standard dissociation protocol. As a control for genomic DNA contamination, 15 ng of total non-transcribed RNA was used under the same conditions as described above. All assays were performed with three technical replicates as well as three biological replicates. V30 Reactions were carried out with SYBR Premix Ex Taq® (TaKaRa Biotechnology, Dalian, Jiangsu, China) in a Rotor-Gene 2000 thermocycler (Corbett Research, Sydney, Australia) and analysed with Rotor-Gene analysis software v. 6.0 as described before [41], with the following modifications. The reaction profile was 40 cycles of 95°C for 30 s, 55°C or 60°C for 20 s, 72°C for 15 s, and 80°C for 15 s, followed by melting at 50-95°C. Each reaction contained 2 μL RNA equivalent of cDNA, 7.5 μL SYBR Premix Ex Taq 2×, 0.36 μL of each primer at 10 μM and 4.78 μL of distilled water. The annealing temperature was 55°C (TUB, CYP, ACT, EF1a, GAPDH, and SAND) or 60°C (RPS13, UBQ, RAN1), according to previous optimisation. In order to reduce pipetting variability, we prepared reaction batches containing the primer pairs, and templates were added at the end. We performed three technical replicates for each reaction and non-template controls, as well as three biological replicates.
Mechanical Deformation Induced Continuously Variable Emission for Radiative Cooling Passive radiative cooling, which draws heat from objects to cold outer space through the atmospheric transparency window (8 µm - 13 µm), is significant for reducing the energy consumption of buildings. Daytime and nighttime radiative cooling have been extensively investigated in the past. However, radiative cooling that can continuously regulate its cooling temperature, like a valve, according to human need is rarely reported. In this study, we present the concept of a reconfigurable photonic structure for adaptive radiative cooling achieved by continuously varying the emission spectra in the atmospheric window region. This is realized by the deformation of a one-dimensional PDMS grating and a nanoparticle-embedded PDMS thin film when subjected to mechanical strain. The proposed structure reaches different stagnation temperatures under different strains. A dynamic exchange between two different strains results in the fluctuation of the photonic structure's temperature around a set temperature. Introduction The growing demand for thermal comfort boosts the consumption of various energy sources for cooling and heating and exerts enormous stress on electricity systems all over the world. This also drives up carbon dioxide emissions and contributes to the problem of global warming. Nearly 20% of total electricity is used by air conditioners or electric fans to regulate the temperature of buildings to a comfortable level 1 . However, the peak wavelength (~9.7 µm) of blackbody radiation for objects on Earth (~300 K) coincides with the highly transparent atmospheric window (8-13 µm), which scarcely absorbs infrared thermal radiation. Therefore, terrestrial objects can naturally radiate thermal energy to outer space (~3 K) through the atmospheric window and hence lower their temperature, which is called passive radiative cooling 2 . Effective nighttime radiative cooling has been extensively studied for organic and inorganic materials with high infrared emissivity within the atmospheric window [3][4][5] . However, daytime radiative cooling is highly demanded and challenging, since the solar irradiance (ASTM G-173, ~1000 W/m 2 ) is much higher than the potential radiative cooling power (~100 W/m 2 ). If an object absorbs even a few percent of the solar irradiance, this will counteract the cooling power and ultimately heat the object. To achieve daytime radiative cooling, a spectrally selective surface that effectively reflects solar irradiance (0.3 µm - 2.5 µm) and simultaneously emits strongly within the infrared region (8 µm - 13 µm) is a promising device. Consequently, several metamaterials that successfully achieve daytime radiative cooling with an equilibrium temperature below ambient have been experimentally investigated, such as a silica-polymer hybrid metamaterial 6 , hierarchically porous paint-like materials 7 , and wood-based structural materials 8 . Other materials, like nanophotonic structures 9,10 , infrared-transparent aerogel 11 , and polymer nanofibers 12 , also provide alternatives for daytime radiative cooling. These materials pave the way for applications of radiative cooling to energy-saving buildings, energy harvesting, and temperature regulation without energy consumption, achieving sustainable cooling throughout the day.
Although static radiative cooling systems can effectively save energy in summer, the cooling functionality increases the energy consumption for heating in winter. To overcome this difficulty, a conceptual design of self-adaptive radiative cooling was developed based on the phase-change material vanadium dioxide (VO 2 ), which can adaptively turn radiative cooling "ON" and "OFF" according to the ambient temperature 2,13 . Moreover, the phase-change temperature of VO 2 co-doped with W and Sr can be adjusted around room temperature by changing the W content 14 . Although these temperature-driven systems can automatically adjust the radiative cooling with ambient temperature, their performance and applications depend strongly on the specific phase-change temperature set by the specific W content. Considering the demanding manufacturing process and the single, fixed phase-change temperature for a given usage scenario, there are still limitations on large-scale fabrication and complex practical applications. Here, we conceptually propose a reconfigurable nanophotonic structure for mechanical deformation induced radiative cooling, based on continuously variable emission in the atmospheric window, which attains diverse desired stagnation temperatures through continuous deformation adjustment according to the ambient temperature. Results The reconfigurable structure consists of a PDMS layer embedded with multiple species of nanoparticles on top of a 1-D PDMS grating coated with a silver thin film (Fig. 1A). The emissivity spectra of this structure in the atmospheric window are continuously tunable through mechanical deformation of the top PDMS thin film and of the PDMS grating period, so that the structure stabilizes at a certain temperature when subjected to a given mechanical strain (Fig. 1B). We theoretically show that the emissivity of the proposed system under different strains is angular-independent, which is important in real applications. Theoretical analysis also shows that this system can maintain itself at a set temperature through mechanical deformation, which could potentially be applied to thermal regulation in different settings, such as outdoor vehicles, buildings, and greenhouses. Figure 1C introduces the concept of mechanical deformation induced radiative cooling. The basic principle of continuous temperature adjustment is that the thickness of the top nanoparticle-embedded PDMS layer and the period of the Ag-coated PDMS gratings change with mechanical deformation, which induces a corresponding change in the emissivity of the structure within the atmospheric window (8-13 µm). This structure, like a valve, can continuously regulate its opening when subjected to different strains. The emissivity in the atmospheric window is a function of the strain: the higher the strain, the lower the emissivity. Furthermore, different strains correspond to different stagnation temperatures; that is, a small strain yields a stagnation temperature below the ambient temperature, while a large strain yields a stagnation temperature above ambient (inset of Fig. 1C).
To realize such functionality, we employ an elastomer, PDMS, as the valve's component to form reconfigurable metamaterials, and propose a nanophotonic structure with a PDMS layer embedded with three kinds of nanoparticles: SiC, Si 3 N 4 , and BN. A 1-D PDMS grating layer coated with an Ag thin film adheres to the top PDMS layer. When subjected to a mechanical deformation of ∆x, the PDMS stretches, and the grating period (Λ) and filling ratio (φ) change. We assume that the Ag grating strips (width w) do not undergo any deformation, as Ag has a much higher Young's modulus (69 GPa) than PDMS (about 0.5 MPa). Therefore, the grating period of the stretched structure is Λ + ∆x and the new filling ratio is w/(Λ + ∆x). The thickness of the top PDMS layer, h 1 , also decreases, to (h 1 - ∆h), as shown in Fig. 1B. PDMS strongly absorbs infrared light when its thickness is above 1 µm, since its extinction coefficient (κ) has absorption peaks from 7 µm to 13 µm 15 . If we increase h 1 above 10 µm, the emissivity rises to 0.9 but the spectral selectivity is lost, so we keep the thickness around 1 µm and introduce the three kinds of nanoparticles (SiC, Si 3 N 4 , and BN) to increase the emissivity only within the atmospheric window. These three nanoparticles have separate extinction coefficient peaks from 7 µm to 13 µm (SiC: 12.8 µm 16 ; Si 3 N 4 : 8.5 µm and 12.5 µm 17 ; BN: 7.09 µm and 12.45 µm 17 ). This increases the emissivity in the atmospheric window but not over the rest of the wavelength range. The PDMS grating strips serve as a transition layer between the top PDMS layer and the bottom Ag layer. Even if the strain of the structure increases to the PDMS limit (120%), the Ag layer remains undeformed. Since Ag is highly reflective from 0.37 µm to 20 µm 18 , the Ag grating layer can be regarded as opaque to both infrared and visible light, considering the thickness used here, h 3 = 400 nm, and the small period, Λ = 40 nm, compared to the wavelength range considered (0.37 µm to 20 µm); that is, the Ag grating layer serves as a thin film that reflects all the incident light, even under 120% strain. Since the PDMS layer is transparent over the solar wavelength region, having a negligible extinction coefficient there 19 , the proposed structure is highly reflective in the solar region.
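The deformation model described above can be summarized in a short sketch. The volume-conserving rule for the film thickness is our assumption for illustration only (the text states merely that h 1 decreases by ∆h), and the numerical values follow the optimized design parameters quoted later in the text.

```python
# Minimal sketch of the geometry update under uniaxial strain.
# Assumption (not stated in the paper): the film thins according to
# volume conservation, h1' = h1 / (1 + strain); the Ag strip width w
# is fixed, as argued from the Young's modulus contrast.

def stretched_geometry(strain, Lam=40e-9, w=24e-9, h1=1100e-9):
    """Return period, filling ratio and top-film thickness at a given strain.

    strain: engineering strain, e.g. 0.6 for 60%
    Lam:    unstretched grating period (m); 40 nm in the optimized design
    w:      Ag strip width (m); here w = phi * Lam with phi = 0.6
    h1:     unstretched top PDMS film thickness (m)
    """
    period = Lam * (1.0 + strain)      # Lam + dx with dx = strain * Lam
    phi = w / period                   # filling ratio decreases on stretching
    h1_new = h1 / (1.0 + strain)       # assumed volume-conserving thinning
    return period, phi, h1_new

for s in (0.0, 0.2, 0.6, 1.2):
    p, phi, h = stretched_geometry(s)
    print(f"strain {s:4.0%}: period {p*1e9:5.1f} nm, phi {phi:.3f}, h1 {h*1e9:6.1f} nm")
```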
The hemispherical emissivity of the reconfigurable nanophotonic structure can be expressed, following ref. 20, as an integral over angular frequency and in-plane wave vector, where c is the speed of light in vacuum, ω is the angular frequency and k ρ is the magnitude of the in-plane wave vector. R h and T (µ) h are the polarization-dependent effective reflection and transmission coefficients, which can be calculated using the recursive relations of the Fresnel coefficients of each interface 21 . The dielectric function is related to the real (n) and imaginary (κ) parts of the refractive index as √ε = n + jκ. The dielectric functions of the materials (PDMS, SiC, Si 3 N 4 , BN and Ag) utilized in this work are taken from the literature 15,19,[22][23][24][25] . The Bruggeman effective medium theory is employed to predict the dielectric function of the nanoparticle-embedded PDMS thin-film composite. Here, the diameter of the three nanoparticle species is fixed at 80 nm, which is much smaller than the shortest wavelength of interest (400 nm) and than the thickness of the PDMS layer. Besides, the total volume fraction of the nanoparticles is kept below 33% (the maximum volume fraction limit of the Bruggeman effective medium approximation) 26 . The effective dielectric function ε eff of the multispecies nanoparticle composite is therefore obtained from

(1 − Σ i η i )(ε − ε eff )/(ε + 2ε eff ) + Σ i η i (ε i − ε eff )/(ε i + 2ε eff ) = 0

where η i is the volume fraction of each nanoparticle species, ε i stands for the dielectric function of the corresponding nanoparticle, and ε is the dielectric function of the matrix. As our design involves a 1-D grating structure of PDMS, a second-order approximation of effective medium theory was used to obtain the effective dielectric properties 27 , where ε A and ε B are the dielectric functions of the two media (PDMS and vacuum) in the surface gratings. The zeroth-order effective dielectric functions for the two polarizations are given by 27,28

ε TE,0 = φε A + (1 − φ)ε B ,  ε TM,0 = [φ/ε A + (1 − φ)/ε B ] −1

We choose a grating period Λ = 40 nm, which is much smaller than the shortest wavelength (400 nm). In order to better fit practical application scenarios, two possible deformation scenarios for the transition PDMS grating layer are considered here. In Fig. 2A, scenario I (top structure) shows the ideal case: the width w of the PDMS grating remains unchanged when the period elongates from Λ to Λ + ∆x. In practice, however, the PDMS grating layer must undergo deformation to some extent. Hence, we assume that the bottom width w b remains unchanged, while the top width w t elongates with the same strain as the top PDMS layer. The PDMS grating strips therefore become isosceles trapezoids under mechanical strain, which represents the practical situation; this is scenario II. To illustrate the difference between scenarios I and II, the emissivity spectra of the structure under 60% strain were calculated and are shown in Fig. 2B. The difference between the two scenarios is not negligible in the infrared region, so scenario II is adopted for the following analysis. The spectral emissivity of scenario II is higher than that of scenario I, because the PDMS grating strip in scenario II fills more of the vacuum space than in scenario I under the same mechanical strain, which increases the infrared absorptance over the atmospheric window region. To model the deformation of the PDMS grating layer, we divide the 1-D strips into multiple layers of rectangular gratings with a filling ratio that decreases from top to bottom; here we take 100 layers in the calculations, which is enough to achieve convergence.
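As a sketch of the effective-medium step, the multiphase Bruggeman relation above can be solved numerically as follows; the permittivity values are placeholders, since the actual calculation uses wavelength-dependent literature data for PDMS, SiC, Si 3 N 4 and BN.

```python
from scipy.optimize import root

def bruggeman_eff(eps_matrix, inclusions):
    """Solve the multiphase Bruggeman relation for the effective permittivity.

    inclusions: list of (eps_i, eta_i) pairs; the matrix takes up the
    remaining volume fraction. The placeholder permittivities below are
    illustrative only.
    """
    eta_m = 1.0 - sum(eta for _, eta in inclusions)
    phases = [(eps_matrix, eta_m)] + list(inclusions)

    def residual(x):
        eps = complex(x[0], x[1])
        r = sum(eta * (e - eps) / (e + 2 * eps) for e, eta in phases)
        return [r.real, r.imag]

    # The volume-weighted average is a reasonable starting guess.
    guess = sum(e * eta for e, eta in phases)
    sol = root(residual, [guess.real, guess.imag])
    return complex(sol.x[0], sol.x[1])

# Example at one wavelength with made-up permittivities:
eps_pdms = 2.0 + 0.05j
incl = [(6.5 + 1.0j, 0.03),   # "SiC", 3%
        (4.0 + 0.8j, 0.25),   # "Si3N4", 25%
        (4.5 + 0.6j, 0.04)]   # "BN", 4%
print(bruggeman_eff(eps_pdms, incl))
```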
Fig. 2C shows the strain-dependent reflectivity in the wavelength range of 0.4 µm to 2.5 µm and the emissivity in the wavelength range of 8 µm to 13 µm for scenario II. The R solar increases slightly with strain, while the ε IR drops sharply as the strain increases; for example, the ε IR at 100% strain (0.23) is only 32% of the ε IR in the original state. This is because the thickness of the top PDMS layer decreases, so the incident infrared light travels a shorter path in the top PDMS thin film. We confine the strain to less than 120%, since the PDMS film fractures at around that strain 29 . After optimization of the variables h 1 , h 2 , h 3 , Λ, Φ and the volume fractions of SiC, Si 3 N 4 and BN, we obtain the optimal configuration with h 1 = 1100 nm, h 2 = 100 nm, h 3 = 400 nm, Λ = 40 nm, Φ = 0.6, vf SiC = 3%, vf Si3N4 = 25%, and vf BN = 4%. Figure 3A shows the spectral emissivity of the reconfigurable nanophotonic structure at the two limit states: the original state (0% strain) and 120% strain. Both the original and the stretched structure show high reflectivity in the solar irradiance region, while the original one has relatively high emissivity over the atmospheric window, representing the fully "ON" state of the radiative cooling valve. The stretched structure has an overall emissivity of 0.15 from 8 µm to 13 µm and stands for the fully "OFF" state of the valve. Besides, both the "ON" and "OFF" states have low absorptivity for thermal radiation in the remaining wavelength regions (5 µm - 8 µm and 13 µm - 20 µm), which avoids absorption of heat from the ambient environment. The structure with the three nanoparticle inclusions has a similar R solar but a relatively higher ε IR over the atmospheric window compared with structures containing a single nanoparticle inclusion (Fig. 3B). The high R solar (θ) ensures excellent reflection of sunlight at all angles of incidence (Figs. 3C and 3E, angle-averaged values: 0.92 and 0.95), and the high ε IR (θ) of the fully "ON" state from 0° to 60° (Fig. 3D, angle-averaged emissivity: 0.6324) leads to a high hemispherical ε IR and hence a good radiative cooling feature. In contrast, the low ε IR (θ) of the fully "OFF" state from 0° to 85° (Fig. 3F, angle-averaged emissivity: 0.18) yields a low radiative cooling ability. The states between the fully "ON" and "OFF" states provide different emissivities in the atmospheric window, corresponding to different strains. Discussions The thermal performance of the reconfigurable metamaterial is evaluated by solving the energy balance equation (Figure 4A)

P net (T) = P r (T) − P a (T a ) − P s − P nr (T, T a )

We suppose that the backside of the self-adaptive photonic structure is insulated, so that only the energy transfer between the top surface of the structure, the ambient and outer space is considered. Here, P r is the radiative cooling power of the structure, P nr is the non-radiative power gained from the ambient, P a is the incident thermal radiation power from the ambient absorbed by the structure, P s stands for the incident solar power absorbed by the structure, T a is the temperature of the ambient air, and T is the temperature of the structure. P r is determined by integrating the structure's spectral, directional emissivity against the blackbody radiance, where I BB (T, λ) = 2hc 2 λ −5 [exp(hc/λk B T) − 1] −1 defines the spectral radiance of a blackbody at a given temperature, h is Planck's constant, k B is the Boltzmann constant, and λ is the wavelength.
The temperature-dependent emissivity of the structure is obtained by weighting the spectral, directional emissivity ε λ by cos θ sin θ and integrating over the hemisphere 30 . Here, the emissivity measured at room temperature (298 K) is used in the simulation, since the temperature variations of the structure are assumed to have little effect on the emissivity. θ and φ are the azimuthal and latitudinal angles, respectively. The non-radiative heat transfer between the structure and the ambient air is given by P nr (T, T a ) = h(T a − T), where h is the non-radiative heat transfer coefficient, ranging from 2 to 8 W m −2 K −1 2 . Here h = 8 W m −2 K −1 is set as the natural air convection heat transfer to the structure. The absorbed power of the incident thermal radiation from the atmosphere, P a (T a ), is obtained by integrating the blackbody radiance at T a weighted by the structure's emissivity and by the emissivity of the atmosphere, ε(λ, θ, φ), which is given by 1 − τ(λ, θ, φ); here τ(λ, θ, φ) is the transmittance of the atmosphere obtained from MODTRAN4 31 . The solar irradiation absorbed by the radiative cooler is P s = ∫ ε(λ, θ sun , T cooler ) I AM1.5 (λ) dλ, where I AM1.5 (λ) is the spectral irradiance of solar irradiation at AM 1.5 and ε(λ, θ sun , T cooler ) is the temperature-dependent emissivity of the radiative cooler; the integration is taken from 0.3 µm to 2.5 µm, which covers 97% of the incident solar power. The time-dependent temperature variation of the structure is obtained by solving C dT/dt = −P net (T). Since Ag has a relatively high thermal conductivity (406 W/m K) and the Ag grating strips are only 400 nm thick, their thermal resistance is negligible. The heat capacitance C of the reconfigurable photonic structure comprises the PDMS thin film and the PDMS grating strips, with a total thickness of 1200 nm (h 1 + h 2 ). We present the net cooling power as a function of the structure's temperature without (Fig. 4B) and with (Fig. 4C) the influence of non-radiative heat transfer. Figs. 4B and 4C show that, at any temperature and for all strains, the structure has a larger net cooling power in a closed environment (h = 0 W m −2 K −1 ) than in one open to the ambient (h = 8 W m −2 K −1 ). The net night-time radiative cooling is higher than the day-time one, since the absorbed solar irradiance neutralizes part of the cooling power that the structure radiates to outer space during the day. The net cooling power of the original structure is higher than that of the stretched one (120%) whether open to the ambient or not, and most of the stretched structure's cooling power is negative; that is, it raises the temperature of the structure. Therefore, the original and stretched states of the reconfigurable structure can be regarded as the fully "ON" and "OFF" states. The temperature down to which the net cooling power remains positive is lower in the closed environment than in the open one (night-time, 0% strain: 260 K closed vs. 307 K open; day-time at 0.9 sun, 0% strain: 287 K vs. 313 K); the same holds for the stretched state, because the PE convection shield eliminates the non-radiative heating power from the ambient. Therefore, the closed environment is better for a lower desired temperature, while the open environment is suitable for a higher desired temperature. The stagnation temperature responses of the continuously adaptive cooling structure under various strains, obtained by solving Eq. 9 with the spectra computed under different strains, are presented in Fig. 5A.
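A minimal sketch of the thermal model is given below: it evaluates a toy net cooling power with a gray emissivity confined to the 8-13 µm window and a gray window transmittance, then finds the stagnation temperature by bisection. The gray-body spectra, the window transmittance and the convection default are simplifying assumptions; the paper instead integrates the full calculated spectra with MODTRAN atmospheric data.

```python
import numpy as np

# Physical constants
h_P, c0, k_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance I_BB(T, lam) [W m^-3 sr^-1]."""
    return 2 * h_P * c0**2 / lam**5 / np.expm1(h_P * c0 / (lam * k_B * T))

def net_cooling_power(T, T_amb, eps_ir, tau_win=0.8, h_c=3.0, P_solar_abs=0.0):
    """Toy net cooling power per unit area: gray emissivity eps_ir confined
    to the 8-13 um window, gray window transmittance tau_win, and a linear
    convective exchange h_c. All three are simplifying assumptions."""
    lam = np.linspace(8e-6, 13e-6, 400)
    # pi from the hemispherical integral of cos(theta)sin(theta) over 2*pi sr
    P_r = np.pi * eps_ir * np.trapz(planck(lam, T), lam)
    # Atmospheric back-radiation within the window, emissivity = 1 - tau
    P_a = np.pi * eps_ir * (1.0 - tau_win) * np.trapz(planck(lam, T_amb), lam)
    P_nr = h_c * (T_amb - T)          # non-radiative gain from the ambient
    return P_r - P_a - P_solar_abs - P_nr

def stagnation_temperature(T_amb, eps_ir, **kw):
    """Bisection on the net power: the stagnation point is where P_net = 0."""
    lo, hi = 150.0, 400.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if net_cooling_power(mid, T_amb, eps_ir, **kw) > 0:
            hi = mid   # still cooling -> stagnation lies lower
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(stagnation_temperature(298.0, eps_ir=0.9))   # "ON"-like state
print(stagnation_temperature(298.0, eps_ir=0.15))  # "OFF"-like state
```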
For each strain, both the structure and the ambient are assumed to be at 25°C, and we set h = 0 W m −2 K −1 and I solar = 1 sun. When the strain is below 20%, the net cooling power of the reconfigurable structure is positive, so its temperature decreases as time evolves and eventually reaches a stagnation temperature below ambient after about 50 s (-10% strain → 6.1°C temperature decrease, 0% strain → 3.9°C decrease, 10% strain → 1.24°C decrease). When the strain is above 20%, the system approaches the fully "OFF" state: the negative radiative cooling heats the structure up to an equilibrium temperature above the initial temperature (20% strain → 4.69°C temperature increase, 40% strain → 9.4°C increase, 60% strain → 14.76°C increase). Finally, we simulate the transient temperature variation of the structure as a function of time when it is subjected to dynamic mechanical strains in order to hold a set temperature (Fig. 5B). The system is in an environment with h = 0 W m −2 K −1 , I solar = 1 sun and an ambient temperature of 26°C. The initial temperature of the structure is assumed to be 26°C, which gives humans thermal comfort. We use the spectra of the structure in the original state and at 20% strain in the calculation. The set temperature of the structure in the first 300 s is 26°C. When the structure is in the original state, the radiative cooling feature is fully on, and the structure's temperature drops below 26°C from 0 s to 1 s. The temperature of the structure goes up from 1 s to 2 s, since the structure is stretched by 20%. The transient temperature of the structure keeps changing dynamically between 25.7°C and 26.05°C after 15 s, with an average temperature of 25.8°C. This shows that the system can hold its temperature in a narrow band around the set temperature. When we change the set temperature to 30°C at 300 s, the structure's temperature rises to 30°C and then stays around that point. The average temperature from 315 s to 600 s is 29.6°C. This demonstrates that our proposed structure has a quick adjustment capability.
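The set-point behaviour in Fig. 5B can be imitated with a bang-bang controller that switches between the 0% and 20% strain states; the sketch below integrates C dT/dt = −P net by forward Euler, reusing net_cooling_power() from the previous sketch. The areal heat capacitance, the strain-to-emissivity mapping and the solar gain are assumed values for illustration only.

```python
# Transient temperature under a two-state strain controller: a minimal
# forward-Euler sketch of C dT/dt = -P_net(T). Reuses net_cooling_power()
# from the sketch above; C and the strain -> emissivity mapping are
# assumptions, not values from the paper.

C_AREAL = 2000.0          # J m^-2 K^-1, assumed areal heat capacitance
EPS_BY_STATE = {0.0: 0.9, 0.2: 0.15}   # toy "ON" / 20%-strain emissivities

def simulate(T0, T_amb, T_set, t_end=300.0, dt=0.1):
    T, t = T0, 0.0
    trace = []
    while t < t_end:
        # Bang-bang rule: release (cool) above the set point, stretch below
        strain = 0.0 if T > T_set else 0.2
        P = net_cooling_power(T, T_amb, EPS_BY_STATE[strain],
                              P_solar_abs=30.0)   # assumed solar gain
        T -= P / C_AREAL * dt   # positive net cooling lowers T
        t += dt
        trace.append((t, T, strain))
    return trace

trace = simulate(T0=299.15, T_amb=299.15, T_set=299.15)
print("final T: %.2f K" % trace[-1][1])
```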
Above all, we have presented a conceptual reconfigurable nanophotonic design for mechanically induced radiative cooling that can continuously adjust the radiative cooling when subjected to different mechanical strains. A PDMS thin film and grating strips allow reversible stretching of the structure. Deformation of the PDMS thin film and PDMS gratings leads to a change of the thin-film thickness and of the filling ratio of the PDMS grating strips, and hence the spectral emissivity of the structure over the atmospheric window can be actively and continuously changed. Compared with other self-adaptive radiative cooling schemes with a fixed critical temperature, our design offers various stagnation temperatures under different strains, which gives more options for different engineering applications. Moreover, strains above 20% can turn the radiative cooling into heating, and fluctuational temperature control can be achieved through a dynamic exchange between 0% and 20% strain. This work verifies that elastic materials have the potential to be applied in mechanical deformation induced radiative cooling. As a proof-of-concept demonstration, we use an elastic polymer, PDMS, as the matrix material; other transparent soft polymers, like silica gel, can also serve as alternative matrices. Such designs can potentially be applied in a series of applications, such as energy-saving buildings, textiles, and automobiles, for energy saving and thermal comfort enhancement. Figure 1. (A) The reconfigurable metamaterial consists of a PDMS layer (thickness h 1 ) embedded with three nanoparticles: SiC, Si 3 N 4 , and BN (volume fractions vf SiC/Si3N4/BN ). A 1-D rectangular grating (period Λ, width w, filling ratio φ) of PDMS (h 2 ) coated with a silver (Ag, h 3 ) thin film is at the back of the top PDMS layer. Case I: the structure has a high R solar that reflects most of the solar irradiance, and a high ε IR in the atmospheric window region that radiates heat out to the universe when the structure is released. (B) Case II: R solar remains high, while ε IR is reduced, since the PDMS layer gets elongated and thinner and the period of the 1-D PDMS grating is increased when the grating is stretched, but the width of the Ag layer does not change. (C) Schematic showing the concept of mechanical deformation induced radiative cooling. Figure 2. (A) Two scenarios of the reconfigurable nanophotonic structure under stretching. Top: scenario I (constant w) for ideal stretching or compression due to the mechanical strain. Bottom: scenario II (constant w b ) for the real stretching situation. (B) The spectral emissivity of the reconfigurable metamaterial for the two scenarios under 60% strain. (C) The strain-dependent overall reflectivity and emissivity for scenario II. The overall R solar is calculated from 0.4 µm to 2.5 µm, and ε IR is calculated from 8 µm to 13 µm. Figure 3.
(A) Spectral emissivity (ε = 1 − R) of the reconfigurable nanophotonic structure in the original state and under a strain of 120%, displayed with the normalized ASTM G173 solar spectrum (AM 1.5), the infrared atmospheric transparency window and the normalized blackbody spectrum at 300 K. (B) The overall R solar and ε IR of the proposed structure embedded with only a single species of SiC, Si 3 N 4 , or BN, and with the three mixed nanoparticles. The nanophotonic structure's R solar (θ) (C) and ε IR (θ) (D) across various angles of incidence (AOI) result in high hemispherical R solar and ε IR in the original state ("ON" mode); R solar (θ) (E) under a strain of 120%. The low ε IR (θ) (F) across angles shows the "OFF" mode with a low hemispherical ε IR under 120% strain. Figure 4. (A) Schematic drawing of the thermal characterization setup used in the thermal performance analysis. (B, C) Calculated net cooling power of the reconfigurable nanophotonic structure at different strains as a function of its temperature, at night-time and day-time under 0.5 sun (0.5 × AM 1.5 illumination) and 0.9 sun (0.9 × AM 1.5 illumination), with (B) or without (C) a polyethylene (PE) convection shield. Figure 5. (A) Stagnation temperature of the reconfigurable metamaterial as a function of strain, showing that the structure has cooling or heating abilities under different strains. (B) Transient temperature variations of the structure when subjected to dynamic mechanical strain, showing that it can control its temperature around a set temperature.
Buckling Analysis of Corroded Pipelines under Combined Axial Force and External Pressure Affected by a complex environment, corrosion is a common defect in steel pipelines. Moreover, steel pipelines are subjected to large axial forces during their installation and operation. Corroded deep-sea steel pipelines are prone to local buckling under complex loads. Therefore, in view of this problem, the collapse response of corroded steel pipelines under combined axial force and external pressure is analyzed in detail. First, a formula for evaluating the collapse pressure of corroded steel pipelines under external pressure and axial force is proposed. Then, the factors affecting the collapse pressure of the steel pipeline are studied parametrically using the finite element method. The accuracy of the finite element model is verified by collapse tests on corroded steel pipelines. As shown by the finite element results, the diameter-to-thickness ratio, the initial ovality and the corrosion defect size have significant effects on the buckling response of a steel pipeline. The collapse pressure of the steel pipeline decreases as the axial force increases. Finally, based on the finite element simulation results, the parameter variables in the evaluation formula are obtained. Introduction Oil and gas steel pipelines have been widely used in the production industry [1][2][3][4]. With the discovery of deep-sea oil and gas fields, the demand for deep-sea steel oil and gas pipelines is also greatly increasing. Compared to land, the deep-sea environment is more complex. Therefore, steel pipelines with higher bearing capacities are required [5][6][7][8][9][10][11]. Moreover, corrosion defects are often formed on the surface of the steel pipeline under the perennial erosion of external seawater, which causes local thinning of the pipeline wall. The fracture analysis of defective pipelines under complex loads has been extensively studied, and many important results have been obtained [12,13]. Miller [14] proposed an analytical solution based on the net-section collapse criterion, including ultimate load expressions for various defect types in different structures. After that, Jones and Eshelby [15] developed the ultimate load solution for thin-walled cylinders with partial symmetrical circumferential cracks and full-ring cracks under internal pressure. Kim et al. [16] fitted the ultimate load of cylinders with partially penetrating surface cracks. Shim [17] used the finite element method to analyze the ultimate load of thick-walled pipes with irregular penetrating cracks under combined loads. Staat and Vu [18] carried out a plastic limit analysis of circumferentially cracked tubes and vessels under internal pressure by means of the finite element method and proposed global and local limit load solutions. In the deep-sea environment, external pressure is the main load driving local buckling of corroded steel pipelines [19,20]. Fan et al. [21] studied the instability mechanism of submarine steel pipelines through external pressure tests on full-scale and reduced-scale steel pipelines, considering the effects of initial ovality and pitting corrosion defects. Zhang et al. [22,23] performed a large number of finite element simulations on steel pipelines with initial ovality, initial wall-thickness eccentricity and asymmetric corrosion defects.
Additionally, the instability mode and collapse pressure response of the steel pipeline were discussed in detail. Netto et al. [24][25][26] conducted a large number of external pressure tests on steel pipelines with corrosion defects and studied the influences of various defect shapes, defect sizes and steel pipeline sizes on the collapse pressure. Finally, an empirical formula for predicting the collapse pressure of the steel pipeline was proposed. Deep-sea pipelines usually bear multiple loads at the same time. Under the combined influence of corrosion defects and complex loads, such a steel pipeline may buckle locally or collapse, causing serious economic losses. Therefore, research on the buckling response of corroded steel pipelines under complex loads has received extensive attention. Steel pipelines are usually designed with the influences of pressure and tension in mind [27]. The presence of axial force affects the collapse and buckling propagation of steel pipelines. Heitzer [28] analyzed the plastic collapse of defective pipelines under internal pressure and tension. The study found that a circumferential defect has a great influence on the ultimate axial force of the pipeline. Qiao et al. [29] developed analytical formulas and finite element models, and concluded that an increase in internal pressure can enhance the tensile stiffness of the hose. Bai et al. [30,31] established a finite element model to study the effects of initial ellipticity, residual stress, strain hardening, yield anisotropy, loading path and other parameters on the collapse pressure of steel pipelines under combined external pressure, tension and bending. The response of a steel pipeline under combined loads is greatly affected by the load path. Madhavan [32] conducted experiments and numerical simulations of pipeline collapse under external pressure and axial force, and found that the different loading paths p→T and T→p have almost no effect on the collapsed shell of a tube with low initial ovality. Yu et al. [33] concluded that the loading path has a great influence on the ultimate load of the steel pipeline, and that the p→T loading path is more severe than the T→p loading path. Although scholars have conducted a lot of research on the buckling of corroded steel pipelines, a formula for the buckling pressure of corroded steel pipelines under the combined action of external pressure and axial force, which would be convenient for engineering practice, is rarely reported [34][35][36][37]. Therefore, a finite element model of a steel pipeline with corrosion defects is established, and its accuracy is verified through experiments. Based on finite element analysis, the buckling mechanism of corroded steel pipelines under combined external pressure and axial force is studied in detail, and a collapse pressure evaluation formula for corroded steel pipelines is proposed, providing a theoretical basis for the design and application of deep-sea steel pipelines. Theoretical Development Timoshenko and Gere [37] conducted a theoretical study of a perfect, linear elastic thin-walled tube under external pressure and indicated that the buckling of an infinite tube can be simplified to a plane strain problem. The formula for the critical external pressure of eigenvalue buckling is [37]

p co = Et 3 /[4(1 − µ 2 )R 3 ]    (1)

where p co is the buckling pressure of the intact pipeline, E is the Young's modulus, t is the wall thickness, µ is the Poisson's ratio and R is the mean radius.
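Equation (1) can be evaluated directly, together with the yield pressure p y = σ y t/R that normalizes it in Equation (2) below; in this sketch the material and geometry values are illustrative (an X65-like steel and an assumed D/t), not the specimens of this study.

```python
def elastic_buckling_pressure(E, t, R, mu=0.3):
    """Eigenvalue buckling pressure of an intact tube, Eq. (1):
    p_co = E * t**3 / (4 * (1 - mu**2) * R**3)."""
    return E * t**3 / (4.0 * (1.0 - mu**2) * R**3)

def normalized_buckling_pressure(E, t, R, sigma_y, mu=0.3):
    """Eq. (2): p_co / p_y with yield pressure p_y = sigma_y * t / R."""
    p_y = sigma_y * t / R
    return elastic_buckling_pressure(E, t, R, mu) / p_y

# Illustrative values only (X65-like steel, assumed geometry):
E, sigma_y = 207e9, 450e6        # Pa
t, R = 0.02, 0.16                # m  (D/t = 16, not from the paper)
print(elastic_buckling_pressure(E, t, R) / 1e6, "MPa")
print(normalized_buckling_pressure(E, t, R, sigma_y))
```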
The normalized critical external pressure can be written as

p_co/p_y = E t² / [4σ_y (1 − µ²) R²] (2)

where p_y is the yield pressure, p_y = σ_y t/R, and σ_y is the yield stress. Owing to the production process, an initial ovality is usually present on a section of the pipeline, so an initial ovality parameter is introduced into the formula. Assuming that the initial ovality is uniformly distributed along the pipeline axis, the initial ovality defect is introduced through

w_0 = −∆_0 R cos 2θ (3)

where w_0 is the radial displacement, R is the mean radius, θ is the polar angle and ∆_0 is the initial ovality, defined as

∆_0 = (D_max − D_min) / (D_max + D_min) (4)

where D_max and D_min are the maximum and minimum outer diameters of the steel pipeline, respectively. The collapse pressure of a steel pipeline with initial ovality can then be expressed in the form of Equation (5), where p_c is the collapse pressure. For corroded steel pipelines (Figure 1), the circumferential width and radial depth of the defect are added to Equation (5) to give Equation (6), where d and θ_c are the depth and polar angle of the corrosion defect, respectively. The influence of the axial force is considered further: a function h related to the axial force is added to Equation (6) to give Equation (7), where T and T_0 are the axial force and yield axial force, respectively, T_0 = πσ_y D t, and D is the outer diameter of the steel pipeline. A parameter analysis based on the experimentally verified finite element model was conducted in this study, and by fitting the parameter-analysis results, the values of the parameters f, g and h in Equation (7) were determined.
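As a quick numerical check (not part of the original paper), the intact-pipe quantities in Equations (1) and (2) can be evaluated for the X65 test geometry used later (D = 273 mm, t = 12 mm, E = 172 GPa, σ_y = 376 MPa); Poisson's ratio µ = 0.3 and R = (D − t)/2 are assumptions here.

```python
# Minimal sketch (not from the paper): Eqs. (1)-(2) for an intact pipe.
# Assumed inputs: mu = 0.3 and mean radius R = (D - t)/2.

def buckling_pressure(E, t, R, mu):
    """Eq. (1): elastic collapse pressure of an intact, long pipe, MPa."""
    return E * t**3 / (4.0 * (1.0 - mu**2) * R**3)

def yield_pressure(sigma_y, t, R):
    """Yield pressure p_y = sigma_y * t / R, MPa."""
    return sigma_y * t / R

E, sigma_y, mu = 172e3, 376.0, 0.3   # MPa, MPa, - (mu assumed)
D, t = 273.0, 12.0                   # mm (X65 test pipes)
R = (D - t) / 2.0                    # mean radius, mm (assumed definition)

p_co = buckling_pressure(E, t, R, mu)
p_y = yield_pressure(sigma_y, t, R)
print(f"p_co = {p_co:.1f} MPa, p_y = {p_y:.1f} MPa, p_co/p_y = {p_co / p_y:.2f}")
```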
Material Test
Three tensile specimens (X65) were taken from steel pipelines employed in actual engineering. Round-rod specimens were taken along the axial direction of the pipeline. Figure 2 shows the sampling dimensions of the tensile sample. Each tensile specimen was composed of three parts: a clamping section, a transition section and a parallel section. The thickness and width of each section were measured twice with a vernier caliper and averaged. The minimum measured thickness and width of the sample were used to calculate its initial cross-sectional area.

The PLW-100 tensile testing machine was used to measure the material parameters of the pipeline steel. The entire test was controlled by the control system, and the load, displacement, deformation, test speed and test curve were monitored and displayed dynamically in real time. An extensometer and a static resistance strain gauge were used to measure the deformation and strain of the sample. The measuring range of the SCDY-1 double-sided extensometer and YG-26 static resistance strain gauge used in the experiment was 25/50 mm, and the accuracy of the static resistance strain gauge was 0.1% [38]. The relative resistance ∆F and the average relative strain ∆ε were determined using a static resistance strain gauge pasted on the surface of the test piece. The test loading rate was 0.1 mm/min. The Poisson's ratio µ and elastic modulus E of the material were calculated from the relative resistance ∆F and the average relative strain ∆ε. After the test, the two fractured parts were fitted tightly together at the fracture so that the axis formed a straight line, and a vernier caliper was used to measure the gauge length L_b after fracture. Figure 3 shows the round-bar tensile specimen and the necking fracture of the specimen after the test.
The measured elastic modulus E and yield stress σ_y of X65 steel were 172 GPa and 376 MPa, respectively; in the fitted stress-strain relation, where ε and σ are the uniaxial strain and uniaxial stress, the material coefficient α and the strain hardening parameter β were 0.009 and 7.5, respectively.
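The fitted stress-strain relation itself is not reproduced in the text. For illustration only, the sketch below assumes a Ramberg-Osgood-type law, ε = (σ/E)[1 + α(σ/σ_y)^(β−1)], a common choice for pipeline steels with a material coefficient α and hardening exponent β; the actual fitted form used in the study may differ.

```python
# Hedged sketch: an assumed Ramberg-Osgood-type stress-strain law.
# The paper's exact fitted relation is not reproduced; this form is
# an illustration using the reported E, sigma_y, alpha and beta.
E, sigma_y = 172e3, 376.0   # MPa (measured)
alpha, beta = 0.009, 7.5    # reported fitting coefficients

def strain(sigma):
    """Uniaxial strain for uniaxial stress sigma (assumed R-O form)."""
    return (sigma / E) * (1.0 + alpha * (sigma / sigma_y) ** (beta - 1.0))

for s in (200.0, 376.0, 450.0):  # illustrative stress levels, MPa
    print(f"sigma = {s:6.1f} MPa -> eps = {strain(s):.5f}")
```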
Full-Scale Buckling Test of Steel Pipeline
A collapse test of corroded steel pipelines was carried out. As shown in Figure 5, a full-size high-pressure test chamber with a total length of 11.8 m and an inner diameter of 1.2 m was used. As shown in Figure 6, the test specimens were three X65 steel pipelines 5000 mm in length, 273 mm in outer diameter and 12 mm in wall thickness. The three specimens had dual external corrosion defects distributed in the axial, circumferential and diagonal directions, respectively. The defect sizes are listed in Table 1, where L_C and c are the length and arc length of the defect, respectively, and S_L and S_C are the axial and circumferential spacings between the two defects.

Both ends of each test piece were sealed with flanges, and the piece was hoisted into the test chamber with a crane. The flanges were connected to the test chamber with bolts. The test chamber was then sealed and filled with water, and external pressure was applied after the tightness had been checked. The maximum pressure in the chamber during the experiment was defined as the collapse pressure. The pressure histories are shown in Figure 7. The pressure increased linearly and monotonically until the critical buckling pressure was reached, and then dropped sharply after the pipeline collapsed. The pipelines after collapse are shown schematically in Figure 8. The deformation was concentrated mainly in the corrosion defect area of each pipeline.
Establishment of the Finite Element Model
In this section, a finite element model is established to simulate the buckling failure behavior of a corroded steel pipeline under external pressure and axial force, and the commercial software ABAQUS was used to numerically calculate the collapse pressure of the corroded steel pipeline [40,41]. The 8-node hexahedral linear reduced-integration element (C3D8R) was employed to build the model; this element type can be used for linear and complex nonlinear analyses involving contact, plasticity and large deformation. To improve computational efficiency, only a quarter model was established. The pipeline was divided into 50, 40 and 5 elements in the longitudinal, circumferential and radial directions, respectively. The boundary conditions and loads are shown in Figure 9. The X = 0 plane was set to be symmetric about the YZ plane, and the nodes on the Y = 0 plane were set to be symmetric about the XZ plane.
We constrained the Y-direction displacement of the bottom node on the Z-axis in the X = 0 plane to prevent rigid-body displacement from causing non-convergence of the calculation. A kinematic coupling was established at the end X = L, where L is the length of the steel pipeline, coupling the end surface to a reference point in the axial direction. The axial force was applied at the reference point, and the external pressure was applied uniformly to the outer wall of the pipeline. The failure of a corrosion-defected steel pipeline under external pressure involves large pre-buckling deformation and material plasticity, i.e., geometric and material nonlinearity. In this study, the Riks (arc-length) method was used to determine the buckling response of the corrosion-defected steel pipeline under external pressure and axial force. In the calculation, the axial force was first ramped to the specified value and held constant, and the external pressure was then applied until local buckling instability occurred. With the continuing development of offshore oilfields into the deep sea, operating temperatures continue to rise, and the design temperature can sometimes reach 100 °C. The axial force caused by temperature can be estimated by

T = E A λ ∆T (9)

where A is the cross-sectional area of the steel pipeline, λ is the thermal expansion coefficient of the steel pipeline material, with a value of 11.7 × 10⁻⁶, and ∆T is the temperature difference. The calculated range of the axial force T is approximately 0.2T_0 to 0.8T_0.
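The resulting load levels can be checked directly. The sketch below (not from the paper) evaluates Equation (9) and normalizes by T_0 = πσ_y D t for the X65 test geometry; the temperature rises are illustrative values, not the study's load cases.

```python
# Sketch of Eq. (9): thermally induced axial force in a restrained pipe,
# normalized by the yield axial force T0 = pi * sigma_y * D * t.
# The temperature differences below are illustrative, not from the paper.
import math

E, sigma_y = 172e3, 376.0   # MPa
D, t = 273.0, 12.0          # mm
lam = 11.7e-6               # thermal expansion coefficient (per degC)

A = math.pi / 4.0 * (D**2 - (D - 2.0 * t) ** 2)  # cross-sectional area, mm^2
T0 = math.pi * sigma_y * D * t                   # yield axial force, N

for dT in (20.0, 50.0, 80.0, 150.0):             # illustrative rises, degC
    T = E * A * lam * dT                         # Eq. (9), N
    print(f"dT = {dT:5.1f} degC -> T/T0 = {T / T0:.2f}")
```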
Validation of the Finite Element Model
To verify the adequacy of the meshing, a mesh-sensitivity analysis was carried out. The meshing scheme described above is called S1; coarser and finer meshing schemes are called S2 and S3, respectively. The results of the mesh-sensitivity analysis are shown in Figure 10. The results of S1 and S2 are very close, whereas those of S3 differ considerably from the first two schemes. S1 thus has sufficient accuracy, and further refinement of the mesh does not significantly change the results; S1 was therefore adopted as the meshing scheme in this study.

It is worth pointing out that, in studies of internal-pressure bursting of steel pipelines, chamfers are set at the junction of the defect and the intact part to eliminate the error caused by stress concentration. In this study, a pair of chamfered and unchamfered models were calculated with the same mesh scheme, as shown in Figure 11. The chamfer radius of the chamfered model was equal to half of the defect depth. Figure 12 compares the calculation results of the chamfered and unchamfered models. The results of the two models are very close, with a maximum difference of 1.93%. Based on these results, together with the large body of work on the collapse of steel pipelines under external pressure, the unchamfered model can be used instead of the chamfered model to study the collapse of corrosion-defected steel pipelines under external pressure and axial force.

Next, the finite element results are compared with those of the tests (Table 2). All errors are less than 12%, which verifies the accuracy of the collapse pressures calculated by the finite element model. The finite element and experimental collapse modes are compared in Figure 13; the collapse modes given by the finite element analysis are basically consistent with those obtained from the experiments. Therefore, the finite element model of this study is well suited to the calculation of steel pipeline collapse.
Influence of Diameter-to-Thickness Ratio on the Buckling Responses of Steel Pipelines
Figure 14 shows the effect of the diameter-to-thickness ratio on the collapse pressure for different corrosion defect depths, with the corrosion defect length, axial force and initial ovality held constant. For a fixed corrosion defect depth, the collapse pressure of a corroded steel pipeline decreases gradually as the diameter-to-thickness ratio increases. Compared with a deeply corroded steel pipeline, the collapse pressure of a shallowly corroded one shows a more pronounced downward trend. Thus, when the corrosion is shallow, the collapse pressure is more sensitive to the diameter-to-thickness ratio, whereas for corrosion depths d/t ≥ 0.7 the influence of the diameter-to-thickness ratio on the collapse pressure can almost be ignored.

Influence of Initial Ovality on the Buckling Responses of Steel Pipelines
To study the effect of initial ovality on the buckling response of corroded steel pipelines with different corrosion defect depths, the corrosion defect length, axial force and corrosion defect angle were kept unchanged.
Figure 15 shows the effect of ∆_0 on the collapse pressure of each corroded steel pipeline. The collapse pressure decreases as ∆_0 increases. For the steel pipeline with D/t = 15 and d/t = 0.1, the dimensionless collapse pressure decreases from 0.714 to 0.328 as ∆_0 changes from 0.1% to 1%; when d/t = 0.7, however, the dimensionless collapse pressure only decreases from 0.195 to 0.119 over the same ovality variation. Thus, for shallow corrosion defects, the effect of ∆_0 on the collapse pressure is more pronounced than for deeper defects.

Influence of Corrosion Defect Size on the Buckling Responses of Steel Pipelines
Figure 16 shows the effect of θ_c on the collapse pressure of pipelines with different corrosion defect depths. For a given d/t, the ratio p_c/p_0 decreases with the growth of θ_c, but when θ_c becomes larger, the tendency is reversed. The relation between collapse pressure and corrosion defect angle also depends on the defect depth. For a corrosion defect depth of d/t = 0.1, the influence of the corrosion defect angle on the collapse pressure can almost be ignored. For pipelines with corrosion defect depths d/t > 0.1, the collapse pressure decreases faster when the corrosion defect angle is small, and its influence on the collapse pressure decreases gradually as the angle increases.

As another important defect parameter, the depth of the corrosion defect also has an important impact on the buckling instability of a corroded steel pipeline [42,43]. As seen in Figure 17, the collapse pressure decreases gradually as d/t increases, and the trend differs with the corrosion defect angle θ_c. For θ_c = 0.1, the decline of the collapse pressure with increasing d/t becomes progressively faster; for θ_c ≥ 0.5, the collapse pressure decreases approximately linearly.
Figure 18 shows the effect of the corrosion defect length on the collapse pressure for different corrosion defect depths. As the length of the corrosion defect increases, the collapse pressure of the steel pipeline decreases gradually, with a rate of decline that depends on the corrosion depth. When d/t = 0.1 or 0.7, the defect length has little effect on the collapse pressure; when d/t = 0.3 or 0.5, it has a relatively large influence.
Influence of Axial Force on the Buckling Responses of Steel Pipelines
The buckling responses of steel pipelines under different axial forces are shown in Figure 19. In all cases, the collapse pressure of the steel pipeline decreases parabolically as the axial force increases, and the decline is faster when the defect depth is smaller. For example, for D/t = 15 and d/t = 0.1, p_c/p_0 decreases from 0.95 to 0.38 as T/T_0 increases from 0 to 0.8; when d/t = 0.7, p_c/p_0 only decreases by 0.15 over the same range of T/T_0. Thus, when the defect depth is small, the collapse pressure is more sensitive to the axial force. In general, compared with external pressure alone, the combined action of external pressure and axial force has a more pronounced impact on the buckling response and is more likely to cause buckling failure of the steel pipeline. Therefore, in practical engineering, the impact of axial force on the buckling of corroded steel pipelines cannot be ignored.

Influence of Material Properties on the Buckling Responses of Steel Pipelines
Figure 20 shows the effect of the yield stress on the buckling response. The collapse pressure increases as σ_y increases. For a steel pipeline with D/t = 15 and d/t = 0.1, the collapse pressure increases from 9.45 to 82.35 MPa as σ_y changes from 100 to 800 MPa; when d/t = 0.7, however, the collapse pressure only increases from 4 to 16.2 MPa over the same σ_y variation.
This shows that, for shallow corrosion defects, the effect of σ_y on the collapse pressure is more pronounced than for deeper defects.

Figure 21 shows the effect of the strain hardening parameter on the collapse pressure for different corrosion defect depths. In the initial stage, the collapse pressure of the steel pipeline decreases rapidly as the strain hardening parameter increases; a further increase in β results in a slower decline in p_c. When β > 5, the collapse pressure is not sensitive to changes in β.

The Formula for the Collapse Pressure of Corroded Pipelines
The research above shows that the diameter-to-thickness ratio, initial ovality and corrosion defect size of a steel pipeline have obvious effects on its collapse pressure. In this paper, a large number of parameters influencing the collapse pressure were analyzed, and an empirical formula for the collapse pressure was fitted to the parameter-analysis results. The empirical formula takes the form of Equation (7), following [44-47], and is written with fitting parameters a_1-a_9 as Equation (10). The fitting parameters were obtained by least-squares fitting and substituted into Equation (10).
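For readers who wish to reproduce this step, the sketch below shows how such fitting parameters can be obtained with a standard least-squares routine. Since Equations (10) and (11) are not reproduced in the text, the functional form, parameter count and data points used here are purely hypothetical placeholders.

```python
# Hedged sketch of the least-squares fitting step. Equation (10) is not
# reproduced in the text, so 'model' is a hypothetical reduced form with
# placeholder parameters a1..a3; the data rows are placeholders too.
import numpy as np
from scipy.optimize import curve_fit

def model(X, a1, a2, a3):
    """Hypothetical form: normalized collapse pressure vs (d/t, T/T0)."""
    d_t, T_T0 = X
    return a1 * (1.0 - d_t) ** a2 * (1.0 - T_T0**2) ** a3

# Columns: d/t and T/T0 from the FE parameter study; y: normalized p_c.
X = np.array([[0.1, 0.0], [0.3, 0.2], [0.5, 0.4], [0.7, 0.6]]).T
y = np.array([0.95, 0.70, 0.45, 0.20])

params, _ = curve_fit(model, X, y, p0=[1.0, 1.0, 1.0])
print("fitted a1..a3:", params)
```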
The empirical formula for the collapse pressure is then obtained as Equation (11). Its scope of application is: 7.5 ≤ …, 200 MPa ≤ σ_y ≤ 500 MPa, 0 ≤ d/t ≤ 0.6, 0 ≤ θ_c/π ≤ 0.6, 0 ≤ ∆_0 ≤ 3% and 0 ≤ T/T_0 ≤ 0.6. Figure 22 shows the values predicted by Equation (11) and compares them with the numerical results over the applicable parameter range. Because of the large number of fitting parameters, there are large errors at individual points. To examine the error in more detail, the error distribution histogram in Figure 23 was drawn. The overall correlation is very good: all errors are within 20%, and in most cases the error is no more than 15%.

Multiple corrosion defects form on the surfaces of deep-sea steel pipelines, so the applicability of the present formula to the collapse pressure of steel pipelines with multiple corrosion defects was also studied. Based on the single-defect model, finite element models of steel pipelines with two, three and four defects were established, as shown in Figure 24.

To calculate the collapse pressures of pipelines with multiple corrosion defects more accurately, an equivalent method was introduced. Assuming that the pipeline has k corrosion defects, the defects are divided into n and m groups in the circumferential and longitudinal directions, and the size of each group of corrosion defects is expressed by Equations (12)-(14), where L_ce, c_e and d_e are the equivalent length, equivalent circumferential width and equivalent depth, respectively. By substituting Equations (12)-(14) into Equation (11), the collapse pressure of a pipeline with multiple corrosion defects can be obtained. Figure 25 compares the collapse pressures given by the present formula and by the finite element models of steel pipelines with three and four defects. In all cases, the differences between the results of the present formula and the finite element results are within 9%.
Therefore, the collapse pressure of steel pipelines with multiple corrosion defects can be predicted using the present formula.

Conclusions
The buckling behavior of corroded steel pipelines under the combined action of external pressure and axial force was simulated using the ABAQUS software, and the collapse pressures of the steel pipelines were obtained. Full-scale external pressure tests on corroded steel pipelines were conducted, and comparison with the experiments verified the accuracy of the finite element model. Based on the finite element parameter analysis, an empirical formula for the critical collapse pressure of a corroded steel pipeline under the combined action of external pressure and axial force was obtained by fitting. The main conclusions of this research are as follows: (1) Axial force, initial imperfections, material properties and defect size have significant effects on the collapse response of a pipeline. Compared with deep corrosion defects, the influences of the various parameters on the collapse pressure are more pronounced for pipelines with shallow corrosion defects (d/t < 0.3). When d/t ≥ 0.7, the effects of D/t, ∆_0 and L_c on the collapse pressure can be ignored. As the size of the defect, i.e., its length, width or depth, increases, the collapse pressure decreases. In the case of θ_c = 0.1, the collapse pressure decreases progressively faster as d/t increases; for θ_c ≥ 0.5, the collapse pressure decreases approximately linearly.
(2) A formula for the collapse pressure of a corroded steel pipeline under the combined action of external pressure and axial force was established, and its accuracy was verified by comparison with the finite element results. (3) A formula was further proposed to conservatively calculate the collapse pressure of a steel pipeline with multiple corrosion defects. The formulas proposed in this study can be used for collapse pressure assessments of corroded pipelines in practical engineering, to determine whether they can continue to operate safely and thereby avoid unnecessary repairs and replacements.

Conflicts of Interest: The authors declare no conflict of interest.

Nomenclature
A — cross-sectional area of the steel pipeline
c — arc length of defect
d — depth of corrosion defect
D — outer diameter of the steel pipeline
D_max — maximum outer diameter of the steel pipeline
D_min — minimum outer diameter of the steel pipeline
E — elastic modulus
∆F — relative resistance
L — length of the steel pipeline
L_b — gauge length
L_c — length of defect
p — external pressure
p_c — collapse pressure
p_co — buckling pressure of intact pipeline
p_c^a — experimental collapse pressure
p_c^b — collapse pressure from finite element analysis
p_y — yield pressure
r — notch radius of tensile specimen
R — mean radius of steel pipeline
S_L — axial spacing between two defects
S_C — circumferential spacing between two defects
t — wall thickness
T — axial force
T_0 — yield axial force
The Effects of Genital Myiasis on the Diversity of Vaginal Flora in Female Bactrian Camels

Background: One of the most important diseases affecting the reproductive organs of Bactrian camels is genital myiasis. It can cause serious mechanical damage to the vaginal tissue of female Bactrian camels, and the accumulation of bacteria in the vagina can affect their health and reproductive ability. The effects of this damage are commonly found in the vaginal flora and the vaginal mucosal immune system. This study therefore examines the diversity of the vaginal flora and the differences between healthy Bactrian camels and those suffering from genital myiasis.

Results: Vaginal microbiota samples were collected from two groups of female Bactrian camels of the same age. Illumina MiSeq was used to sequence the V3-V4 hypervariable regions of the 16S rRNA gene in the samples, and the results showed that the vaginal microflora of the infected camels had a significantly greater OTU count. According to the alpha-diversity indices and vaginal pH measurements, the diversity indices of the infected camels' flora were higher than those of the normal camels, and the pH was lower than that of the normal camels (P = 0.006). There was no significant difference between the two groups in the abundance of the dominant genera of the Bactrian camel vagina (P > 0.05), indicating that the structure of the dominant vaginal flora of Bactrian camels has a certain stability.

Conclusions: Overall, this comparison revealed the differences and similarities between the vaginal flora of Bactrian camels in various health states. In addition, these data provide a reference point for understanding the types of bacteria involved in genital myiasis, which damages the healthy development of Bactrian camels.

Introduction
The Bactrian camel is one of the domestic animals unique to China. It mainly lives in the hot, arid Gobi and desert regions of northwestern China and is known as the "boat of the desert" (Mengli et al., 2006; Ji et al., 2010; Zhichao et al., 2016). For a long time, the development of the Bactrian camel breeding industry has been hampered by genital myiasis, which has brought serious economic losses to local herders. Genital myiasis of Bactrian camels is a serious parasitic disease: larvae of Wohlfahrtia magnifica (Schiner, 1862) parasitize the perineal and vaginal region of Bactrian camels and are responsible for a severe obligatory traumatic myiasis (Robbins et al., 2010). Genital myiasis is distinctly seasonal, occurring from May to September in summer and autumn (Kunichkin et al., 1981; Lungu et al., 1985; Hadani et al., 1989; Valentin et al., 1997). Clinical symptoms manifest as severe mechanical damage to the affected tissue and mucosal sites, with harmful effects such as local inflammation, anxiety and anorexia in diseased camels (Valentin et al., 1997; Giangaspero et al., 2011; Sazmand et al., 2017). Through long-term experimental observation, we found that although the diseased camels' vaginal wounds were exposed to the external environment, they were rarely infected or purulent. When the larvae of Wohlfahrtia magnifica detached from the host, 94.5% of the diseased camels' wounds recovered spontaneously (Schumann et al., 1976). In addition, other important elements comprise the vaginal microenvironment.
The vaginal mucosa of healthy animals is colonized by an equilibrated and dynamic community of aerobic, facultative anaerobic and obligate anaerobic microbes (Srinivasan et al., 2021).

Sample Collection
No antibiotics or antifungal drugs had been used systemically within one month of sampling. Sterile procedures were applied to the sampling area, and routine sterile operations were strictly followed before each sampling. Under aseptic conditions, the female camel's vagina was opened and a swab was rolled five times along the vaginal wall to collect vaginal secretions. The swabs were quickly placed in sterile 5 mL cryotubes, labeled, stored in liquid nitrogen or a −80 °C refrigerator, and later used for extraction of the 16S rRNA gene. Shortly afterwards, the pH of each sample was measured with an UltraBasic pH meter (Denver Instruments, Arvada, CO, United States).

Bacterial DNA Isolation
The thawed sample was centrifuged at 10,000 r/min for 10 minutes to collect bacterial cells, and the supernatant was discarded. Total DNA was extracted from each sample using a vaginal-swab genomic DNA kit (Qiagen QIAamp DNA Mini Kit) according to the manufacturer's instructions. The extracted DNA was stored at −20 °C. DNA extraction quality was checked by 0.8% agarose gel electrophoresis, and the DNA was quantified with an ultraviolet spectrophotometer.

Sequencing of 16S rRNA
In combination with the fluorescence quantification results, the samples were pooled in proportions corresponding to the required sequencing amount of each sample. The processed samples were sent to Beijing WEISHENGTAI Co., Ltd. for paired-end 2 × 300 bp sequencing on the Illumina HiSeq 2000 platform.

Sequence Read Processing and Statistical Analysis
Basic statistical analysis was performed with SPSS Statistics 20.0. Pairwise comparisons of measurement data conforming to the normal distribution were performed using a two-independent-samples test, with P < 0.05 considered statistically significant.

Vaginal pH
The vaginal pH of all 23 female Bactrian camels was measured. The vaginal pH of the healthy group ranged from 7.47 to 8.23, with an average of 7.85 ± 0.13, whereas that of the diseased group ranged from 7.18 to 7.61, with an average of 7.41 ± 0.11. The vaginal pH thus differed significantly between the normal and diseased groups (p = 0.006), with the diseased group lower than the normal group.

Sequencing Information
After quality control and chimera removal, a total of 1,644,139 reads were obtained for all 23 samples, with an average of 71,484 reads per sample (Table 1). The diseased-group samples yielded a total of 744,455 reads, with an average of 77,446 ± 11,214 reads per sample; the normal-group samples yielded a total of 899,684 reads, with an average of 69,206 ± 11,047 reads per sample. The difference in the number of optimized sequences between the two groups was not statistically significant (P > 0.05).
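The group comparisons above were run in SPSS; an equivalent two-independent-samples test can be reproduced as follows. The sketch uses SciPy's Welch variant (an assumption, since the SPSS settings are not stated) and hypothetical pH values rather than the measured data.

```python
# Sketch of the two-independent-samples test used for the group
# comparisons (SPSS in the study). Welch's variant is assumed; the
# pH arrays are hypothetical placeholders, not the measured data.
import numpy as np
from scipy import stats

ph_normal = np.array([7.85, 7.92, 7.78, 7.70, 8.01])    # hypothetical
ph_diseased = np.array([7.41, 7.35, 7.52, 7.45, 7.28])  # hypothetical

t_stat, p_value = stats.ttest_ind(ph_normal, ph_diseased, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```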
A total of 1,845 OTUs were detected; 1,689 and 1,267 OTUs were detected in the diseased and healthy groups, respectively, and 1,111 OTUs were shared between the two vaginal environments (Fig. 1). Alpha-diversity was assessed using the observed OTUs, Chao1, ACE, Simpson and Shannon diversity indices; the results are presented in Table 1. No significant differences in alpha-diversity existed between the healthy and diseased samples for any of these indices (P > 0.05). Nevertheless, the vaginas of diseased Bactrian camels had a greater number of observed OTUs than those of healthy camels, increased richness as measured by Chao1 and ACE, and greater diversity as measured by the Shannon and Simpson indices (Table 1). Beta-diversity was also analyzed to examine differences in microbial communities between samples. Using an OTU-centric approach, principal coordinate analysis (PCoA) with weighted and unweighted UniFrac distance matrices was employed to compare the phylogenetic divergence between samples from ill camels and healthy camels (Fig. 2). The results showed that the healthy camel vaginal samples clustered more closely in both the weighted and unweighted UniFrac distance matrices. In addition, ANOSIM analysis showed a significant difference between the vaginal samples of ill and healthy female camels (P = 0.033). The R statistic (R = 0.1483) showed that the between-group differences were greater than the within-group differences, confirming that the grouping was meaningful. Taxonomic composition analysis According to the results of OTU classification and taxonomic assignment, the dominant vaginal flora and their average relative abundances in the healthy and diseased groups were identified at the phylum level, with Firmicutes being dominant. The visualization tool GraPhlAn (Asnicar et al., 2015) was used to build a hierarchical tree of the composition of the sample population at each taxonomic level (Fig. 3), from which more information is evident: each taxonomic unit is distinguished by a different color, and its abundance is reflected by the node size. The Metastats statistical algorithm (http://metastats.cbcb.umd.edu/) (White et al., 2009), called through the Mothur software, was used to assess all taxonomic units across the sample population. The differences in sequence counts (i.e., absolute abundance) between groups were analyzed and compared pairwise for each taxon at the phylum and genus levels. We found four taxa with significant differences at the phylum level (Fig. 4), namely SR1 (p = 0.030, q = 0.120), Planctomycetes (p = 0.030, q = 0.120), Gemmatimonadetes (p = 0.041, q = 0.120) and Elusimicrobia (p = 0.048, q = 0.120). There were 51 taxa with significant differences at the genus level (Rooks, 2016). This study is a basic comparative analysis: by comparing the differences in the structure and diversity of the vaginal flora of healthy Bactrian camels and camels with genital myiasis, we analyzed the role of the vaginal microecosystem of Bactrian camels in their immunity and recovery stages after infection with vaginal myiasis.
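For readers reproducing the alpha-diversity values in Table 1 above, the indices can be computed directly from an OTU count vector. A minimal numpy sketch follows; the `counts` vector is a toy example, not the study's data, and the Chao1 estimator uses the standard singleton/doubleton correction.

```python
# Alpha-diversity indices from a toy OTU count vector.
# Shannon H = -sum(p ln p); Simpson D = 1 - sum(p^2);
# Chao1 = S_obs + F1^2 / (2 F2), F1/F2 = singleton/doubleton counts.
import numpy as np

counts = np.array([120, 85, 40, 7, 3, 2, 1, 1, 1])  # reads per OTU (toy data)
counts = counts[counts > 0]

s_obs = counts.size                      # observed OTUs
p = counts / counts.sum()                # relative abundances
shannon = -np.sum(p * np.log(p))         # Shannon diversity index
simpson = 1.0 - np.sum(p**2)             # Simpson diversity index
f1 = np.sum(counts == 1)                 # singletons
f2 = np.sum(counts == 2)                 # doubletons
chao1 = s_obs + f1**2 / (2 * f2) if f2 > 0 else s_obs + f1 * (f1 - 1) / 2

print(f"OTUs={s_obs}, Shannon={shannon:.3f}, "
      f"Simpson={simpson:.3f}, Chao1={chao1:.1f}")
```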
A better understanding of these stages may provide a new approach for the prevention and treatment of genital myiasis in Bactrian camels and lead to positive outcomes in its clinical treatment. In this study, the bacterial phyla with the highest abundance identified in the two groups of Bactrian camel vaginal samples were Firmicutes, Proteobacteria, Fusobacteria and Bacteroidetes. These are among the most common phyla found in many environments, especially in host–microbiome relationships. Studies have shown that the proportions and relative abundances of these phyla are related to changes in host physiology. When we performed ANOSIM analysis on the samples, we found that even though there were differences among individuals within the same group, the within-group differences were clearly smaller than the between-group differences. We think this difference is reasonable. As a natural open channel, the vagina is susceptible to environmental microbes. The increase in the diversity and richness of the bacterial community in the vagina of the diseased camels can be explained by the fact that the vulva is affected by fly maggots, which cause swelling and deformation so that it cannot close completely, allowing large numbers of external bacteria to enter the vagina. However, the taxonomic composition analysis of the Bactrian camels showed that there was no significant difference in the overall structure of the vaginal flora, indicating that the vaginal microecology of Bactrian camels has a certain stability. In addition, immunomodulatory symbionts induce specific self-targeted responses that indirectly regulate immune responses to surrounding microorganisms (Ost and Round, 2018). In addition, this study also analyzed the microbiome of the Bactrian camel vagina to determine the relationship between the presence (or absence) of certain microbiota and the vaginal mucosal immune system. Overall, this study documents changes in the diversity of the vaginal microbiota in healthy camels and in those suffering from vaginal myiasis, in order to identify unique microbes that may be involved in vaginal mucosal immunity. It may also help determine changes in the microbiome associated with immune regulation that may be beneficial throughout the pathological cycle. Declarations Ethics approval and consent to participate The sampling process did not cause any damage to the vaginal mucosa of the Bactrian camels. In this experiment, the breeding environment complied with the standards for an ordinary animal laboratory facility in the China National Standard "Laboratory animal environment and facilities" (GB 14925-2010). The feeding of and experimental operations on the animals were in accordance with animal welfare requirements. All experimental procedures were approved by the Animal Protection and Use Committee of Inner Mongolia Agricultural University and strictly followed animal welfare and ethical guidelines. Consent for publication Not applicable. Availability of data and materials We have submitted the raw data as supplementary materials. Competing interests The authors declare that they have no competing interests. Funding This study was supported by the National Natural Science Foundation of China (Grant No. NSFC 31360591).
Authors' contributions EEDMT developed the research program and funded it; ZLK carried out the experiments, analysis and article writing; BH and HBX participated in the data analysis; ADD helped write the article; and the other authors participated in the sample collection. All authors read and approved the final manuscript. Figure 2 Principal coordinate analysis of vaginal samples from ill female camels and healthy female camels, using unweighted (A) and weighted (B) UniFrac metrics. Vaginal samples from ill female camels (n = 10) are represented by red squares and vaginal samples from healthy camels (n = 13) by blue circles. Figure 3 Overall taxonomic-level tree diagram of the samples based on GraPhlAn Note: The taxonomic tree shows the hierarchical relationship of all taxonomic units (represented by nodes) from phylum to genus (from the inner circle to the outer circle) in the sample population. Node size corresponds to the average relative abundance of the taxonomic unit. The top 20 units by relative abundance are also identified by letters in the figure (from phylum to genus, in order from the outer layer to the inner layer), and the shading color of each letter matches the corresponding node color. Figure 4 Abundance distribution of phylum-level taxa with significant differences between sample groups Figure 5 Abundance distribution of the top 20 taxa with significant differences at the genus level Note: The abscissa in the figure is the taxonomic unit showing a significant difference, and the ordinate is the sequence count of each taxon in each sample group. The box bounds represent the interquartile range (IQR), the horizontal line represents the median value, and the upper and lower whiskers extend to 1.5 times the IQR beyond the upper and lower quartiles. The symbol "•" indicates extreme values exceeding this range.
3,407.6
2021-05-14T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
Variant alternating Euler sums of higher order A family of alternating variant Euler sums of higher order is investigated. A number of different examples concerning the main theorem are given. A Log-PolyLog integral is also evaluated in terms of special functions. Introduction, preliminaries, and notations There are many particular cases describing the representation of alternating variant Euler sums in closed form; for instance, see [3,4,10,11]. The aim of this paper is to collect all these individual results and present them in a unifying general theorem describing their general nature in terms of parameter values. From this unifying theorem all the particular published examples follow directly. In this regard we study alternating variant Euler sums of the form (1). In this investigation we let $\mathbb{N}$, $\mathbb{C}$, $\mathbb{R}$, $\mathbb{Q}$, and $\mathbb{Z}$ denote the sets of positive integers, complex numbers, real numbers, rational numbers, and integers, respectively. The notation $S^{++}_{p,q}$ introduced by Flajolet and Salvy [5] will be employed in this study. The representation of these Euler sums in terms of special functions has its beginnings with Euler in 1742 in his communications with Goldbach. Nielsen [7] continued this area of study, and it is now known that $S^{++}_{p,q}$ can be evaluated in the cases $p = 1$, $p = q$, $p + q$ odd, and $p + q$ even with the pairs $(2, 4)$ and $(4, 2)$. There also exists the reciprocity identity, see [1] or [15]. The alternating Euler sum $S^{+-}_{p,q}$ can also be expressed in terms of special functions for odd weight $p + q$, for the pairs $(1, 3)$, $(2, 2)$, and for $q = 1$, $p \in \mathbb{N}$. The variant Euler sum may also be expressed in terms of special functions. In this investigation we explicitly give a closed-form representation of the alternating variant Euler sum (1) in terms of special functions in the case of even weight $p + q$. The two cases $(p, q) = (1, 2), (2, 1)$ have been published in the papers [3,4]. This extends the class of Euler sums which admit a representation in terms of special functions. The harmonic numbers $H_n$ are given by $H_n = \sum_{k=1}^{n} \frac{1}{k} = \gamma + \psi(n+1)$. Here $\gamma$ is the familiar Euler–Mascheroni constant (see, e.g., [21, Section 1.2]) and $\psi(z)$ denotes the digamma (or psi) function defined by $\psi(z) = \frac{d}{dz}\log\Gamma(z) = \frac{\Gamma'(z)}{\Gamma(z)}$, where $\Gamma(z)$ is the Gamma function (see, e.g., [21]). The Dirichlet eta function $\eta(z)$ is given by $\eta(z) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^z} = (1 - 2^{1-z})\,\zeta(z)$. The Dirichlet lambda function $\lambda(s)$ is defined as the term-wise arithmetic mean of the Dirichlet eta function and the Riemann zeta function: $\lambda(s) = \sum_{n=0}^{\infty} \frac{1}{(2n+1)^s} = \frac{1}{2}\left(\zeta(s) + \eta(s)\right)$. The Bernoulli numbers $B_n$ and the Euler numbers $E_n$ may be defined via the generating functions $\frac{t}{e^t - 1} = \sum_{n=0}^{\infty} B_n \frac{t^n}{n!}$ and $\frac{2}{e^t + e^{-t}} = \sum_{n=0}^{\infty} E_n \frac{t^n}{n!}$. It is noted that $B_{2n+1} = 0$ ($n \in \mathbb{N}$) and $E_{2n+1} = 0$ ($n \in \mathbb{Z}_{\geq 0}$). The first few of these Bernoulli and Euler numbers are $B_0 = 1$, $B_1 = -\frac{1}{2}$, $B_2 = \frac{1}{6}$, $B_4 = -\frac{1}{30}$, and $E_0 = 1$, $E_2 = -1$, $E_4 = 5$, . . . The polylogarithm function $\mathrm{Li}_p(z)$ of order $p$ is defined by $\mathrm{Li}_p(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^p}$ (10). The dilogarithm function $\mathrm{Li}_2(z)$ is given by $\mathrm{Li}_2(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^2} = -\int_0^z \frac{\ln(1-t)}{t}\,dt$. The polylogarithm function $\mathrm{Li}_p(z)$ of order $p$ in (10) can be extended beyond the unit disc (see, e.g., [21, p. 198], or [6]). The polygamma function $\psi^{(k)}(z)$, defined by $\psi^{(k)}(z) = \frac{d^k}{dz^k}\psi(z)$, has the recurrence $\psi^{(k)}(z+1) = \psi^{(k)}(z) + \frac{(-1)^k k!}{z^{k+1}}$. The generalized (or Hurwitz) zeta function $\zeta(s, z)$ is defined by $\zeta(s, z) = \sum_{n=0}^{\infty} \frac{1}{(n+z)^s}$. An important property of the generalized (or Hurwitz) zeta function is its relation to the polygamma function, $\psi^{(k)}(z) = (-1)^{k+1}\, k!\, \zeta(k+1, z)$ (15). The Dirichlet beta function $\beta(z)$ is defined by $\beta(z) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^z}$, which admits other representations such as $\beta(z) = 4^{-z}\left(\zeta\!\left(z, \tfrac{1}{4}\right) - \zeta\!\left(z, \tfrac{3}{4}\right)\right)$, and $\beta(2)$ is known as Catalan's constant. Euler sum representations of the form (1), in terms of special functions such as the Riemann zeta function, the Dirichlet beta function and others, are important in various applications of mathematics, and to the authors' knowledge no representation for the general case (1) exists in the literature.
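A minimal numerical sketch of the special functions defined above, using the mpmath library. The Dirichlet beta function is not a built-in, so it is evaluated here through the Hurwitz zeta identity β(s) = 4^{-s}(ζ(s, 1/4) − ζ(s, 3/4)) stated above.

```python
# Numerical sanity checks for the definitions above, via mpmath.
from mpmath import mp, harmonic, zeta, altzeta, polylog, psi, catalan, mpf

mp.dps = 25  # working precision (decimal digits)

print(harmonic(10))                # H_10 = 7381/2520
print(altzeta(2))                  # eta(2) = pi^2/12
print((zeta(2) + altzeta(2)) / 2)  # lambda(2) = pi^2/8
print(polylog(2, mpf(1) / 2))      # Li_2(1/2) = pi^2/12 - ln(2)^2/2
print(psi(1, 1))                   # psi^{(1)}(1) = zeta(2)

beta = lambda s: 4**(-mpf(s)) * (zeta(s, mpf(1)/4) - zeta(s, mpf(3)/4))
print(beta(2), catalan)            # beta(2) equals Catalan's constant G
```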
Other relevant articles on Euler sums include, for example, [2,8,19], and the excellent monographs [20–22]. The papers [9,14,16–18,23] also explored various other Euler sums. The Euler sum (1) cannot be evaluated directly using current CAS software packages. The main theorem The following main theorem is established. Theorem 2.1. Let $p \in \mathbb{Z}_{\geq 0}$, $t \in \mathbb{N}_{\geq 2}$ with $p + t$ an odd integer. Then the closed-form representation (19) holds. Proof. Let $|a| < 1$. Using the properties of the polylogarithm function (which can easily be confirmed in Mathematica), we obtain an identity in which $\zeta\!\left(t, \frac{1-a}{2}\right)$ is the Hurwitz zeta function. We now differentiate both sides of the resulting identity $p$ times with respect to $a$, which is permissible since the integrand is uniformly convergent on $|a| < 1$, and finally take the limit as $a$ approaches zero. Making the substitution involving $\frac{1+x}{2}$, we have, from the publication [10], the required auxiliary evaluations in the case where $p + t$ is of odd order (see [10]). From (21) we may then evaluate the sum, using the well-known relation between the polygamma function and the Hurwitz zeta function, together with the definition (15); applying the binomial expansion to the product of derivatives and combining these results together delivers the result (19), and the proof is finished. The integral (20) is obtained by the substitution $x = \tan(\theta)$. We note that the special case $t = 1$ is listed in the following corollaries. A number of corollaries follow from Theorem 2.1 and we express them in the following results. Corollary 2.2. Let $p = 0$ and replace $t$ by $2t - 1$, $t \in \mathbb{N}$. Then the corresponding closed-form formula holds. Proof. Follows directly from Theorem 2.1; the case $t = 1$ involves Catalan's constant $G$. Corollary 2.3. Let $t = 1$ and replace $p$ by $2p$, $p \in \mathbb{N}$. Then the corresponding closed-form formula holds. Some particular instances of the above identities are demonstrated in the following example.
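Since the displayed sums and closed forms are not reproduced here, the following sketch verifies, as a stand-in, a classical alternating Euler sum of the same family, Σ_{n≥1} (−1)^{n+1} H_n / n² = 5ζ(3)/8. It is not the paper's equation (1), only a numerical illustration of how such identities can be checked.

```python
# Numerical check of the classical alternating Euler sum
#   sum_{n>=1} (-1)^{n+1} H_n / n^2 = 5 zeta(3) / 8.
# The terms H_n/n^2 decrease, so the alternating-series truncation error
# is at most H_{N+1}/(N+1)^2 ~ 3e-10 for N = 2*10^5.
import math

N = 200_000
H = 0.0
terms = []
for n in range(1, N + 1):
    H += 1.0 / n                           # harmonic number H_n
    terms.append((-1) ** (n + 1) * H / n**2)
lhs = math.fsum(terms)

zeta3 = sum(1.0 / k**3 for k in range(1, 100_000))  # zeta(3) approximation
print(lhs, 5 * zeta3 / 8)                  # both ~ 0.7512855...
```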
1,348.2
2023-03-08T00:00:00.000
[ "Mathematics" ]
Dynamics on a submanifold: intermediate formalism versus Hamiltonian reduction of Dirac bracket, and integrability. We consider a Lagrangian dynamical system forced to move on a submanifold $G_\alpha(q^A) = 0$. If for some reason we are interested in knowing the dynamics of all original variables $q^A(t)$, the most economical approach would be a Hamiltonian formulation on the intermediate phase-space submanifold spanned by the reducible variables $q^A$ and an irreducible set of momenta $p_i$, $[i] = [A] - [\alpha]$. We describe and compare two different possibilities for establishing the Poisson structure and Hamiltonian dynamics on the intermediate submanifold: Hamiltonian reduction of the Dirac bracket, and the intermediate formalism. As an example of the application of the intermediate formalism, we deduce on this basis the Euler–Poisson equations of a spinning body, establish the underlying Poisson structure, and write their general solution in terms of the exponential of the Hamiltonian vector field. Consider a mechanical system that can be described with the help of a non-singular Lagrangian $L(q^A, \dot q^A)$, defined on a configuration space with generalised coordinates $q^A(t)$, $A = 1, 2, \ldots, n$. Suppose the "particle" $q^A$ is then forced to move on a $k$-dimensional surface $S^k$ given by the algebraic equations $G_\alpha(q^A) = 0$. The task is to construct the Hamiltonian formulation for this theory. There are three different possibilities for doing this. Let us first shortly describe and compare them. (A) The first possibility is to work with unconstrained variables. Let $x^i$, $i = 1, 2, \ldots, k$ be local coordinates on $S$. Then the equations of motion follow from the Lagrangian $\tilde L(x^i, \dot x^i) \equiv L(q^A(x^i), dq^A(x^i)/dt)$. If $\tilde L$ is also non-singular, we introduce the conjugate momenta $p_i$ for $x^i$, the Hamiltonian $H(x^i, p_j)$, and the canonical Poisson bracket $\{x^i, p_j\} = \delta^i_j$. Then the Hamiltonian equations are $\dot x^i = \{x^i, H\}$, $\dot p_i = \{p_i, H\}$. The transition to the independent variables $x^i$ is not always desirable. For instance, in the case of a spinning body, the $q^A$ variables are the 9 elements of an orthogonal $3 \times 3$ matrix $R_{ij}$ (so $G_\alpha = 0$ reads $R^T R - 1 = 0$). To describe a rigid body, we need to know the evolution of $q^A$ and not $x^i$. (B) The second possibility is to work with the original variables using Dirac's version of the Hamiltonian formalism [1–3]. The equations of motion follow from the modified Lagrangian action, where the constraints are taken into account with the help of auxiliary variables $\lambda_\alpha(t)$, of the form $S = \int dt\left[L(q^A, \dot q^A) + \lambda_\alpha G_\alpha(q^A)\right]$ (1) [3,4]. We then pass to the Hamiltonian formulation, introducing conjugate momenta $p_A$, $p_{\lambda\alpha}$ for all original variables $q^A$, $\lambda_\alpha$. The Hamiltonian equations are then obtained with the help of the canonical Poisson brackets $\{q^A, p_B\} = \delta^A_B$, $\{\lambda_\alpha, p_{\lambda\beta}\} = \delta_{\alpha\beta}$, and a Hamiltonian of the form $H(q^A, p_B, \lambda_\alpha, p_{\lambda\beta})$. The resulting equations depend on the auxiliary variables $\lambda_\alpha$ and $p_{\lambda\alpha}$. The systematic method for excluding them is to pass from the canonical to the Dirac bracket. The latter is constructed with the help of the second-class constraints that appear in the Hamiltonian formulation of the theory (1). Working with the Dirac bracket, all terms with auxiliary variables disappear from the final equations. This gives a Hamiltonian formulation on the phase space with coordinates $q^A$, $p_B$.
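Before turning to case (C), a minimal sympy sketch of possibility (A) for a concrete system: a particle of mass m forced onto a circle of radius l in a uniform gravitational field (a pendulum), with θ the unconstrained coordinate on the surface. The system and the symbol names are illustrative choices, not taken from the paper.

```python
# Possibility (A): substitute the parametric equations q^A(x^i) into L and
# pass to the Hamiltonian in the unconstrained coordinate theta.
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)

# Parametric equations q^A(x^i) of the constraint surface x^2 + y^2 = l^2
x, y = l * sp.sin(theta), -l * sp.cos(theta)

# Induced Lagrangian L(x^i, xdot^i) = L(q^A(x^i), d q^A / dt)
L = sp.simplify(m * (sp.diff(x, t)**2 + sp.diff(y, t)**2) / 2 - m * g * y)

thetadot, psym = sp.symbols('thetadot p')
L = L.subs(sp.diff(theta, t), thetadot)
p = sp.diff(L, thetadot)                 # conjugate momentum p = dL/d(thetadot)
H = sp.simplify(p * thetadot - L)        # Legendre transform
H = H.subs(thetadot, psym / (m * l**2))  # invert p = m l^2 thetadot
print(sp.simplify(H))  # p**2/(2*l**2*m) - g*l*m*cos(theta(t))
```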
(C) In the case of a spinning body, a kind of intermediate formulation arises between (A) and (B). The freely spinning body can be described by the 9 + 3 Euler–Poisson equations $\dot R_{ij} = -\epsilon_{jkm}\Omega_k R_{im}$, $I\dot\Omega = [I\Omega, \Omega]$, where $I$ is a numerical $3 \times 3$ matrix. They turn out to be Hamiltonian equations [5–9], with the configuration-space variables assembled into a $3 \times 3$ matrix $R_{ij}(t)$, while $\Omega_i(t)$ are three components of momenta. There are 9 redundant coordinates $R_{ij}$, but only 3 independent momenta $\Omega_i$. So, if in case (A) we worked with the unconstrained set $(x^i, p_j)$, and in case (B) with the redundant set $(q^A, p_B)$, then now we have an intermediate situation: $(q^A, p_j)$. This gives the most economical Hamiltonian formulation of a theory in which we are interested in knowing the dynamics of all variables $q^A$. An intermediate formulation for the theory (1) can be obtained in the Dirac formalism by first constructing the Dirac bracket (which is a degenerate Poisson structure on the original phase space $(q^A, p_B)$), and then reducing it to the submanifold $\Phi_\alpha = 0$. Let us call it the intermediate submanifold. In the present work we develop an alternative way, allowing one to construct the Poisson structure on this submanifold without the need for the Dirac bracket. Roughly speaking, this works as follows. For any theory of the form (1) with a positive-definite Lagrangian $L$, we present a universal procedure to find (non-canonical) phase-space coordinates $(q^A, \pi_i, \pi_\alpha)$ with special properties. They are constructed with the help of the matrix $G_{\alpha A} \equiv \partial G_\alpha/\partial q^A$ and the fundamental solutions of the linear system $G_{\alpha A}x^A = 0$. The intermediate formulation of the theory (1) is obtained by first rewriting the Hamiltonian formulation of the unconstrained theory $L$ in terms of the new coordinates, and then excluding the variables $\pi_\alpha$ from all resulting expressions with the help of the constraint $\Phi_\alpha = 0$. In particular, the Poisson structure on the intermediate submanifold turns out to be the canonical Poisson bracket of the original variables $(q^A, p_B)$, first rewritten in terms of the new coordinates $(q^A, \pi_B)$ and then restricted to this submanifold. As we saw above, an interesting application of the intermediate formalism lies in the field of spinning-body dynamics. This issue is also of interest in modern studies of various aspects related to the construction and behaviour of spinning particles and rotating bodies in external fields beyond the pole-dipole approximation [10–18]. For simple mechanical systems (a point particle in an external field, or several mutually interacting particles), the equations of motion are postulated on the basis of an analysis of experimental data. Unfortunately, a spinning body turns out to be too complex a system to find its equations in this way. So, even writing the equations of motion of a spinning body turns out to be a non-trivial task. At the dawn of the development of mechanics, this was considered one of the central problems, for which several branches of classical mechanics were developed: Lagrangian mechanics on a submanifold, Hamiltonian mechanics with constraints, symmetry groups and their relation with conservation laws and integrals of motion, integrable systems, and so on. As a result, the basic theory of a rotating body was formulated in the works of Euler, Lagrange, Poisson, Poinsot and many others [19–22]. However, a didactically systematic formulation and application of these methods to various problems of rigid-body dynamics is still regarded as not an easy task [6,7]. For instance, J. E.
Marsden, D. D. Holm and T. S. Ratiu in their work [6], dated 1998, write: "It was already clear in the last century that certain mechanical systems resist the usual canonical formalism, either Hamiltonian or Lagrangian, outlined in the first paragraph. The rigid body provides an elementary example of this." Second-order Lagrangian equations of a spinning body can be obtained as the conditions of extremum of a variational problem, where the body is considered as a system of particles subjected to holonomic constraints [8,9]. However, the most convenient for applications turn out to be the equations written in first-order (Hamiltonian) form (3). So, it is desirable to have a formalism that allows one to deduce these equations starting from the Lagrangian variational problem by direct application of the standard prescriptions of classical mechanics for the passage from the Lagrangian to the Hamiltonian formulation. The intermediate formalism seems to be the most economical way to do this. It should also be noted that a thorough analysis of the Lagrangian and Hamiltonian formulations reveals some specific properties of the formalism which are not always taken into account in the literature when formulating the laws of motion and applying them. In several cases this even leads to the need to revise some classical problems of the dynamics of a spinning body [23,24]. The remainder of the paper is organized as follows. In Sect. II we shortly discuss the dynamics on a surface $G_\alpha(q^A) = 0$ in terms of unconstrained variables, and outline the Liouville integration procedure in a form convenient for the later comparison with the integration method based on the Hamiltonian vector field. In Sect. III we describe the Hamiltonian reduction to the intermediate submanifold with use of the Dirac bracket. In Sect. IV we present our intermediate formalism for establishing the Poisson structure and Hamiltonian equations on the intermediate submanifold. In Sect. V we present the method of integration of first-order equations with the help of the Hamiltonian vector field. In Sect. VI we illustrate the intermediate formalism on a simple example of a point particle forced to move on a sphere. In Sect. VII we use the intermediate formalism to establish the Poisson structure that lies behind the Euler–Poisson equations of a spinning body, and write their general solution in terms of a power series with respect to the evolution parameter, with the coefficients determined by derivatives of the Hamiltonian vector field. II. MOTION ON A SURFACE IN TERMS OF UNCONSTRAINED VARIABLES AND INTEGRABILITY ACCORDING TO LIOUVILLE. We assume that the original Lagrangian is non-singular, $\det M_{AB} \neq 0$ with $M_{AB} \equiv \partial^2 L/\partial\dot q^A\partial\dot q^B$ (4), and that the particle $q^A$ is forced to move on the $k$-dimensional surface $S^k$ determined by $n - k$ functionally independent equations $G_\alpha(q^A) = 0$, $\alpha = 1, 2, \ldots, n - k$ (5). Let $x^i$, $i = 1, 2, \ldots, k$ be local coordinates on $S^k$, and $q^A(x^i)$ the parametric equations of $S^k$: $G_\alpha(q^A(x^i)) \equiv 0$ for any $x^i$. Then the equations of motion follow from the unconstrained Lagrangian $\tilde L(x^i, \dot x^i) = L\!\left(q^A(x^i), \frac{\partial q^A}{\partial x^i}\dot x^i\right)$ (6), and read as the corresponding Euler–Lagrange equations (7). By construction, for any solution $x^i(t)$ to the problem (7), the trajectories $q^A(x^i(t))$ lie on the surface (5). This recipe has a clear justification [3,4,25] for Lagrangians of the form $T = \frac{1}{2}m_A(\dot q^A)^2 - U(q^A)$ with $m_A > 0$. For more general Lagrangians it should be taken as the definition of a particle forced to live on a surface. We add one more technical restriction, assuming that the matrix $M_{AB}$ is positive-definite, that is, $Y^T M Y > 0$ for any non-zero column $Y$ (8).
Then the matrix $\tilde M_{ij} \equiv \frac{\partial q^A}{\partial x^i}M_{AB}\frac{\partial q^B}{\partial x^j}$ is non-degenerate, see Appendix. In view of this, for a positive-definite $L(q^A, \dot q^A)$, the Lagrangian $\tilde L(x^i, \dot x^j)$ is non-singular. The Hamiltonian formulation in terms of unconstrained variables can be obtained as follows. Introduce the conjugate momenta $p_i = \partial\tilde L/\partial\dot x^i$ for $x^i$. As $\det\tilde M_{ij} \neq 0$, these equations can be resolved with respect to $\dot x^i$, say $\dot x^i = v^i(x^j, p_k)$. Using these equalities, we construct the Hamiltonian by excluding $\dot x^i$ from the expression $H = p_i\dot x^i - \tilde L(x^i, \dot x^j)$. Then, with use of the canonical Poisson brackets $\{x^i, p_j\} = \delta^i_j$, the Hamiltonian equations of the theory are $\dot x^i = \{x^i, H\}$, $\dot p_i = \{p_i, H\}$ (9). If the Hamiltonian does not explicitly depend on time, it is an integral of motion. If, in addition, there are extra $k - 1$ integrals of motion, then, according to Liouville's theorem, a general solution to the equations of motion can be found in quadratures (that is, by calculating integrals of some known functions and doing algebraic operations). Liouville's theorem. Let the Hamiltonian equations (9) admit $k$ integrals of motion $F_1 = H, F_2, F_3, \ldots, F_k$. We assume that they are in involution, $\{F_i, F_j\} = 0$ (10), and functionally independent with respect to the momenta, $\det(\partial F_i/\partial p_j) \neq 0$ (11). Then the equations of motion are integrable in quadratures. Proof. The proof consists in formulating a recipe for constructing the general solution. (A) Consider the equations $F_i(x^i, p_j) = c_i = \mathrm{const}$ for the constant-level surface of the integrals of motion. Due to the condition (11), they can be solved with respect to $p_i$: $p_i = f_i(x^j, c_k)$ (12). We first confirm that the vector function $f_i$ is the gradient of some scalar function. Omitting $x^j$, which we temporarily consider as parameters, we have $F_i(f_j(c_k)) = c_i$, that is, $F_i$ and $f_j$ are mutually inverse transformations. Calculating the derivative of this equality with respect to $c_j$ we get $\frac{\partial F_i}{\partial p_k}\frac{\partial f_k}{\partial c_j} = \delta_{ij}$ (13). Contracting Eq. (10) with $\partial f_k/\partial c_i$ and using (13), we get the identity (14). Contracting $\partial f_b/\partial c_j$ with the derivative of (12) with respect to $x^k$ and using Eq. (13), we get an expression (15) for the derivative $\partial f_i/\partial x^k$. Together with (14), Eqs. (10) then imply that the quantities $p_i - f_i(x^k, c_j)$ are in involution, whence $\frac{\partial f_i}{\partial x^k} = \frac{\partial f_k}{\partial x^i}$ (16). According to (16), $f_i(x^k)$ is a curl-free vector field, so there is a potential $\Phi$ with $f_i = \partial\Phi/\partial x^i$ (17). As a result, we have demonstrated that the equations of the constant-level surface (12) can be written in the form $p_i = \partial\Phi(x^j, c_k)/\partial x^i$ (18). (B) According to Stokes' theorem, the line integral of a curl-free field does not depend on the choice of the integration path, and gives the potential $\Phi(x^i, c_j) = \int^x f_i\,dx^i$ (19). (C) Substituting the solution (18) into the equation $H(x^i, p_j) = c_1 \equiv E$, we obtain an identity. Then the function $S(x^i, c_j, t) = \Phi - Et$ obeys, by construction, the Hamilton–Jacobi equation $\frac{\partial S}{\partial t} + H\!\left(x^i, \frac{\partial S}{\partial x^j}\right) = 0$. According to the theory of canonical transformations (see Sect.
4.7 in [3]), the general solution to the Hamiltonian equations (9) with $2k$ integration constants $c_k$, $b_i$ can now be obtained by solving the algebraic equations $p_i = \frac{\partial S}{\partial x^i}$, $b_k = \frac{\partial S}{\partial c_k}$ (24) with respect to $x^i$ and $p_j$. The resolvability of the second equation is guaranteed by (22). As a result, the problem of integrating the Hamiltonian system (9) is reduced to the calculation of the line integral (19). In turn, this can be reduced to the calculation of definite integrals. To see this, let us specify the equations (24) to the case of a theory with two configuration-space variables $x^i = (x, y)$ and two integrals of motion $H(x, y, p_x, p_y) = E$ and $F(x, y, p_x, p_y) = c$. Solving these algebraic equations we get $p_x = f_x(x, y, E, c)$ and $p_y = f_y(x, y, E, c)$. Taking the path of integration to be the pair of intervals $(0, 0) \to (x, 0) \to (x, y)$, we obtain the potential $\Phi(x, y, E, c) = \int_0^x f_x(x', 0, E, c)\,dx' + \int_0^y f_y(x, y', E, c)\,dy'$ (25). Then Eqs. (24) read as two algebraic relations among $x$, $y$, $t$ and the integration constants (26), so the problem is reduced to the calculation of the four definite integrals indicated in these equations. III. MOTION ON A SURFACE IN TERMS OF ORIGINAL VARIABLES. To work with a particle on a surface in terms of the original variables, we can use the variational problem with the modified Lagrangian (1), where the constraints are taken into account with the help of auxiliary dynamical variables $\lambda_\alpha(t)$, called Lagrange multipliers. In all calculations they should be treated on equal footing with $q^A(t)$. In particular, looking for the equations of motion, we take variations with respect to $q^A$ and all $\lambda_\alpha$. The variation with respect to $\lambda_\alpha$ implies $G_\alpha(q^A) = 0$ (27), that is, the constraints arise as a part of the conditions of extremum of the action functional. So the presence of $\lambda_\alpha$ allows $q^A$ to be treated as unconstrained variables, which should be varied independently in obtaining the equations of motion. Taking the variation with respect to $q^A$ we get the Lagrangian equations (28). Computing the time derivative, these equations read in the resolved form (29), where $\tilde M^{AB}$ is the inverse of $M_{AB}(q^A, \dot q^B)$. The theories (29) and (7) turn out to be equivalent, see [3,4]. The auxiliary variables $\lambda_\alpha$ can be excluded from the system (28) (or (29)) as follows. For any solution $q^A(t)$, the identity $G_\alpha(q^A(t)) = 0$ implies $\dot G_\alpha = G_{\alpha A}\dot q^A = 0$. Calculating one more derivative, we get $G_{\alpha A}\ddot q^A + \partial_B\partial_A G_\alpha\,\dot q^A\dot q^B = 0$. Using the expression for $\ddot q^A$ from (29), we get a linear system (30) for $\lambda_\beta$ with coefficient matrix $C_{\alpha\beta}$. According to the Appendix, $C$ has an inverse matrix $\tilde C$, so we can separate $\lambda_\beta$ as in (31). Inserting this $\lambda_\beta$ into Eq. (28) or (29), we obtain closed equations for determining the physical variables $q^A(t)$.
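A minimal sympy sketch of the involution condition (10) entering Liouville's theorem of Sect. II above: for a point particle in a central potential on the plane, the Hamiltonian and the angular momentum are integrals of motion in involution. The example system is an illustrative choice, not taken from the paper.

```python
# Symbolic check that H and F = x*py - y*px are in involution, {H, F} = 0.
import sympy as sp

x, y, px, py = sp.symbols('x y px py')
r = sp.sqrt(x**2 + y**2)
V = sp.Function('V')

H = (px**2 + py**2) / 2 + V(r)   # Hamiltonian, F_1 = H
F = x * py - y * px              # angular momentum, second integral F_2

def pbracket(A, B, qs=(x, y), ps=(px, py)):
    """Canonical Poisson bracket {A, B} = dA/dq dB/dp - dA/dp dB/dq."""
    return sum(sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)
               for q, p in zip(qs, ps))

print(sp.simplify(pbracket(H, F)))   # 0  -> H and F are in involution
```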
Comment. If the condition (8) is not satisfied, the invertibility of $C$ is not guaranteed, and we need to continue the analysis of the system (30). The general procedure can be found in Appendix C of [2]. Here we will only show that in a theory with kinematic constraints the auxiliary variables can always be excluded from the equations for the physical variables. Without loss of generality, we can assume that the coordinates $q^A$ are enumerated in such a way that a non-vanishing minor of the matrix $G_{\alpha A}$ is located in the first $n - k$ columns, so that $\det(\partial G_\alpha/\partial q^\beta) \neq 0$ (32). Let us consider the original theory (1) in special coordinates $q'^A$, adapted to the surface and defined as follows: $q'^\alpha = G_\alpha(q^A)$, $q'^i = q^i$ (33). That is, we take the constraint functions $G_\alpha(q^A)$ as a part of the new coordinates. In the adapted coordinates our surface is just the hyperplane $q'^\alpha = 0$, and $q'^i$ can be taken as its local coordinates. For the inverse transformation we get $q^\alpha = \tilde G^\alpha(q'^A)$, $q^i = q'^i$ (34), where $\tilde G^\alpha(q'^A)$ is the solution to the equations $q'^\alpha = G_\alpha(q^\alpha, q'^i)$: $G_\alpha(\tilde G^\beta(q'^A), q'^i) = q'^\alpha$. An invertible change of variables can be made directly in the Lagrangian (1); this leads to an equivalent formulation of the original theory, see Sect. 1.4.2 in [3]. Substituting the expressions (34) into (1), we get a Lagrangian of the form $L'(q'^A, \dot q'^A) - \lambda_\alpha q'^\alpha$, which implies equations of the structure (35), (36). That is, we have a closed system (36) for determining $q'^A(t)$, while $\lambda_\alpha(t)$ can then be found algebraically from (35). For later use, observe that $M'_{AB}$ (37) and its inverse are positive-definite matrices together with $M_{AB}$. Hamiltonian formulation of the theory (1) on the phase space $(q^A, p_B)$. Without loss of generality, we assume that the equations of the surface $G_\alpha(q^A) = 0$ can be resolved with respect to the first $n - k$ coordinates. In accordance with this, the set $q^A$ is divided into two subgroups, $q^\alpha$ and $q^i$: $q^A = (q^\alpha, q^i)$ (38). Greek indices from the beginning of the alphabet run from 1 to $n - k$, while Latin indices from the middle of the alphabet run from 1 to $k$. Our variational problem is (1). Applying Dirac's method, we introduce the conjugate momenta $p_A = \partial L/\partial\dot q^A$ and $p_{\lambda\alpha} = \partial L/\partial\dot\lambda_\alpha$ for all configuration-space variables $q^A$ and $\lambda_\alpha$. The conjugate momenta for $\lambda_\alpha$ are the primary constraints: $p_{\lambda\alpha} = 0$. Since the Lagrangian $L$ was assumed non-singular, the expressions for $p_A$ can be resolved with respect to the velocities (39). To find the Hamiltonian, we exclude the velocities from the expression $H = p_A\dot q^A - L - \lambda_\alpha G_\alpha + \phi_\alpha p_{\lambda\alpha}$ (40), where by $\phi_\alpha$ we denote the Lagrange multipliers for the primary constraints. Preservation in time of the primary constraints, $\dot p_{\lambda\alpha} = \{p_{\lambda\alpha}, H\} = 0$, implies $G_\alpha = 0$ as the secondary constraints. In turn, the equation $dG_\alpha/dt = \{G_\alpha, H\} = \{G_\alpha, H_0\} = 0$ implies the tertiary constraints $\Phi_\alpha = 0$ (41), which should be satisfied by all true solutions. The Lagrangian counterpart of these constraints is $\dot q^A\partial_A G_\alpha = 0$, meaning that for true trajectories the velocity vector is tangent to the surface $S^k$. Note that $\tilde M^{AB}$ is the inverse of the Hessian matrix $M_{AB}$. This implies that the constraints $\Phi_\alpha$ are functionally independent and can be resolved with respect to some $n - k$ momenta of the set $p_A$. This also implies that the constraints $G_\beta$ and $\Phi_\alpha$ are functionally independent. Calculating their Poisson brackets, we get the matrix $b_{\alpha\beta} \equiv \{G_\alpha, \Phi_\beta\}$ (42). For our Lagrangian with positive-definite $M_{AB}$, this matrix is non-degenerate, see Appendix.
For later use, we introduce the matrix $\triangle_{IJ}$ composed of the brackets of the constraints $T_I = (G_\alpha, \Phi_\beta)$ (43), where the first block corresponds to $\{G_\alpha, G_\beta\} = 0$, and $c_{\alpha\beta} = \{\Phi_\alpha, \Phi_\beta\}$. As $b$ is invertible, the matrix $\triangle_{IJ}$ is invertible, so our constraints $G_\beta$ and $\Phi_\alpha$ are of second class. Preservation in time of the tertiary constraints gives fourth-stage constraints that involve $\lambda_\alpha$ and can be used to find them through $q^A$ and $p_B$: $\lambda_\alpha = \lambda_\alpha(q^A, p_B)$ (45). At last, preservation in time of the fourth-stage constraints gives an equation that algebraically determines the Lagrange multipliers $\phi_\alpha$ through the other variables (46). In the absence of new constraints, the Dirac procedure is over. In summary, we have revealed the chain of constraints $p_{\lambda\alpha} = 0 \to G_\alpha = 0 \to \Phi_\alpha = 0 \to (45)$ and determined the auxiliary variables $\phi_\alpha$. Note that the phase-space variable $p_{\lambda\alpha}$ is just a constant, while $\lambda_\alpha$ is expressed through $q^A$ and $p_B$. So we only need to write the dynamical equations for $q^A$ and $p_B$. The variables $\lambda_\alpha$ can be excluded from the Hamiltonian (40) using the constraint (45). Besides, we can omit the term $\phi_\alpha p_{\lambda\alpha}$, since it does not contribute to the Hamiltonian equations for the phase-space variables $q^A$, $p_B$. With the resulting Hamiltonian $H$ (47), the equations read as in (48); writing the last equalities, we have taken into account that $G_\alpha = 0$ for true solutions. Dirac noticed that these equations can be rewritten in terms of the canonical Hamiltonian without auxiliary variables if, instead of the canonical Poisson bracket, we introduce the famous Dirac bracket. Given two phase-space functions $A(q, p)$ and $B(q, p)$, their Dirac bracket is $\{A, B\}_D = \{A, B\} - \{A, T_I\}\tilde\triangle^{IJ}\{T_J, B\}$ (50), where $\tilde\triangle$ is the inverse of $\triangle$. This has all the properties of the canonical Poisson bracket, including antisymmetry and the Jacobi identity [28]. Besides, its remarkable property is that $T_I = (G_\alpha, \Phi_\beta)$ represent its Casimir functions, that is, the Dirac bracket of any phase-space function with any constraint $T_I$ vanishes: $\{A, T_I\}_D = 0$. The equations constructed with the help of $H_0$ and the Dirac bracket (51) differ from (48) by terms proportional to the constraints, and therefore are equivalent. The final equations (51) do not involve the auxiliary variables and are written on the phase space $(q^A, p_B)$. The Dirac bracket determines the Poisson structure of this space. Hamiltonian reduction to the intermediate submanifold. Using the Dirac formalism, we obtained $2n + 2(n - k)$ equations of our theory written on the $2n$-dimensional phase space with coordinates $(q^A, p_B)$: the dynamical equations (51) and the constraints $G_\alpha(q^A) = 0$ and $\Phi_\alpha(q^A, p_B) = 0$. All solutions to our equations live on the $2k$-dimensional submanifold specified by these algebraic constraints. They could be used to exclude $2(n - k)$ variables from the formalism. However, as we saw above, it may be desirable to work with our theory keeping all $q^A$. Therefore we exclude only a part of the momenta, making a reduction of our theory to the intermediate submanifold of the equations $\Phi_\alpha(q^A, p_B) = 0$. Let $p_\alpha = f_\alpha(q^A, p_i)$ (52) be a solution to the constraints $\Phi_\alpha(q^A, p_B) = 0$. The reduction can be done while keeping the Hamiltonian character of the resulting equations; that is, we establish a Poisson structure and a Hamiltonian for our equations on the intermediate submanifold with the coordinates $(q^A, p_i)$. Because the constraints are composed of Casimir functions, the reduction consists in eliminating the variables $p_\alpha$ from the formalism as follows.
1. It is known [28] that, together with $\Phi_\alpha = 0$, the functions $p_\alpha - f_\alpha(q^A, p_i)$ also represent Casimir functions of the Dirac bracket, so for any phase-space function $A(q^A, p_B)$ we get $\{A, p_\alpha - f_\alpha\}_D = 0$. As a consequence, computation of the Dirac bracket and the substitution (52) are commuting operations (53), (54). 2. Using (50) and (52), we define the following brackets on the submanifold $(q^A, p_i)$: $\{A(q^A, p_i), B(q^A, p_i)\}' = \{A(q^A, p_i), B(q^A, p_i)\}_D\big|_{p_\alpha = f_\alpha(q^A, p_i)}$ (55). Because of the property (53), the brackets $\{\ ,\ \}'$ obey the Jacobi identity (for the direct proof, see Sect. 4.2 in [28]), and hence determine a Poisson structure on the submanifold $(q^A, p_i)$. 3. Let us replace $p_\alpha$ by $f_\alpha(q^A, p_i)$ in the Hamiltonian (49), denoting the resulting expression by $H'_0(q^A, p_j)$ (56). Because of the property (53), $H'_0$ can be used in Eqs. (51) instead of $H$; this will give equivalent Hamiltonian equations. Replacing $p_\alpha$ according to (52) on the r.h.s. of these equations, we get equivalent equations with the bracket (55), namely (57). Together with the algebraic equations $G_\alpha = 0$ and $p_\alpha = f_\alpha(q^A, p_i)$, they are equivalent to the original system composed of (51), $G_\alpha = 0$ and $\Phi_\alpha = 0$. This completes the procedure of reduction to the intermediate submanifold $\Phi_\alpha = 0$. IV. INTERMEDIATE FORMALISM. Here we present a more economical way to construct the Hamiltonian formulation of the theory (1) on the intermediate submanifold, which does not require constructing the Dirac bracket and then reducing it to the submanifold. To this aim we rewrite the obtained Hamiltonian theory (47), (48) in non-canonical phase-space coordinates with special properties. The matrix $G_{\alpha B}(q^A)$ of Eq. (41) is composed of $(n - k)$ linearly independent vector fields $\vec G_\alpha(q^A)$ orthogonal to the surface $S^k$ of the configuration space $q^A$. Let us consider the linear system $G_{\alpha B}x^B = 0$. It has a general solution of the form $x^B = c^i G_i{}^B$, where the linearly independent vectors $\vec G_i$ are fundamental solutions to this system; they have the structure (58). By construction, these vector fields form a basis of the tangent space to the surface $S^k$. Together with $\vec G_\alpha$, they form a basis of the tangent space to the entire configuration space. Using the rows $\vec G_\beta$ and $\vec G_j$, we construct an invertible matrix $G_{BA}$, and use it to define the new momenta $\pi_B$ of the phase space $(q^A, p_B)$ as follows: $\pi_B = G_{BA}\,p_A$ (59). Let us take $q^A$ and $\pi_B$ as the new phase-space coordinates. Their special property is that both $q^A$ and $\pi_i$ have vanishing brackets with the original constraints, $\{q^A, G_\alpha\} = 0$ and $\{\pi_i, G_\alpha\} = 0$ (60); the latter equality is due to Eq. (58). Let us rewrite our theory in the new variables. Using the canonical brackets $\{q^A, p_B\} = \delta^A_B$, we get the Poisson brackets of the new variables (61), in which the Lie brackets of the basic vector fields $\vec G_A$ appear (62). Therefore the Lie bracket of the vector fields $\vec G_A$ determines the Poisson structure of our theory in the sector $\pi_A$. The structure functions $c_{ij}{}^k$ vanish for our choice of basic vectors $\vec G_i$ of special form, see Eq. (58).
In particular, the Poisson brackets of the coordinates $q^A$ and $\pi_i$ are given by (64). The Hamiltonian (40), rewritten in the new variables, reads (65). At last, our second-class constraints in the new coordinates take the form (66). Let us confirm that the tertiary constraints $\Phi_\alpha$ can be resolved with respect to $\pi_\alpha$. To this aim we compute the matrix $\partial\Phi_\alpha/\partial\pi_\beta$ and show that its determinant is not zero (67). It is not zero for our class of positive-definite Lagrangians (8), see Appendix. Resolving the constraints $\Phi_\alpha = 0$, say $\pi_\alpha = f_\alpha(q^A, \pi_i)$ (68), we use the resulting expressions to exclude $\pi_\alpha$ from (64) and (65), thus obtaining the brackets (69) and the Hamiltonian (70). In general, the brackets (69) are non-linear in both $q^A$ and $\pi_i$. Their dependence on the choice of the tangent vector fields $\vec G_i$ to the surface $S^k$ is encoded in three places: in the brackets $\{q^\alpha, \pi_i\}'$, in the matrix $G$, and in the structure functions $c_{ij}{}^\alpha$, see Eq. (62). Using these brackets and this Hamiltonian, let us write the system of equations (71), (72). Affirmation. The brackets (69) obey the Jacobi identity and hence determine a Poisson structure on the intermediate submanifold $\Phi_\alpha = 0$ equipped with the coordinates $(q^A, \pi_i)$. Besides, equations (71) and (72) represent an equivalent formulation of the original theory (51), (47). Proof. To establish the equivalence, we consider our theory in the variables $(q^A, \pi_B)$, write the Dirac bracket in these variables and then reduce it to the intermediate submanifold. Using the constraints (66), we construct the Dirac bracket on the phase space $(q^A, \pi_B)$ as follows: $\{A, B\}_D = \{A, B\} - \{A, T_I\}\tilde\triangle^{IJ}\{T_J, B\}$ (73), where $T_I$ is the set of all constraints, $T_I = (G_\alpha(q), \Phi_\beta(q, \pi))$. Besides, denoting symbolically the blocks $b = \{G, \Phi\}$ and $c = \{\Phi, \Phi\}$, the matrices $\triangle$ and $\triangle^{-1}$ are given by (74). The constraint functions (66) are Casimir functions of the Dirac bracket (73). Similarly to the previous section, as the Hamiltonian equations of our theory we can take (75), with $H$ written in Eq. (65). Eq. (74) implies the structure (76) of the Dirac bracket, that is, the last two terms on the r.h.s. involve at least one constraint $G_\alpha$. Taking into account Eqs. (60), we conclude that in the passage from the Poisson bracket (61) to the Dirac bracket (73), the brackets (64) of the basic variables $q^A$ and $\pi_i$ will not be modified, retaining their original form. Excluding $\pi_\alpha$ from their right-hand sides with the help of (68), we arrive at the brackets (69). Since $\pi_\alpha - f_\alpha(q^A, \pi_i)$ are Casimir functions of the Dirac bracket (73), the brackets (69) obey the Jacobi identity, see Sect. 4.2 in [28] for the direct proof. To reduce the equations (75) to the intermediate submanifold $\Phi_\alpha = 0$, we proceed in the same way as in the previous section. First, working with Eqs. (75), we can omit the terms with constraints in the Hamiltonian (65), and then use (68) in the resulting expression. This gives the Hamiltonian (70), which therefore can be used instead of $H$ in equations (75) for $q^A$ and $\pi_i$. Second, excluding $\pi_\alpha$ from the r.h.s. of these equations with the help of (68), they acquire the form (71). This completes the proof of the affirmation. Another set of non-canonical variables. Instead of (59), we can equally consider the non-canonical set $q^A$, $\pi_B$ defined by taking the third-stage constraints $\Phi_\alpha$ as a part of the new momenta (77). Using the adapted coordinates (33), we conclude that the change (77) is invertible with respect to $p_A$ (78). Here we used that in the adapted coordinates $G'_{\alpha A} = (\delta_{\alpha\beta}, 0)$ and $\tilde G'_{D\beta} = (\delta_{\alpha\beta}, 0)^T$. As $M'^{AB}$ is a positive-definite matrix (see (37)), we have $\det M'^{\alpha\beta} > 0$. Together with (78), this implies $\det(\partial\pi_A/\partial p_B) \neq 0$.
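The construction of the tangent basis $\vec G_i$ as fundamental solutions of $G_{\alpha B}x^B = 0$, used throughout this section, can be sketched numerically with a null-space computation; the example constraint (a sphere in R³) is an illustrative choice.

```python
# Tangent basis vectors G_i as a null-space basis of the constraint
# gradients G_{alpha A}. Example: one constraint G = q.q - c^2 in R^3.
import numpy as np
from scipy.linalg import null_space

q = np.array([1.0, 2.0, 2.0])         # a point on the sphere |q| = 3
G_alphaA = 2 * q.reshape(1, 3)        # gradient of G = q.q - c^2 (one row)

G_i = null_space(G_alphaA)            # columns span the tangent space T_q S
print(G_i.shape)                      # (3, 2): two tangent vectors
print(G_alphaA @ G_i)                 # ~0: they solve G_{alpha A} x^A = 0
```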
Representing $p_A$ through $q^A$ and $\pi_B$, we can rewrite the theory in terms of the new variables. Our second-class constraints in the new coordinates are (79). Using the canonical brackets $\{q^A, p_B\} = \delta^A_B$, we get the Poisson brackets (80) for the variables $q^A$ and $\pi$, in which the Lie brackets of the basic vector fields $\vec G_i$ appear (81). As above, the special property of the new variables is that $q^A$ and $\pi_i$ have vanishing brackets with the original constraints (82); the latter equality is due to Eq. (58). For this reason, when we pass to the Dirac bracket, the brackets (80) will not be modified, while the brackets of $\pi_\alpha = \Phi_\alpha$ with any phase-space function vanish. The final Hamiltonian (83) is obtained from (40) by disregarding the last two terms and substituting $p_A(q^A, \pi_i, \pi_\alpha = 0)$ into the remaining terms. The final brackets are (80), where we substitute $\pi_\alpha = 0$ on the r.h.s. of the last equation. The Hamiltonian equations are obtained with use of the final brackets as follows: $\dot q^A = \{q^A, H'_0(q^B, \pi_j)\}'$, $\dot\pi_i = \{\pi_i, H'_0(q^B, \pi_j)\}'$. V. INTEGRATION OF FIRST-ORDER EQUATIONS WITH USE OF HAMILTONIAN VECTOR FIELD. To apply Liouville's theorem of Sect. II in practice, we need to find the integrals of motion, then solve the algebraic equations (12), then calculate the integrals given in Eqs. (26) (for the rigid body they are typically elliptic integrals), and finally solve the algebraic equations (26). In this section we present another possibility: to integrate first-order equations in terms of power series with respect to $t$. Consider the differential operator acting on the space of functions $f(x)$ and defined by the formal series $e^{h\partial_x} = \sum_{n=0}^{\infty}\frac{h^n}{n!}\,\partial_x^n$, where $h = \mathrm{const}$ and $\partial_x = \frac{\partial}{\partial x}$. This obeys the properties $e^{h\partial_x}x = x + h$ and $e^{h\partial_x}f(x) = f(e^{h\partial_x}x)$, as can be verified by expanding both sides of these equalities in power series. There is a generalization of the last equality to the case of a function $h(x)$. For later use we introduce the parameter $t$. Then, in particular, $e^{th(x)\partial_x}h(x) = h(e^{th(x)\partial_x}x)$ (85). To prove this, let us consider the following Cauchy problem for a partial differential equation: $\partial_t\varphi(t, x) = h(x)\,\partial_x\varphi(t, x)$, $\varphi(0, x) = f(x)$, where $h(x)$ and $f(x)$ are given functions. It is known (see Sect. 60 in [26]) that this problem has a unique solution $\varphi(t, x)$. The function $e^{th(x)\partial_x}f(x)$ obeys this problem. Denoting $e^{th(x)\partial_x}x \equiv y(x)$, we verify that the function $f(e^{th(x)\partial_x}x)$ also obeys this problem. Since the solution is unique, the two functions must coincide, which proves the equality (85). As a consequence, the series $z(t, x) = e^{th(x)\partial_x}x$ turns out to be a general solution to the equation $\dot z = h(z)$, with $x$ being the integration constant. Indeed, we have $\dot z = \partial_t e^{th(x)\partial_x}x = e^{th(x)\partial_x}h(x) = h(e^{th(x)\partial_x}x) = h(z)$, where the penultimate equality is due to (85). This observation is immediately generalized to the case of several variables: the functions $z^i(t) = e^{th^k(x)\partial_{x^k}}x^i$ (91) provide a general solution to the system $\dot z^i = h^i(z^j)$ (92). Any Hamiltonian system $\dot x^i = \{x^i, H\}$, $\dot p_j = \{p_j, H\}$ has this form, so its general solution is given by (91) with $h$ taken to be the Hamiltonian vector field $(\{x^i, H\}, \{p_j, H\})$ (93). There is a generalization of these formulas to the case of a time-dependent Hamiltonian, see [3]. VI. APPLICATION OF INTERMEDIATE FORMALISM TO A TOY MODEL. Here we illustrate the intermediate formalism on the example of a particle on a sphere, obtaining a non-standard Hamiltonian description of this model on a five-dimensional symplectic manifold.
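Before the sphere example, a minimal sympy sketch of the Lie-series solution (91) of Sect. V, applied to the harmonic oscillator H = (p² + x²)/2 as an illustrative system: the truncated series e^{t h^k ∂_k} x reproduces the exact solution x(t) = x₀ cos t + p₀ sin t order by order.

```python
# Truncated Lie series e^{t h^k d_k} f = sum_n t^n/n! (h^k d_k)^n f
# for the Hamiltonian vector field h = (p, -x) of H = (p^2 + x^2)/2.
import sympy as sp

t, x, p = sp.symbols('t x p')
h = {x: p, p: -x}                      # xdot = p, pdot = -x

def lie_series(f, order=8):
    """Apply the truncated exponential of the vector field h^k d_k to f."""
    term, total = f, f * 0
    for n in range(order + 1):
        total += t**n / sp.factorial(n) * term
        term = sum(h[v] * sp.diff(term, v) for v in (x, p))  # (h^k d_k) term
    return sp.expand(total)

print(lie_series(x))   # x*(1 - t^2/2 + t^4/24 - ...) + p*(t - t^3/6 + ...)
print(sp.series(x * sp.cos(t) + p * sp.sin(t), t, 0, 9))     # matches
```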
Consider a point particle with coordinates $x^i(t)$ in three-dimensional Euclidean space, forced to move freely on the sphere $\mathbf{x}^2 = c^2$. It can be described by the Lagrangian action (94). In the phase space with canonically conjugate coordinates $(\mathbf{x}, \mathbf{p})$, this action implies two second-class constraints, $\mathbf{x}^2 - c^2 = 0$ and $(\mathbf{x}, \mathbf{p}) = 0$. The first is the analogue of $G_\alpha = 0$ of the general formalism, while the second is the analogue of $\Phi_\alpha = 0$ and determines a five-dimensional intermediate submanifold in the phase space. Then the analogue of $G_{\alpha A}$ is the vector $\frac{1}{2}\,\mathrm{grad}(\mathbf{x}^2 - c^2) = (x_1, x_2, x_3)$. Assuming that we work in the local coordinate chart with $x_3 \neq 0$, the fundamental solutions to the equation $(\mathbf{x}, \mathbf{z}) = 0$ are $(1, 0, -\frac{x_1}{x_3})$ and $(0, 1, -\frac{x_2}{x_3})$. The change of variables (59) then reads (95), and we get (96). Hence in the new coordinates $\mathbf{x}$ and $\pi$, the intermediate submanifold is just the hyperplane $\pi_3 = 0$. The inverse transformation to (96) is (97). The next step is to rewrite the canonical Poisson brackets $\{x_i, p_j\} = \delta_{ij}$ and the Hamiltonian $H = \frac{1}{2m}\mathbf{p}^2$ in terms of the new coordinates, and then substitute $\pi_3 = 0$ in all resulting expressions. Using the expressions (95) and the canonical brackets, we obtain the non-vanishing brackets (98) for the coordinates $(x_1, x_2, x_3, \pi_1, \pi_2)$ of the intermediate submanifold. They do not involve $\pi_3$, so they already give the Poisson structure of the intermediate manifold. Using Eqs. (97) in the canonical Hamiltonian, and then setting $\pi_3 = 0$, we obtain the Hamiltonian (99) reduced to the intermediate submanifold. Eqs. (98) and (99) represent a Hamiltonian system on a five-dimensional symplectic manifold foliated by the leaves $\mathbf{x}^2 = \mathrm{const}$. The quantity $\mathbf{x}^2$ is a Casimir function of the Poisson structure (98), so any trajectory that passes through a point of the symplectic leaf $\mathbf{x}^2 = c^2$ with given $c$ lies entirely in this leaf. VII. APPLICATION OF INTERMEDIATE FORMALISM TO A SPINNING BODY. Here we apply the intermediate formalism to a spinning body. We show that the Euler–Poisson equations turn out to be a Hamiltonian system on the intermediate submanifold, and deduce the Poisson geometry (112) that lies behind these equations. Motions of a spinning body can be described [8,9] starting from a Lagrangian action (100) of the form (1), in which $R^TR - 1$ plays the role of $G_\alpha$ of the general formalism. The action is written in the Laboratory system with the origin chosen at the center of mass of the body. $R_{ij}(t)$ is a $3 \times 3$ matrix; its nine elements are the dynamical degrees of freedom which, in the end, describe rotational motions of the body. The numerical symmetric matrix $g_{ij}$ encodes the distribution of mass of the body at the initial instant, $g_{ij} = \sum_N m_N x_N^i(0)\,x_N^j(0)$ (101), where $m_N$ are the masses of the body's particles with position vectors $\mathbf{x}_N(t)$. The mass matrix and the inertia tensor are related as follows: $I_{ij} = g_{kk}\delta_{ij} - g_{ij}$ (102). Choosing the Laboratory axes at $t = 0$ in the directions of the axes of inertia, the two tensors acquire a diagonal form, $g_{ij} = g_i\delta_{ij}$, $I_{ij} = I_i\delta_{ij}$. For a non-planar body, the $g_i$ are positive numbers [9], so the Hessian matrix of the theory (100) is evidently positive-definite. Therefore we can apply the intermediate formalism developed in Sects. III and IV. Introducing the conjugate momenta for all dynamical variables, $p_{ij} = \partial L/\partial\dot R_{ij}$ and $p_{\lambda ij} = \partial L/\partial\dot\lambda_{ij}$, we obtain the expression for $p_{ij}$ in terms of the velocities, and the primary constraints $p_{\lambda ij} = 0$.
To construct the final Hamiltonian of the intermediate formalism, we will need only the canonical part $H_0 = p_{ij}\dot R_{ij}(p) - L(\dot R_{ij}(p))$ of the complete Hamiltonian (40). For the present case, its explicit form is (103). The non-vanishing Poisson brackets of the canonical variables are the canonical ones (there is no summation over $i$ and $j$). Next, the explicit form of the tertiary constraints (41) in our case is (104). The surface determined by the equations $R^TR = 1$ and (104) is equally determined by the $6 + 6$ equations (105), (106). We take these $\Phi_{ij}$ as the analogues of the constraints (41) of the general formalism. According to the intermediate formalism, we now need to find non-canonical momenta with two properties. First, $9 - 6 = 3$ of them should have vanishing Poisson brackets with the orthogonality constraint, see Eq. (60). Second, the constraints (106) can be used to express the other momenta through these three, see Eq. (68). To achieve this, consider the phase-space functions $P_{ij}$ (107). They are constructed from $p_{ij}$ with the use of an invertible matrix, so the transition $(R_{ij}, p_{ij}) \to (R_{ij}, P_{ij})$ is a change of variables on the phase space. We emphasize that $R_{ij}$ in the action (100) is an arbitrary (not orthogonal!) matrix. We decompose $P_{ij}$ into symmetric and antisymmetric parts, $P_{ij} = S_{ij} - \hat M_{ij}$, where $S = R^{-1}p + (R^{-1}p)^T$ and $\hat M = R^{-1}p - (R^{-1}p)^T$, and then replace the antisymmetric matrix $\hat M$ by an equivalent vector $M_k$. So the final form of the decomposition is (108). In accordance with this, we consider the change of variables (109): $(R_{ij}, p_{ij}) \to (R_{ij}, M_k, S_{ij})$. The coordinates $M_k$ have the desired properties: their brackets with the orthogonality constraint vanish, $\{M_k, R_{pi}R_{pj} - \delta_{ij}\} = 0$, and the variables $S_{ij}$ can be expressed through $M_k$ by resolving (106) as in (110) (there is no summation over $i$ and $j$ in this expression). Therefore, the change of variables (109) is the analogue of the change (59) of the general formalism. To obtain the last equality, we used the following relations among the elements of the diagonal mass matrix and the inertia tensor [9]: $I_1 = g_2 + g_3$, $I_2 = g_1 + g_3$, $I_3 = g_1 + g_2$ (111). Computing the canonical Poisson brackets of the new variables $R_{ij}$, $M_k$ and $S_{ij}$, we get the brackets (112). According to Sect. IV, to reduce our theory to the submanifold $\Phi_{ij} = 0$, it is sufficient to rewrite it in the variables $R_{ij}$, $M_k$, $S_{ij}$ and then, using Eq. (110), to exclude the variables $S_{ij}$ from all resulting expressions. The brackets (112) do not involve $S_{ij}$, so they already give a Poisson structure of the intermediate submanifold $\Phi_{ij} = 0$. Using Eqs. (107) and (108) in the canonical Hamiltonian (103), the latter can be written as in (113). The second term is proportional to the orthogonality constraint; therefore it does not contribute to the Hamiltonian equations for the variables $R_{ij}$ and $M_k$, and hence it can be omitted. Using the relations (110), for any chosen $i \neq j$, the remaining term can be written as $H_0 = \frac{1}{2}(\mathbf{M}, I^{-1}\mathbf{M})$. Note that this final expression, being composed of tensors and vectors, is invariant under rotations. Hence the Hamiltonian will be of this form in any Laboratory system. If the Laboratory frame was not adapted to the axes of inertia at the initial instant, the inertia tensor in this expression will be a numerical symmetric matrix with non-vanishing off-diagonal elements. Using this $H_0$ with the brackets (112), the Hamiltonian equations $\dot z = \{z, H_0\}$ read as follows: $\dot R_{ij} = -\epsilon_{jkm}(I^{-1}\mathbf{M})_k R_{im}$, $\dot{\mathbf{M}} = [\mathbf{M}, I^{-1}\mathbf{M}]$. Introducing the phase-space quantity $\Omega_i = I^{-1}_{ij}M_j$, they acquire the standard form of the Euler–Poisson equations: $\dot R_{ij} = -\epsilon_{jkm}\Omega_k R_{im}$, $I\dot{\mathbf{\Omega}} = [I\mathbf{\Omega}, \mathbf{\Omega}]$ (117). By this, we have completed the Hamiltonian reduction to the intermediate submanifold (106), showing that the Euler–Poisson equations are a Hamiltonian system on this submanifold, with the Poisson structure given by the brackets (112). The Chetaev bracket is the Dirac bracket. Using the orthogonality constraint on the r.h.s. of the brackets (112), we obtain simpler expressions. By direct computations, it can be verified that they still satisfy the Jacobi identity and lead to the same equations (117). They were suggested by Chetaev [5] as a possible Poisson structure corresponding to the Euler–Poisson equations.
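A minimal numerical sketch of the Euler–Poisson equations (117) for an asymmetric top, with illustrative inertia moments and initial data; the conserved energy, the Casimir |M|², and the orthogonality of R(t) serve as consistency checks.

```python
# Integrate Rdot_ij = -eps_jkm W_k R_im,  I Wdot = (I W) x W  (toy data).
import numpy as np
from scipy.integrate import solve_ivp

I = np.diag([1.0, 2.0, 3.0])           # inertia tensor, principal axes
I_inv = np.linalg.inv(I)

eps = np.zeros((3, 3, 3))              # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def rhs(t, y):
    R = y[:9].reshape(3, 3)
    w = y[9:]                                     # angular velocity Omega
    Rdot = -np.einsum('jkm,k,im->ij', eps, w, R)  # Rdot_ij = -eps_jkm W_k R_im
    wdot = I_inv @ np.cross(I @ w, w)             # I Wdot = (I W) x W
    return np.concatenate([Rdot.ravel(), wdot])

y0 = np.concatenate([np.eye(3).ravel(), [0.1, 1.0, 0.1]])  # R(0) = 1, Omega_0
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12)

R_end = sol.y[:9, -1].reshape(3, 3)
M = I @ sol.y[9:, -1]
print(0.5 * M @ I_inv @ M)             # energy H_0, conserved
print(M @ M)                           # Casimir |M|^2, conserved
print(np.allclose(R_end.T @ R_end, np.eye(3), atol=1e-6))  # orthogonality kept
```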
General solution to the Euler–Poisson equations and the motions of a rigid body. Not all solutions to the equations (117) describe the motions of a spinning body. By construction [8,9], they should be solved with the universal initial conditions $R_{ij}(0) = \delta_{ij}$, $\Omega_i(0) = \Omega_{0i}$, where $\Omega_{0i}$ is the initial angular velocity measured in the body-fixed frame. That is, only those trajectories that at some instant of time pass through the unit element of the SO(3) group can describe possible motions of the body. According to Sect. V, the general solution can then be written through the exponential of the Hamiltonian vector field evaluated at the initial data $\Omega_0$; after applying the differential operator in the exponential, $R_{0kp}$ should be replaced by $\delta_{kp}$ in each term of the obtained power series. The resulting function $R_{ij}(t, \Omega_{0k})$ will represent the motion of a spinning body that at $t = 0$ has its inertia axes parallel to the Laboratory axes and initial angular velocity equal to $\Omega_{0i}$. VIII. CONCLUSION. The most economical Hamiltonian formulation of the theory (1), in which we are interested in knowing the dynamics of all variables $q^A$, is achieved on the intermediate submanifold of the phase space determined by the constraints (41) (or, equivalently, by (68)). We have described and discussed two ways of Hamiltonian reduction to this submanifold. The final result of the reduction using the Dirac bracket is written out in equations (55)-(57). The intermediate formalism gives the equations (69)-(72). As we have shown in the last section, it is the intermediate formalism that directly leads to the Euler–Poisson equations of a spinning body. To further compare the two reductions, let us denote the coordinates $(q^A, p_B)$ of the original phase space by $z^i$, and the coordinates $(q^A, p_j)$ of the intermediate submanifold by $\tilde z^a$. Let the Poisson tensor of the original space be $\omega^{ij}$, the Dirac tensor $\omega^{ij}_D$, and the Poisson tensor induced on the intermediate submanifold $\tilde\omega^{ab}$. Generally, in the process of reduction with use of the Dirac bracket, $\omega^{ij} \to \omega^{ij}_D \to \tilde\omega^{ab}$, we have $\omega^{ab} \neq \omega^{ab}_D = \tilde\omega^{ab}$. The alternative possibility developed in this work can be summarized as follows: in the theory (1) with a positive-definite Lagrangian $L$, there are phase-space coordinates $z'^i = (q^A, \pi_B)$ such that in the process of reduction $\omega'^{ij} \to \omega'^{ij}_D \to \tilde\omega'^{ab}$ we have $\omega'^{ab} = \omega'^{ab}_D = \tilde\omega'^{ab}$. In view of this, the reduction consists in the exclusion of the redundant momenta (see Eq. (68)) from the block $\omega'^{ab}$ of the original tensor $\omega'^{ij}$.
10,725.4
2023-09-10T00:00:00.000
[ "Mathematics" ]
DFT study of Se- and Te-doped SrTiO3 for enhanced visible-light driven photocatalytic hydrogen production Pure SrTiO3 has been experimentally demonstrated to catalyze H2 production by water splitting, but the reaction can only be driven by ultraviolet (UV) radiation due to the large band gap of SrTiO3. This motivated us to search for an efficient strategy to tune its band gap so that it can function in the visible region of the solar spectrum. In this study, the electronic, optical and photocatalytic properties of Se-doped and Te-doped SrTiO3 have been investigated using density functional theory (DFT) within the generalized gradient approximation (GGA). Our results reveal that doping can lead to band gap narrowing without introducing any isolated mid-gap states. This greatly improves the visible-light activity of SrTiO3 and depresses the recombination of photogenerated electron–hole pairs. Furthermore, the locations of the calculated band edges relative to the water reduction and oxidation levels for the doped systems meet the water-splitting requirements. Consequently, our results show that the performance of SrTiO3 for hydrogen generation by photocatalytic water splitting is significantly enhanced by Se and Te doping. In particular, Te doping can greatly enhance the visible-light photocatalytic activity of SrTiO3. We expect this study to provide a theoretical basis for prospective experimental work. Introduction With the continuously increasing demand for energy as a consequence of rapid demographic, economic and social developments, the energy supply must be provided permanently and sufficiently. However, nowadays the main energy supply comes from fossil fuels, which are not renewable and have disastrous impacts on the environment and climate change. Consequently, hydrogen energy is considered a clean and sustainable energy carrier of the future. The most economical method to produce hydrogen is the photocatalytic splitting of water using sunlight. Hence, this has inspired large research efforts into designing and improving a wide range of photocatalytic systems over the past few decades. Most of these systems are metal oxide semiconductors. For example, titanium dioxide (TiO2) has been widely used for water splitting (Zhu et al. 2016) and for the treatment of contaminants in air and water (Dulian et al. 2019). Besides, ABO3-type perovskite oxides have gained huge interest due to their low cost, high stability and non-toxicity. The main shortcoming that restricts their application at a large scale is their large band gap (greater than 3 eV), which makes them active only in the UV region of the solar spectrum. Fortunately, this limitation can be overcome by employing adequate dopant elements, which can regulate the band gap to an appropriate level that enables the absorption of visible light. In the present work, we are concerned with narrowing the band gap of SrTiO3 (3.15 eV), which has been investigated widely in the field of hydrogen production from water splitting. In fact, several studies have shown that adjusting SrTiO3 through doping is inexpensive, effective and easy to handle, and constitutes a successful way to redshift its absorption spectrum, which may increase its visible-light activity and enhance its photocatalytic efficiency. For example, n-type Nb doping of SrTiO3 at the Ti site shows strong absorption in the visible-light region and higher photocatalytic activity (Shujuan et al. 2016). Zhou et al.
(2018) synthesized Er3+-doped SrTiO3 nanoparticles and found that the doped SrTiO3 exhibits higher photocatalytic activity for hydrogen production under simulated solar irradiation than undoped SrTiO3. In order to develop the SrTiO3 photocatalyst for the photodegradation of Rhodamine B solution, Wang et al. (2020) investigated the photocatalytic properties of SrTiO3 powders doped with Eu3+ ions. The study revealed a band gap narrowing owing to the hybridization of Eu d-states with the Ti 3d or O 2p orbitals, which results in a remarkable improvement in the light absorption capability of the doped system in the visible-light region compared to the pure one. Thanh et al. (2014) studied the influence of Mn doping on the structural, optical and magnetic properties of SrTi1−xMnxO3 (x = 0.0-0.1) synthesized by a solid-state reaction method. They found that increasing the Mn concentration in the host material leads to a significant band gap narrowing, which shifts the absorption spectrum of SrTiO3 into the visible wavelength range and increases its photocatalytic performance. In general, the doping strategy enhances the photocatalytic properties of semiconductors through narrowing the band gap by lowering the CB, raising the VB, and/or inducing mid-gap states. However, it is well known that the mid-gap states associated with some types of monodoping minimize the photoconversion efficiency by trapping the photogenerated charge carriers. For example, in an experimental and theoretical study, Bae et al. (2008) showed that doping noble metals (Ru, Rh, Ir, Pt, Pd) into perfect SrTiO3 introduces new energy levels between the VBM and CBM. This improves the photon-harvesting ability, but does not guarantee an improvement in photocatalytic H2 generation, because these mid-gap levels act as trapping centres, which promote the unfavourable effect of recombination. Also, Chen et al. (2012) studied the effect of Ru doping on the photocatalytic activity of a SrTiO3-based photocatalyst using DFT calculations. They found that the mid-gap states introduced by the Ru dopant promote faster charge-carrier recombination. To avoid the formation of mid-gap states introduced by some dopants, researchers tend to choose codoping with two different types of elements. For example, it has been reported that codoping Ta or Sb together with Cr into pure SrTiO3 exhibits a higher efficiency in H2 evolution: by codoping Ta5+ or Sb5+ at Sr sites and Cr3+ at Ti sites, charge compensation can be established (Ishii et al. 2004; Kato and Kudo 2002). Besides, Miyauchi et al. (2004) and Wang et al. (2005) successfully prepared (La, N)-codoped SrTiO3 and found a reduced likelihood of structural defects and an improvement in the photocatalytic activity towards H2 evolution under visible light. This is because the codoping of La3+ ions at Sr2+ sites and N3− at O2− sites maintained charge compensation and minimized the generation of O vacancies. Furthermore, a theoretical investigation by Wei et al. (2010) demonstrated that codoping with either a nonmetal (H, F, Cl, Br, I) or a metal (V, Nb, Ta, Sc, Y, La) is a successful way to passivate the N-induced in-gap states. Their study aimed at exploring codoping synergistic effects for higher energy-conversion efficiency. Generally, the formation of mid-gap states is associated with the charge imbalance introduced by aliovalent dopants. According to our previous work (Bentour et al.
2020), the band gap of SrTiO3 was narrowed in the case of the substitution of S2− for O2−. Moreover, the band gap of S-doped SrTiO3 is free from mid-gap energy levels thanks to the identical valence states of S and O, which can suppress the undesirable electron-hole recombination in the crystal. Since Se and Te belong to the same group as S and O in the periodic table and respectively possess anionic (Se2−) and (Te2−) characters, doping these anions onto the O anion site is expected to give good results. Consequently, the present study is concerned with the investigation of the effect of anionic doping with Se and Te on the enhancement of the visible-light absorption as well as the photocatalytic performance of SrTiO3. To the best of our knowledge, no experimental or theoretical work has been reported for these kinds of doping. We used DFT calculations to evaluate the stability and the electronic, optical and photocatalytic properties of SrTiO3, Se-doped SrTiO3 and Te-doped SrTiO3. We discussed the stability of both doping cases based on the computed formation energies, and explored the band structures, densities of states and electronic density distributions of the pure and doped systems. Besides, we analysed the optical and photocatalytic performance of all systems based on their absorption spectra, the variations of the imaginary part of the dielectric function and the band alignment. Our findings show that the band gap narrowing, the shift of the absorption spectrum into the visible-light region and the photocatalytic performance of SrTiO3 are further improved with Te doping compared to our previous study on (S, Mn)-codoped SrTiO3 (Bentour et al. 2020). Our study serves to provide a theoretical basis for a possible experimental study.

Computational methods

We calculated the stability and the electronic and optical properties of pure, Se-doped and Te-doped SrTiO3 using the full-potential linearized augmented plane wave (FP-LAPW) method implemented in the Wien2k code (Blaha et al. 2001), within the framework of density functional theory (DFT), treating exchange and correlation at the level of the GGA-Perdew-Burke-Ernzerhof (GGA-PBE) approximation (Perdew et al. 1992). The basis functions, electron densities and the potential are calculated self-consistently. These quantities are developed by dividing the unit cell into non-overlapping atomic spheres, where a linear combination of radial functions times spherical harmonics is used with a cut-off parameter lmax = 10, and an interstitial region, where a plane-wave expansion is used with a cut-off parameter RMT*Kmax = 7 (RMT is the smallest muffin-tin radius in the unit cell and Kmax is the magnitude of the largest K wave vector). The muffin-tin sphere radii used in the calculations are 1.63, 2.00, 1.70, 1.85 and 1.63 a.u. for Se, Te, Sr, Ti and O, respectively. The results are obtained with an energy convergence criterion of 10⁻⁴ Ry. We used a [7 × 7 × 7] grid with 51 special points for sampling the Brillouin zone, corresponding to 800 k-points in the irreducible Brillouin zone. In the calculations, 2 × 2 × 2 supercells of cubic SrTiO3 are constructed; one O atom is replaced by a Se or Te atom in the optimized geometry of the supercells. The optimized pure, Se-doped and Te-doped SrTiO3 supercells are shown in Fig. 1.
The absorption properties of our systems can be explored through the complex dielectric function ε(ω) = ε1(ω) + iε2(ω). The dielectric function was calculated in the momentum representation, which requires matrix elements of the momentum between occupied and unoccupied eigenstates. The real part ε1(ω) is obtained from the Kramers-Kronig transformation (Zaari et al. 2014),

ε1(ω) = 1 + (2/π) P ∫₀^∞ ω′ ε2(ω′) / (ω′² − ω²) dω′,

and the imaginary part ε2(ω) is obtained by summation over the empty states (Okoye 2003),

ε2(ω) ∝ (e²/m²ω²) Σ_{i,j} ∫ |M_ij(k)|² f_i(k) (1 − f_j(k)) δ(E_j(k) − E_i(k) − ħω) d³k,

where P stands for the principal value of the integral, e is the elementary charge, m is the electron mass, M is the dipole matrix element, i and j are the initial and final states, respectively, f_i is the Fermi-Dirac distribution function for the i-th state and E_i is the energy of an electron in the i-th state with wave number k. Indeed, ε1(ω) and ε2(ω) allow one to obtain many optical properties, such as the absorption coefficient α(ω) (Bhattacharya 2015):

α(ω) = (√2 ω/c) [ (ε1(ω)² + ε2(ω)²)^(1/2) − ε1(ω) ]^(1/2).

3 Results and discussion

Defect formation energy

The defect formation energy provides insight into the experimental growth feasibility of a material: the lower the formation energy, the more feasible and the more stable the doping structure (Zhou et al. 2011). To calculate the defect formation energy of the doped materials, the following formula is used:

E_f = E_doped − E_undoped − µ_M + µ_O,

where E_doped and E_undoped are the total energies of the supercell with and without the dopant, respectively, and µ_M (M = Se, Te) and µ_O are the chemical potentials of the dopants (Se and Te) and of O, respectively. The values of µ_Se and µ_Te are calculated from bulk trigonal selenium (Cherin and Unger 1967) and from the rhombohedral form of tellurium (Bradley 1924), respectively, while µ_O is calculated from the energy of an O2 molecule centred in a cubic box of 15 × 15 × 15 Å³ as µ_O = ½µ_O2(gas). The formation energy depends on the growth conditions; therefore, we have examined both the Ti-rich and the O-rich conditions. In a state of thermodynamic equilibrium between SrTiO3 and the reservoir of Sr, Ti and O, the condition µ_Sr + µ_Ti + 3µ_O = µ_SrTiO3 must be satisfied (Eq. (2)). In a Ti-rich environment, the Ti chemical potential is taken to be the energy of bulk Ti, while the O chemical potential is deduced from Eq. (2). In an O-rich environment, the O chemical potential is calculated as mentioned above, while the Ti one is obtained from Eq. (2). µ_Sr is calculated from the energy of the Sr atom in the bulk crystal. The results are shown in Table 1, which shows that the formation energies are positive for all types of doping and in both conditions, with some preference for the Ti-rich condition; this means that making these compounds in experiment requires energy from the surroundings. Besides, Se-doped SrTiO3 is easier to realize than Te-doped SrTiO3, which can be explained by the fact that the ionic radius of Se2− is smaller than that of Te2−.

Electronic properties

In order to examine the effect of selenium and tellurium doping on the electronic properties of SrTiO3, we calculated the band structures and partial densities of states (PDOS) of pure, Se-doped and Te-doped SrTiO3. The calculated band structures are shown in Fig. 2. We see that the overall band aspect of pure SrTiO3 (Fig.
2a) is in good agreement with previously calculated band structures (Zhang et al. 2013). Besides, its band gap is direct at the Γ point, with a value of about 1.80 eV, which is in line with previous computational results (Wei et al. 2009). The observed underestimation of the band gap value compared to the experimental value of 3.15 eV (Thanh et al. 2014) is due to the well-known limitation of standard DFT-GGA calculations (Godby et al. 1986). After doping with selenium (Fig. 2b) and with tellurium (Fig. 2c), we remark that the band gap is still direct but is narrowed by about 0.52 eV and 1.04 eV, respectively. As a result, the energy required to promote an electron from the valence band to the conduction band decreases significantly from the undoped system, to the Se-doped one, to the Te-doped one; this can red-shift the optical absorption edge of SrTiO3 and progressively improves its visible-light activity in this order. Besides, no in-gap states appear, which can be explained by the fact that the number of valence electrons in the crystal remains the same after substituting an O atom with a Se or Te one, owing to the similarity of the valence shells of Se (4s2 4p4), Te (5s2 5p4) and O (2s2 2p4); this helps to reduce the recombination of electrons and holes. Therefore, the photocatalytic efficiency of these doped systems is improved. The calculated PDOS of pure, Se-doped and Te-doped SrTiO3 are presented in Fig. 3. It is shown that the valence band maximum (VBM) of pure SrTiO3 is dominated by O 2p states, while its conduction band minimum (CBM) consists of Ti 3d states. Besides, the VBM of Se-doped SrTiO3 consists of O 2p and Se 4p states, while its CBM is contributed primarily by Ti 3d states. For Te-doped SrTiO3, the VBM consists of O 2p, Te 5p and some Ti 3d states, whereas its CBM mainly consists of Ti 3d states. In order to reveal the origin of the band gap narrowing observed in the doped compounds, we return to the band structures above. We see that there is a splitting of the levels at the VBM and CBM for all compounds, which can be explained by the internal crystal field and the mutual electrostatic interactions: in the CBM between Ti 3d states for all materials, and in the VBM between O 2p states for the pure material, between O 2p and Se 4p states for the Se-doped material, and between O 2p, Ti 3d and Te 5p states for the Te-doped material. However, the CBM and VBM of both Se-doped and Te-doped SrTiO3 are more dispersive than those of pure SrTiO3, which indicates that the covalency of the Ti-Se and Ti-Te bonds in the doped systems is stronger than that of the Ti-O bond in pure SrTiO3. This agrees with the obtained distribution of electronic density shown in Fig. 4, where we see that the Ti-O bond is covalent in pure SrTiO3 (Fig. 4a) and the covalency strength increases with Se and Te doping (Fig. 4b, c). Besides, we see that the Sr-O bond is ionic in pure SrTiO3 (Fig. 4a′) and its ionicity decreases with Se and Te doping (Fig. 4b′, c′). These variations in the covalency and ionicity of the bonds can be explained by the lower electronegativity of Se and Te compared to that of O, and may be the origin of the observed band gap narrowing in selenium- and tellurium-doped SrTiO3.
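The gap narrowings quoted above translate into absorption-edge shifts via λ = hc/Eg; the following small check in C uses the scissor-corrected gaps (3.15 eV pure, narrowed by 0.52 eV for Se and 1.04 eV for Te) and the standard conversion constant 1239.84 eV·nm. The Te value reproduces the edge quoted in the next section well; the Se value differs somewhat, so treat this as a rough estimate rather than the paper's procedure.

#include <stdio.h>

/* Convert scissor-corrected band gaps to absorption-edge wavelengths,
 * lambda = 1239.84 eV*nm / Eg. Gap values follow the text. */
int main(void)
{
    const double hc = 1239.84;                       /* eV*nm */
    double eg[3] = {3.15, 3.15 - 0.52, 3.15 - 1.04};
    const char *name[3] = {"pure", "Se-doped", "Te-doped"};
    for (int i = 0; i < 3; ++i)
        printf("%-8s Eg = %.2f eV -> edge ~ %.0f nm\n",
               name[i], eg[i], hc / eg[i]);
    return 0;   /* Te-doped: ~588 nm, close to the 587.68 nm quoted below */
}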
Optical properties

In order to investigate the effect of doping on the optical properties of SrTiO3, the calculated imaginary part ε2(ω) of the dielectric function and the absorption coefficient of pure, Se-doped and Te-doped SrTiO3 are shown in Fig. 5a, b, respectively. The optical properties are obtained in the random-phase approximation (RPA), with a Drude-like shape for the intra-band contribution (Ambrosch-Draxl and Sofo 2006). The underestimation of the band gap energy Eg affects the form of the optical absorption spectra. To remedy this shortcoming of the GGA approximation, we applied an energy shift of 1.35 eV (scissor operator = Eg(experimental) − Eg(theoretical)) when calculating the optical properties. As shown in Fig. 5a, the curve ε2(ω) exhibits a threshold energy arising at 3.15 eV for pure SrTiO3. This energy corresponds to the band edge and may originate from electron transitions between the occupied O 2p states in the VBM and the unoccupied Ti 3d states in the CBM. The threshold energy shifts significantly to lower energy when we substitute O by Se, and shifts further when we substitute O by Te, owing to the decreased band gap values. This shift indicates an extension of the SrTiO3 response into the visible-light region, an interesting region to explore for photocatalytic applications. These results are similar to experimental results on the effect of Se and Te doping in TiO2 (Rockafellow et al. 2010; Mathew et al. 2020). In Fig. 5b, the spectrum of pure SrTiO3 clearly shows its limited response, confined to the ultraviolet region, due to its large band gap. After doping, the absorption edge shifts to higher wavelengths of 447.65 nm and 587.68 nm for the Se-doping and Te-doping cases, respectively, signifying an enhancement in the visible absorption ability of these systems, which might improve the photocatalytic performance of SrTiO3. The obtained shifts of the absorption edges to higher wavelengths correspond to the narrowing of the band gaps after doping. We remark that the response of SrTiO3 to visible light increases more with Te doping than with Se doping.

Photocatalytic properties

In the photocatalytic water-splitting process, a heterogeneous semiconductor material is used as the photocatalyst. Under illumination, the photons absorbed by the semiconductor (suspended in water) can excite electrons from the valence band (VB) into the conduction band (CB), leaving excited holes in the VB. The photogenerated electron-hole pairs then migrate to the surface of the photocatalyst and initiate redox reactions with the adsorbed water molecules at the active sites. This leads to the simultaneous production of dihydrogen H2 and dioxygen O2, according to the reduction and oxidation half-reactions of water (Eq. (3)). For the photogenerated electron-hole pairs to be used for the water-splitting reaction, the CBM potential should be more negative than the reduction potential of H+/H2 (0 eV vs. the normal hydrogen electrode, NHE), and the VBM potential should be more positive than the oxidation potential of O2/H2O (1.23 eV vs. NHE), as illustrated in Fig. 6a. Therefore, a photocatalyst for water splitting must be a semiconductor with a band gap energy larger than 1.23 eV whose CBM and VBM sandwich the water redox levels. To verify that the above conditions are met, we calculated the CBM and VBM potentials of all systems using the formulas of (Wang et al.
2017):

E_CBM = χ + E0 − ½Eg ,  E_VBM = E_CBM + Eg ,  (5)

where E0 = −4.5 eV is the energy level of the normal hydrogen electrode (NHE) below the zero vacuum energy level, Eg is the band gap energy, and χ is the absolute electronegativity of the system, calculated as the geometric mean of the absolute electronegativities of the constituent atoms:

χ(SrTiO3) = (χ_Sr χ_Ti χ_O³)^(1/5) ,  χ(SrTiO3−xMx) = (χ_Sr χ_Ti χ_O^(3−x) χ_M^x)^(1/5) ,

where χ(SrTiO3) is the absolute electronegativity of pure SrTiO3, χ(SrTiO3−xMx) that of the M-doped SrTiO3 system (M = Se, Te and x = 0.125 for one M atom substituting one O atom), and χ_Sr, χ_Ti, χ_O and χ_M are the absolute electronegativities of the Sr, Ti, O and M elements, respectively. According to the study of Bartolotti (1987): χ_Sr = 1.75, χ_Ti = 3.05, χ_O = 8.92, χ_Se = 5.91 and χ_Te = 5.35. The calculated CBM and VBM positions vs. NHE at pH = 0 for all systems are summarized in Table 2, and the corresponding alignments are plotted in Fig. 6b. It is demonstrated that the CBM and VBM of pure SrTiO3 straddle the water redox potentials, which means that water can be split into H2 and O2 by pure SrTiO3. Besides, the calculated CBM potential of pure SrTiO3 is 0.88 eV, which is in line with the experimental result of Xu and Schoonen (2000). Figure 6b also shows that the CBM is significantly shifted downward by Se doping, and further shifted in the same direction by Te doping. This means that the photo-reduction ability is improved. Besides, the VBM is shifted upward by Se doping and further shifted upward by Te doping, leading to an improvement in the photo-oxidation capacity. Moreover, the CBM positions of the doped systems are above the hydrogen reduction level and their VBM positions are below the water oxidation level. Consequently, water can be decomposed into H2 and O2 by the doped systems. Besides, the absence of isolated states in the forbidden band and the good absorption ability in the visible region make these systems promising materials for H2 production by photocatalytic water splitting. This positive effect of selenium and tellurium on the optical and photocatalytic properties of SrTiO3 is similar to the effect revealed experimentally in Se-doped TiO2 (Xie et al. 2018; Rockafellow et al. 2010), Te-doped TiO2 (Mathew et al. 2020) and S-doped SrTiO3 (Le et al. 2016). The comparison between the two doped systems indicates that Te-doped SrTiO3 has the most suitable band gap energy, absorption ability and band edge levels for producing hydrogen from photocatalytic water splitting.

Conclusion

In summary, compared to undoped SrTiO3, Se and Te doping can significantly reduce the band gap width. The increase in the covalency of the Ti-(Se, Te) bonds and the decrease in the ionicity of the Sr-O bond upon doping Se and Te atoms at the O site may be the origin of the observed band gap narrowing. Both Se doping and Te doping enhance the absorption of SrTiO3 in the visible-light region, but Te doping enhances the optical absorption in the visible-light region more effectively and strengthens the red shift, which is likely due to the higher covalency of the Ti-Te bond in Te-doped SrTiO3 compared to that of the Ti-Se bond in Se-doped SrTiO3. The narrowed band gaps are free from any isolated states, which is expected to help inhibit the fast recombination of photogenerated carriers and consequently improve the photocatalytic performance. The potentials of the CBMs and VBMs satisfy the thermodynamic requirements to trigger the water-splitting reaction.
Hence, we suggest that doping Se and Te (particularly Te) into the SrTiO3 material is one of the better choices for improving the yield of water photosplitting under visible light.
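As a quick numerical check of the band alignment discussed above, the Mulliken-electronegativity estimate can be evaluated directly. The following C sketch assumes the standard scheme reconstructed in the text (E_CBM = χ + E0 − Eg/2 with E0 = −4.5 eV) and the Bartolotti (1987) values; its output has the same magnitude as the 0.88 eV CBM potential quoted above (the sign depends on the chosen NHE convention).

#include <stdio.h>
#include <math.h>

/* Band-edge estimate for pure SrTiO3: chi is the geometric mean of the
 * atomic absolute electronegativities, E_CBM = chi + E0 - Eg/2 vs. NHE. */
int main(void)
{
    double chi_Sr = 1.75, chi_Ti = 3.05, chi_O = 8.92;   /* Bartolotti 1987 */
    double E0 = -4.5, Eg = 3.15;                          /* eV             */
    double chi = pow(chi_Sr * chi_Ti * pow(chi_O, 3.0), 1.0 / 5.0);
    double e_cbm = chi + E0 - 0.5 * Eg;
    double e_vbm = e_cbm + Eg;
    printf("chi = %.2f eV, E_CBM = %.2f eV, E_VBM = %.2f eV vs. NHE\n",
           chi, e_cbm, e_vbm);   /* ~5.20, ~-0.88, ~2.27 */
    return 0;
}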
5,731.6
2021-04-19T00:00:00.000
[ "Chemistry", "Materials Science", "Environmental Science", "Engineering" ]
Spin texture of time-reversal symmetry invariant surface states on W(110)

Using spin-resolved momentum microscopy, we find previously overlooked anomalous surface states on W(110) having their spin locked at right angles to their momentum. In addition to the well-known Dirac-like surface state with Rashba spin texture near the Γ point, we observe a tilted Dirac cone with circularly shaped cross section and a Dirac crossing at 0.28 × ΓN within the projected bulk band gap of tungsten. This state has eye-catching similarities to the spin-locked surface state of a topological insulator. The experiments are fortified by a one-step photoemission calculation in its density-matrix formulation.

In the past decade topological insulators have attracted large scientific interest because of their unusual electronic properties [1-3]. Topologically protected Dirac-type surface states appearing in the bulk band gap give rise to metallic behavior at their surfaces 2,3. The rigid spin-locking of these surface states perpendicular to the crystal momentum bears high potential for the development of novel spintronic devices and the improvement of existing electronic devices, being suitable for spin injection and manipulation without applying external magnetic fields 3. It was a surprise when, recently, strongly spin-polarized surface states with linear dispersion resembling a Dirac cone were found on metallic surfaces [4-6]. This was unexpected as, e.g., W(110) has no similarities to known topological insulators except the strong spin-orbit interaction due to a large atomic number. Instead of the fundamental band gap of an insulator, tungsten exhibits a spin-orbit-induced local band gap 7. The energy range of the observed surface state is populated by d electrons, while the fundamental band gap in known topological insulators (e.g., Bi2Se3) is caused by p electrons. Miyamoto et al. 4,5 found a "massless" (i.e., Dirac-like) surface state with linear dispersion and Rashba-type spin signature over a large energy range of 220 meV, which is an anomalous behavior in metals. Further experimental [8-12] as well as theoretical 4,5,7,9,13 work was performed in order to clarify the origin of these anomalous surface resonances on W(110). Additionally, the "direct neighbors" of W(110) in the periodic table, Mo(110) and Ta(110), were investigated. The analogous surface resonance was confirmed in our previous work for Mo(110) 14, whereas Ta(110) does not show a "Dirac-like" surface state 15. For Mo(110) we found a second state with anomalous dispersion behavior midway between the Γ and N points 14. Also, recent work by K. Miyamoto et al. 16 experimentally confirmed the prediction of ref. 9, i.e., the change of spin polarization when comparing p- and s-polarized light excitation. This effect is explained by an orbital-symmetry-selective excitation of states. This new work also comes to the conclusion that p-polarized light reflects at least the sign of the ground-state spin polarization. Here, we give for the first time evidence for a time-reversal symmetry invariant surface state with high spin polarization inside a spin-orbit band gap of W(110). The newly found anomalous surface state appears at 0.28 × ΓN inside the projected bulk band gap. This state has striking similarities to the surface state of a topological insulator. Our experimental results also confirm the spin texture of a strongly elliptically warped surface state near the Γ point.
This band is a surface resonance hybridizing with bulk bands near the crossing point. A third anomalous band feature occurring near 0.57 × ΓN turns out to result from a pair of spin-locked surface states, which do not have the typical cone-like appearance. These results are facilitated by time-of-flight (ToF) spin-resolved momentum microscopy, allowing for the parallel detection of spin-resolved three-dimensional (kx, ky, EB) maps. Thus, this method allows finding spin textures away from the usually studied high-symmetry points that are easily overlooked when measuring with standard techniques.

Results

Spin-integral measurements. First of all, we discuss results obtained from the spin-integrated measurements of the clean W(110) surface. Figure 1(a-d) shows measured constant-energy sections between the Fermi energy (a) and binding energy EB = 1.25 eV (d). Dashed rectangles mark areas which were measured with spin resolution, as shown correspondingly in (e-h) as well as in the 3D E-vs-k data array presented in Fig. 1(k). Spin-resolved figures at EB = 0.8 eV (i) and at EB = 1.1 eV (j) represent details from the 3D array in order to particularly probe two new linear band crossings at kx = 0.4 and 0.8 Å−1. In (a-d) we show only 4 out of a total of 100 energy slices acquired within 20 min. The data stack was treated in a way to eliminate the linear dichroism, i.e., the sections show the sum I(kx, ky, Ekin) + I(kx, −ky, Ekin), making use of the mirror symmetry. Binding energies (in eV) are given at the bottom of the frames. We can clearly identify several bands. As already shown in previous work for Mo(110) in ref. 14, some of these correspond to surface resonances and some are bulk band features. We will see below that the agreement between experiment and theory is very good, with small deviations caused by the relative intensities of the bands and the exact positions of hybridization gaps and band maxima. The total numbers of observed and calculated bands and their principal behavior are identical. With increasing binding energy (a-d) we observe a contraction of the intense patterns S3 and S6. The oval bands around Γ and S expand with increasing binding energy. The presence of the oval bands around S is an indication of the cleanness of the W(110) surface according to the work by Rotenberg et al. 17. S7 moves towards Γ and unifies with S4, developing into a six-fold star (c, d). This star runs along the outer borderline of the band gap, as visible in (c). S5 is an oval band close to the H point that contracts, becomes a dot at EB = 0.7-0.8 eV and forms the top and the bottom of the star, as visible in (c, d) around the dashed rectangles. The anomalous band S1 only shows up in the vicinity of 1.25 eV. It is visible as a narrow ellipse at EB = 1.0 eV (c), but at the crossover energy just as a single intense line running along the Γ-N direction.

Spin-resolved measurements. Spin-resolved distributions were achieved by recording data sets at two working points selected by the scattering energy EScatt, where the highest (lowest) spin sensitivity occurs at EScatt = 26.5 eV (EScatt = 30.5 eV). The potential difference between sample and spin detector was chosen such as to place the highest spin sensitivity at EB = 1 eV 18. For the data evaluation we assumed a constant value of the Sherman function S(E) = 0.2 but corrected for the energy-dependent reflectivity R(E). This assumption underestimates the evaluated spin polarization with increasing distance from 1 eV.
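The two-working-point evaluation described above can be summarized by a minimal sketch in C. The normalization scheme below is an illustrative reconstruction of the recipe of ref. 18 (the exact procedure is not reproduced in this text); only the Sherman function S = 0.2 and the reflectivity correction R(E) are taken from the description above.

/* Minimal sketch of the two-working-point spin-polarization evaluation:
 * intensities recorded at E_scatt = 26.5 eV (spin-sensitive) and 30.5 eV
 * (nearly spin-insensitive reference) are corrected for the
 * energy-dependent reflectivity R(E); the resulting reflection asymmetry
 * is divided by the assumed constant Sherman function S = 0.2. */
double spin_polarization(double i_hi,  /* counts at 26.5 eV        */
                         double i_ref, /* counts at 30.5 eV        */
                         double r_hi,  /* reflectivity at 26.5 eV  */
                         double r_ref) /* reflectivity at 30.5 eV  */
{
    const double S = 0.2;                   /* assumed Sherman function */
    double n_hi  = i_hi  / r_hi;            /* reflectivity-corrected   */
    double n_ref = i_ref / r_ref;
    double asym  = (n_hi - n_ref) / n_ref;  /* reflection asymmetry     */
    return asym / S;                        /* P = A / S                */
}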
The resulting 3D data array of the E-vs-k spectral function with spin information is shown in Fig. 1(k). The corresponding two-dimensional color code, shown in Fig. 1 (bottom left), represents the value of the spin polarization P and simultaneously the intensity. Red and blue colors correspond to higher spin-up or spin-down intensities, respectively, while unpolarized intensities are shown as grey. In all cases, white means no intensity (as P then has no physical meaning). In comparison to the spin-integrated measurements, where we examined the full surface Brillouin zone (SBZ) of W(110), the spin measurements were performed using a higher magnification of the microscope in order to look in more detail at the inner part of the SBZ. Taking into account the given experimental geometry, p-polarized radiation in the horizontal plane preserves the mirror plane along ΓN, with sensitivity to the x-component Px of the spin polarization. Artificial experimental asymmetries in the raw data were removed using the fact that the ΓN line represents a mirror plane, imposing the antisymmetry condition Px(kx, −ky) = −Px(kx, ky). As we can see from all sections (Fig. 1(e-h)), the upper (lower) half-plane with respect to the ΓN line shows predominantly blue (red) color. Thus, e.g., at EF and at a binding energy EB = 0.4 eV (Fig. 1(e,f)), the spin orientation rotates clockwise for the outer (diamond-shaped) and inner (oval ring) surface bands when viewed from above the sample surface. With further increase of binding energy the oval band expands. This band is a surface state centered at Γ. At EB = 1 eV a new surface state arises close to Γ that is characterized by a high polarization opposite to that of the oval-shaped surface band. This surface state forms the elliptically shaped Dirac cone discussed by Miyamoto et al. 4,5, with the Dirac point at EB = 1.25 eV. The binding-energy dependence is visualized by extracting the band dispersions and spin textures at kx = const. parallel to the ΓH line, as shown in Fig. 2. These sections were extracted from the 3D data array (Fig. 1(k)) along kx = const. planes indicated by the vertical lines A, B, C and D in Fig. 2(q). All crossing points have features of anomalous surface resonances of W(110). The linear band crossings (in Fig. 2(a-h)) correspond to the already well-known Dirac-like surface state at the Γ point 4,5,12. It shows a crossing point at EB = 1.25 eV with a change of the spin asymmetry and a Rashba-type spin texture. Two additional crossings (Fig. 2(i-p)) are new features which were overlooked earlier, with a downward-dispersing extension due to a hybridization gap, but different topology. They show a similar reversal of the spin asymmetry from ky < 0 to ky > 0 close to the crossing point, as seen in Fig. 2(j,l). For a more detailed representation of the ground-state electronic structure, in Fig. 3 we present Bloch spectral functions of W(110) calculated parallel to the ΓN direction, which represent the usual dispersion relation 19,20. These calculations are based on a screened SPR-KKR formalism, where the electronic structure results from a fully relativistic self-consistent calculation for a semi-infinite stack of atomic layers. A detailed comparison of the ground-state Bloch spectral functions shown in Fig. 3 and the corresponding photoemission calculations in Fig. 2 (3rd and 4th columns) reveals that there are several changes in the spin polarization which come from the photoemission process, such as matrix-element and final-state effects.
This concerns in particular surface states that disperse in the band gaps and are located at higher ky values (~0.6-0.8 Å−1). On the other hand, the spin polarization of the spin-locked surface states close to the Dirac cones discussed in this article shows the same polarization pattern as the ground state. Following our previous very detailed theoretical analysis 7, the spin polarization of the Dirac-like state follows the ground-state spin texture for a very wide range of photon energies with p-polarization (from UV up to soft X-ray). The topologies of the linear band crossings are sketched in Fig. 4. This figure reveals the spin texture near the three band crossings appearing in the I(kx = const., ky, EB) maps. Near Γ the observed spin texture confirms the previously described elliptical Dirac cone with pseudo-topological spin orientation 4,5. A similar surface state was also analyzed in detail for the case of Mo(110) 14. The linear band dispersion observed near kx = 0.80 Å−1 (Fig. 2, bottom row) is caused by two intersecting concavely shaped surface bands with clockwise and counter-clockwise spin locking, dispersing in the direction of Γ. In the theoretical calculation this particular spin structure appears more complicated: the calculation (Fig. 2(p)) indicates that each of these surface bands is spin-orbit split, resulting in a double cross that is not observed in our experiment.

Discussion

In summary, we have shown for the first time that a time-reversal symmetry invariant helical Dirac state exists in a projected bulk band gap of W(110). This result was achieved using the novel technique of time-of-flight momentum microscopy with a W(001) imaging spin filter. We observed a topological surface state with circularly shaped constant-energy cross section and a Dirac point at 0.28 × ΓN within the projected bulk band gap of W(110). This state has eye-catching similarities to the spin-locked surface states of topological insulators and to the well-known Dirac-like surface states with Rashba spin texture near Γ. The band crossing at 0.57 × ΓN has a more complicated structure and does not reveal Dirac-like behavior. 3D (kx, ky, EB) maps in the full surface Brillouin zone with 3.4 Å−1 diameter and 4 eV binding-energy range were measured simultaneously, resolving 2.5 × 10^5 voxels in the spin-integral branch and more than 10^4 voxels in the spin-resolved branch. The detailed spin texture and topology of three new band crossings with linear dispersion over a large energy range were discussed in comparison to one-step-model photoemission calculations. Near Γ the occurrence of the elliptically shaped Dirac-type pseudo-topological surface state is confirmed 4,5. In the ToF branch, photoelectrons with different kinetic energies are dispersed in time 22; this gives us the (kx, ky, EB) voxels of a data array which in k-space exceeds the first SBZ. Figure 5 shows a schematic view of the experimental set-up. The sample is mounted on a He-cooled sample stage, consisting of a commercial helium flow cryostat and a high-precision hexapod manipulator providing a minimum temperature of 29 K (measured by a silicon diode attached to the sample holder) and six degrees of freedom for sample alignment. The imaging electron optics is the same as described previously 21. Behind the scattering crystal are two drift sections with delay-line detectors (DLD) for spin-integral (horizontal branch, DLD 1) and spin-filtered (vertical branch, DLD 2) imaging. More details of this instrument are given in ref.
22, and the experimental geometry as well as the scheme of the 3D data acquisition is described in our previous work by Chernov et al. 14. Spin-resolved images are obtained by inserting the W(001) spin-filter crystal at 45° into the electron-optical path of the microscope between the column and the spin-resolved ToF branch 23. Spin contrast appears due to the spin-dependent reflectivity of low-energy electrons at the scattering target, caused by spin-orbit interaction at the non-magnetic surface. For the evaluation of the spin polarization we followed the recipe described in ref. 18. It requires the acquisition of two data sets at two different scattering energies (efficient working points for a clean W(001) spin filter): at EScatt = 26.5 eV, with a reflection asymmetry of A = 0.3, and at 30.5 eV, where the asymmetry is negligibly small 24.

Methods

The photoemission time-of-flight experiment was performed exploiting the time structure of the synchrotron radiation at BESSY II (Helmholtz-Zentrum Berlin, Germany) at the beamline U125/2-10m Normal Incidence Monochromator (NIM) 25 during single-bunch operation (pulse duration 50 ps, repetition rate 1.25 MHz). The monochromator provides photons in the energy range 4-35 eV with an energy resolution down to 1 meV. Given the work function of W(110), we end up with a kinetic energy range of 18 to 23 eV, with the lowest 10 eV being cut off by a transfer lens 26. The angle of incidence was 68° with respect to the surface normal, the plane of incidence was parallel to the ΓM direction, and the photon beam was p-polarized with the electric field vector E in the plane of incidence. The overall energy and k|| resolution for the present experiment were 86 meV and 0.03 Å−1 (the best resolution of the instrument is 20 meV and 0.01 Å−1). For the spin-resolved measurements in the upper branch we exploited specular reflection from a W(001) spin-filter crystal at 45°. Prior to the measurements both crystals (sample W(110) and spin filter W(001)) were treated by a standard procedure as described in ref. 24. All measurements were conducted with the sample cooled by liquid helium to 29 K.

Theoretical approach. The calculations were done with a fully relativistic one-step model in its spin-density-matrix formulation. This approach allows one to properly describe the complete spin-polarization vector, in particular for Rashba systems, using the one-step model of photoemission as implemented in the SPR-KKR package 19,20. More details of the computational method applied to the calculation of Dirac-like surface resonances on W(110) at different polarizations and photon energies are described in ref. 7.

Figure 5. Experimental set-up. Schematic view of the spin-filtered ToF momentum microscope, consisting of the He-cooled sample stage, imaging electron optics, and two drift sections with delay-line detectors for spin-integral (horizontal branch, DLD 1) and spin-filtered (vertical branch, DLD 2) imaging. The W(001) spin-filter crystal is located in field-free space and can be inserted and retracted.
3,973.2
2016-07-12T00:00:00.000
[ "Physics" ]
Block cipher four implementation on field programmable gate array

Block ciphers are used to protect data in information systems from being leaked to unauthorized people. One of many block cipher algorithms developed by Indonesian researchers is BCF (Block Cipher-Four) - a block cipher with 128-bit input/output that can accept 128-bit, 192-bit, or 256-bit keys. The BCF algorithm can be used in embedded systems that require a fast BCF implementation. In this study, the design and implementation of the BCF engine were carried out on the FPGA DE2. It is the first research on a BCF implementation in an FPGA. The operations of the BCF machine were controlled by Nios II as the host processor. Our experiments showed that the BCF engine could compute 2,847 times faster than a BCF implementation using only Nios II/e. Our contribution presents the description of the new block cipher BCF and its first implementation on an FPGA using an efficient method.

Introduction

A block cipher is one of the cryptographic components used to protect information. Information can reside in internet networks, financial systems, the military, and the IoT (internet of things). The IoT is a network of interconnected objects in various forms, such as wireless sensor networks and electrical, electronic, and mechanical devices, and their interaction with computer data via the internet [5]. In the IoT era, embedded devices are connected to the internet, and the advent of the IoT has put telecommunications and embedded systems at risk [6]. BCF is an encryption algorithm based on AES [13], Camellia [14], TwoFish [15], and Khazad [12]. It has 128 bits of input/output and 128-, 192-, and 256-bit keys. BCF is an encryption algorithm designed by Indonesian researchers [1]. This algorithm has an advantage over AES: the key schedule of BCF is more secure than that of AES because the master key is very difficult to recover even when all the sub-keys of BCF have been found. In addition, the SBox of BCF changes depending on the key, while the SBox of AES does not change. In these respects, BCF is considered safer than AES. There are two types of BCF keys: the master key and the sub-keys. The master key is processed by the key schedule function to produce the sub-keys. Each sub-key is used to encrypt or decrypt partial data in every round. Encryption converts plaintext into ciphertext, and decryption converts ciphertext back into plaintext. Cryptanalysis is used to crack the key of a block cipher by unintended means or to test the security of a cryptographic algorithm that has been created. Correlation power analysis, for instance, tries to find all of the sub-keys using the correlation between the Hamming weights and the power consumed by the embedded device while computing the encryption algorithm [7]. Hardware implementation is very important in terms of performance and security, especially as a countermeasure against timing attacks [8] in particular and side-channel attacks in general. This paper aims to introduce the BCF algorithm implemented in an FPGA with an efficient method. It proposes a hardware architecture for the BCF algorithm as a co-processor (encryption engine accelerator). This architecture was written in Verilog and tested on the Altera Cyclone IV EP4CE115F29 [9] using NIOS as the host processor. We compared the results with AES, Camellia, and TDEA data taken from SASEBO [10]. Moreover, we compared the BCF hardware accelerator with a software implementation, enabling us to measure how fast the BCF encryption engine accelerator computes compared to software.
BCF Algorithm

BCF uses the Feistel structure [11], in contrast to AES, which uses an SPN structure. An SPN structure requires fewer rounds than a Feistel structure to achieve the same diffusion rate. The advantage of the Feistel structure over SPN is that the same structure serves for both the encryption and decryption processes, so it requires less memory in the implementation; SPN requires two different algorithms for encryption and decryption. The BCF algorithm has two main components: the key scheduling part and the randomization part. Key scheduling is performed to generate the sub-keys, and randomization is performed to encrypt or decrypt data using the sub-keys generated by key scheduling. The number of rounds in the randomization stage depends on the key length: 128-bit keys require 15 rounds, 192-bit keys 16 rounds, and 256-bit keys 18 rounds. In each round, the FO function is applied. This function uses sub-keys to manipulate the input data of each round. The main features of the BCF algorithm are: 1. The input and output data (plaintext and ciphertext) are 128 bits each. 2. The length of the master key has 3 variants: 128, 192 and 256 bits. 3. Key scheduling is done in 8 rounds using the FO function. 4. The number of rounds in the randomization stage (for encryption or decryption) depends on the key length. The key schedule stage is carried out at the beginning to generate the sub-keys for the randomization stage, but in this paper we begin by explaining the randomization stage.

BCF Encryption

BCF uses the Feistel structure, as in the Twofish algorithm, so it can use the same algorithm for encryption and decryption; a minimal illustrative sketch of this property is given at the end of this section. BCF has 128-bit input/output. The pseudocode of the BCF encryption algorithm is presented as follows. In the BCF algorithm, there is an FO function that takes as input two data words x and two words of the sub-key k and produces two output words y, where 1 word is 32 bits. This function is the heart of BCF encryption/decryption. The SBox has 1-byte input/output. Because each x consists of 8 bytes, there are 8 SBox operations for each input x = {x0, x1, x2, x3, x4, x5, x6, x7}. There are 4 SBoxes used in BCF. The substitution boxes (in hexadecimal) are shown in Tables 1-4. Table 5 shows the algorithm used to select the BCF SBox before encryption/decryption. P is the product of the matrix M and the input x, aimed at obtaining optimal diffusion.

BCF Key Expansion

The BCF key schedule (key expansion) algorithm takes a key input of 128 bits (16 bytes or 4 words) and performs a key expansion to generate the sub-keys. The key expansion produces a total of 19 sub-keys: 15 sub-keys for the regular rounds (K0, K1, ..., K14) and 4 whitening keys (KW1, KW2, KW3, KW4). If the primary key is 192 bits or 256 bits, we XOR the left and right halves of the primary key so as to still obtain a 128-bit key to feed into the key schedule. At the beginning of the key schedule, the intermediate keys KA, KB, ..., KG are generated. From these intermediate keys, all sub-keys required for the encryption and decryption processes are generated. Figure 2 shows the beginning of the key expansion, generating KA, KB, KC and KD. Table 6 describes the complete key expansion process and complements Figure 2.
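Because BCF is a Feistel cipher, decryption reuses the encryption datapath with the sub-key order reversed. The following minimal C sketch illustrates why this works; fo() is a placeholder round function, not the real BCF FO (which uses the key-dependent SBoxes and the diffusion matrix M described above), and the word sizes are chosen only for illustration.

#include <stdint.h>

/* Minimal Feistel sketch: the 128-bit state is split into two 64-bit
 * halves; each round XORs one half with fo(other half, sub-key) and
 * swaps. Decryption is the identical loop with reversed sub-keys. */
typedef struct { uint64_t left, right; } block128;

static uint64_t fo(uint64_t half, uint64_t subkey)
{
    /* placeholder round function -- illustrative only */
    uint64_t x = half ^ subkey;
    return (x << 13 | x >> 51) + (x ^ 0x9E3779B97F4A7C15ull);
}

static block128 feistel(block128 b, const uint64_t k[], int rounds, int decrypt)
{
    for (int r = 0; r < rounds; ++r) {
        /* decryption = same structure, reversed sub-key order */
        uint64_t sk  = decrypt ? k[rounds - 1 - r] : k[r];
        uint64_t tmp = b.right;
        b.right = b.left ^ fo(b.right, sk);
        b.left  = tmp;
    }
    /* undo the final swap so that decrypt(encrypt(x)) == x */
    uint64_t t = b.left; b.left = b.right; b.right = t;
    return b;
}

In hardware, this symmetry is what allows BCF_Core to share a single FO datapath between encryption and decryption, with the decrypt pin merely selecting the sub-key index.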
BCF Decryption

The BCF decryption procedure (Figure 3) can be performed in the same way as encryption, but with the sub-key order reversed. More details are shown in the following pseudocode.

BCF Core IP Design

The BCF core IP is designed in the Verilog HDL language using a top-down method. The design begins by defining the system, then creating the architecture, and finally designing the supporting modules. The BCF core IP symbol and pinout are shown in Figure 4, and Table 7 describes the function of each pin. Figure 5 shows the general architecture of BCF. The key_len pin sets the number of rounds, which depends on the key size: keys of 128, 192 and 256 bits take 15, 16, and 18 rounds, respectively. The decrypt pin determines whether BCF_Core will encrypt or decrypt. The core of the BCF engine contains the encryption, decryption and key schedule; Figure 6 illustrates the core algorithm of the BCF engine. This core architecture is used interchangeably for randomization and the key schedule, resulting in large latency: for the encryption and decryption process, the BCF engine requires 316 clock cycles. Figure 7 shows the BCF timing diagram. The system is controlled by clk. To implement BCF, we needed an FSM (finite state machine); the BCF FSM is shown in Figure 8. Figures 9 and 10 illustrate the key schedule (key expansion) architecture and the FO module, respectively. One FO core is used interchangeably in encryption, decryption, and the key schedule. The advantage of this design is its small area, but it has the disadvantage of large latency. The FSM in Figure 8 and the FO core in Figure 9 are described in more detail in Table 14 and Table 15 (appendix). The BCF Subbyte operations were implemented using LUTs for ease of design and minimization of critical paths [4]. The FO module pins are described in Table 16 (appendix). The substitution box number is obtained from the following equation:

si = ((K >> ((2*i) + (8*r))) & 8'h03) ^ ((((K >> (8*r + 2)) & 8'h03) ^ ((K >> (8*r + 4)) & 8'h03) ^ ((K >> (8*r + 6)) & 8'h03)) & zero)

The MixColumn operation uses a systolic array architecture (Figure 11), with only eight processing elements in MixColumn and one processing element in Subbyte processing. The trade-off of this architecture is an increase in latency from one clock to eight clocks. The design of the processing elements is implemented using the architecture shown in Figure 13. Table 8 describes the processing element pinouts. Pinout R contains the result of the polynomial multiplication of the data from pinouts A and B. The xtime algorithm was used for the polynomial multiplication in the MixColumn operation; for efficiency, it was implemented with shift and XOR operations [3]. Figure 14 depicts the architecture of the xtime algorithm implemented in MixColumn, and Table 9 describes the xtime pinouts. The xtime architecture used in this design has one input and eight outputs, enabling the polynomial multiplication to be completed in one clock.

BCF Integration on the FPGA Altera DE2

The DE2-115 is a development board whose main component is an Altera Cyclone IV 4CE115 FPGA. The NIOS soft-core processor, a 32-bit RISC microprocessor, can be implemented on the FPGA. In this paper, we used a 50 MHz clock for BCF. The BCF module was wrapped with the Avalon interconnect interface; an illustrative host-side access sketch follows.
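To make the register-level interaction concrete, a minimal host-side sketch is given below. The base address and register map are hypothetical placeholders (the real map is given in Table 11, which is not reproduced here); only the IORD/IOWR macros are the standard Nios II HAL register-access API.

#include <stdint.h>
#include "io.h"          /* Nios II HAL register-access macros */

/* Illustrative host-side access to the BCF engine over Avalon-MM.
 * All addresses and offsets below are assumptions for the sketch. */
#define BCF_BASE      0x08000000u  /* assumed Avalon base address     */
#define BCF_REG_CTRL  0            /* assumed: start/decrypt/key_len  */
#define BCF_REG_STAT  1            /* assumed: done flag              */
#define BCF_REG_DATA  2            /* assumed: 4 words of input       */
#define BCF_REG_OUT   6            /* assumed: 4 words of output      */

static void bcf_encrypt_block(const uint32_t in[4], uint32_t out[4])
{
    for (int i = 0; i < 4; ++i)
        IOWR(BCF_BASE, BCF_REG_DATA + i, in[i]);   /* load plaintext   */
    IOWR(BCF_BASE, BCF_REG_CTRL, 1);               /* start, encrypt   */
    while ((IORD(BCF_BASE, BCF_REG_STAT) & 1) == 0)
        ;                                          /* poll done flag   */
    for (int i = 0; i < 4; ++i)
        out[i] = IORD(BCF_BASE, BCF_REG_OUT + i);  /* read ciphertext  */
}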
Figure 15 shows the block diagram of the NIOS interface. To access the BCF IP, NIOS writes/reads the registers listed in Table 11. The functionality test of the BCF IP was carried out by comparing the computation results of the BCF IP with those of a program running on a computer; Figures 16 and 17 show the results. Based on this test, the ciphertext and plaintext values generated by the BCF IP core were found to match those generated by the C program running on the computer. This means that the implementation of BCF on the FPGA is functionally successful. One way to measure the BCF performance is to compare the hardware and software implementations: if the speed of the hardware far exceeds the speed of the software, the hardware implementation can be considered successful. The measurement results are presented in Table 12, which shows that the computation time of the BCF software implementation depends on the processor architecture. The hardware BCF engine speeds up BCF computation by a factor of 488-2,847 compared to software, depending on the processor architecture and the BCF key length. The speeds of BCF and AES were also measured, and the comparison in Table 13 shows that the BCF engine is 44 times faster than the AES hardware accelerator, with both devices operating at a 50 MHz clock. From these data, the BCF engine is suitable for implementation in devices with small computing resources, such as IoT devices, which require a low clock for power saving but a high level of security for sending data to the internet [17].

Conclusion

This paper describes the BCF encryption algorithm, its implementation on the Altera DE2-115 FPGA, and its performance. On Altera DE2-115 boards, the hardware implementation was found to be 488-2,847 times faster than the software implementations, depending on the processor architecture and the BCF key length. BCF is also fast enough to be implemented on devices with small resources, such as IoT devices. In further research, we will perform a correlation power analysis (CPA) attack on the proposed BCF device, based on our previous paper [2].
3,033.8
2020-12-26T00:00:00.000
[ "Computer Science", "Mathematics" ]
Search for diffuse fluxes of cosmic neutrinos with the ANTARES telescope

Summary. - The ANTARES neutrino telescope is the largest operating underwater telescope. Searches for high-energy cosmic sources of neutrinos have been conducted on data collected from 2007 to 2013. Good sensitivity is reached in searching for diffuse fluxes of cosmic neutrinos, both over the whole sky and in defined regions. The most recent results of these searches are reported in this contribution.

- Introduction

Neutrinos are predicted to be produced near the expected cosmic-ray (CR) accelerators, such as supernova remnants, active galactic nuclei or gamma-ray bursts. A diffuse flux of cosmic neutrinos is expected from unresolved individual sources. The energy spectrum of these neutrinos should be similar to that of the primary CRs produced by Fermi shock acceleration, and flatter than the observed atmospheric neutrino background. The IceCube Collaboration has recently reported [1] the observation of a diffuse, all-flavour excess of high-energy neutrinos, not compatible with atmospheric expectations. This observation has opened the path to high-energy neutrino astronomy. The ANTARES detector [2] is currently the largest neutrino telescope in the Northern Hemisphere, located at a depth of 2475 m in the Mediterranean Sea, 40 km from Toulon, France, and continuously operated since 2007. It consists of almost 900 photomultiplier tubes (PMTs), distributed on 12 vertical strings anchored to the sea bed. It detects neutrinos using the Cherenkov light emitted by particles produced in neutrino interactions in the surroundings of the detector. Good pointing accuracy is achieved in reconstructing the arrival direction of the neutrino [3-5].

- Full-sky searches

The analysis of diffuse ν_μ fluxes reported in [6] has been updated, extending the data sample to 2011. The equivalent livetime is 885 days, about a factor of three larger than in the previous analysis. Upgoing events are selected, and the quality parameter Λ from the track reconstruction algorithm, together with the angular error estimate β, is used to reject wrongly reconstructed atmospheric muons [3]. Atmospheric neutrinos are rejected by applying an energy-related cut, based on the estimation of the muon energy loss in the detector [7]. The optimal cut on this variable is chosen through a Model Rejection Factor (MRF) procedure [8], determining the best sensitivity flux E² Φ90%_track = 4.7 · 10⁻⁸ GeV cm⁻² s⁻¹ sr⁻¹. After all cuts, 8.4 events are expected from the background, and 1.4 events should be observed from the signal of [1]. The central 90% of the expected signal after cuts corresponds to an energy range from 45 TeV to 10 PeV. After unblinding, 8 events are found in data, and the upper limit at 90% confidence level is computed using the method from [9]. The uncertainty on the normalisation of the atmospheric neutrino flux measurement [7] with respect to the Bartol flux [10] is taken as the systematic error on the background. Systematics on the signal are evaluated by varying the water properties, PMT efficiency and angular acceptance in the simulation. An all-flavour analysis for cascade events [11] is also performed. A vertex likelihood fit, followed by an energy and direction fit, is performed with a dedicated reconstruction algorithm. This leads to an energy resolution of 0.2-0.3 in log10(E_shower) and a median angular resolution of 6° in the hundred-TeV region. Improved angular resolution is achieved with a more sophisticated reconstruction technique [5].
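For context, the Model Rejection Factor procedure of [8], used by both the track and the cascade analyses, can be summarized schematically (a standard reconstruction of the definition, not a formula quoted from this contribution):

MRF = \bar{\mu}_{90}(n_b) / n_s ,   E² Φ_{90%} = MRF × E² Φ_{test} ,

where n_b and n_s are the background and signal counts expected after cuts for a test flux Φ_test, and \bar{\mu}_{90} is the average 90% C.L. upper limit on the event count over an ensemble of background-only pseudo-experiments; the set of cuts minimising the MRF defines the sensitivity.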
The analysis uses all data from 2007 to 2012, with a total livetime of 1247 days. An event pre-selection is done using a cut on the vertex log-likelihood and requiring signal hits to be present on at least three detector lines. This selection significantly reduces the contribution from track events, including the atmospheric muon background. The MRF optimisation is done on the fitted shower energy and zenith. The optimal cut is found to be E_shower > 10 TeV and θ > 94°, leading to a sensitivity of E² Φ90%_showers = 2.2 (+0.9/−0.7) · 10⁻⁸ GeV cm⁻² s⁻¹ sr⁻¹. 5 ± 3 background events are expected, and the IceCube flux would correspond to 2.1 (+0.5/−0.7) cosmic events added to the atmospheric expectations. Eight events are observed in data after unblinding. The excess over the background has a significance of 1.5σ. Considering an E⁻² spectrum, the 90% C.L. upper limit on the cosmic neutrino diffuse flux is E² Φ90%_showers = 4.9 · 10⁻⁸ GeV cm⁻² s⁻¹ sr⁻¹, including systematics on signal and background. The validity energy range of this limit is 23 TeV-7.8 PeV.

- Special regions

Fermi/LAT data [12] have revealed the presence of two large γ-ray emission regions above and below the Galactic plane. If hadronic mechanisms are responsible for the production of such a signal, diffuse neutrino emission is expected from these regions with various possible energy cut-offs, from a few to some hundreds of TeV [13]. Data collected in the ν_μ CC channel with the ANTARES telescope from 2007 to 2011 are considered in the analysis [14]. The event selection, based on the MRF procedure, involves the quality of the upgoing reconstructed tracks and the energy estimation through an Artificial Neural Network (ANN) [15]. The final selection cut is Λ > −5.14 and E_ANN > 11 TeV, when optimising for the sensitivity to a neutrino flux with a cut-off at 100 TeV. After the unblinding of the on-zone, 16 events are observed, while 11 are expected, on average, from the off-zones. The significance of this excess can be estimated, following the prescription of [16], as 1.2σ, and upper limits are calculated. A further analysis with two additional years of data is reported in [17]. Preliminary results report an excess over the background with a significance of 1.9σ. A diffuse neutrino flux is also expected from the decays of charged mesons produced in CR interactions with the interstellar medium in the Galactic plane. The corresponding emission from neutral mesons is clearly visible in γ-ray observations of the sky [18]. Different models for the neutrino flux coming from CR propagation have been proposed, each leading to different expectations. Broken power-law spectra with spectral index Γ = 2.4-2.5 can describe these behaviours. The same MRF optimisation was applied, leading to a sensitivity (per flavour) of E^{2.4(2.5)} Φ90%_gal = 2.0(6.0) · 10⁻⁵ GeV cm⁻² s⁻¹ sr⁻¹ in the energy range 3-300 TeV. No significant excess is observed in data, and the corresponding upper limit is equal to the sensitivity [19].

- Conclusions

The ANTARES neutrino telescope is in its 7th year of operation. Despite its moderate size, it yields good diffuse-flux sensitivity in the relevant range and the best limits for the Galactic plane and the Fermi bubble regions, thanks to its location and good event reconstruction performance. A joint track-and-shower analysis is being performed, improving the overall sensitivity of the telescope [20]. The next-generation KM3NeT neutrino telescope will eventually further improve the results for diffuse-flux sensitivities [21].
Tribological and Thermal Transport of Ag-Vegetable Nanofluids Prepared by Laser Ablation

Lubricants and fluids are critical for metal-mechanic manufacturing operations, as they reduce the friction and wear of tooling and components and serve as coolants to dissipate the heat generated in these operations. The proper application of these materials improves the operative life of machinery and tooling, and decreases the cost, energy, and time consumed by maintenance, damage, repairs, or the need to exchange pieces/components within the machinery. Natural or vegetable-based lubricants have emerged as a substitute for mineral oils, which harm the environment due to their low biodegradability and have negative effects on human health (e.g., causing skin/respiratory diseases). Thus, finding biocompatible and efficient lubricants has become a technological objective for researchers and industry. This study evaluates soybean-, corn-, and sunflower-based lubricants reinforced with silver (Ag) nanostructures produced by a pulsed laser ablation process. Thermal and tribological evaluations were performed with varying Ag contents, and temperature-dependent behavior was observed. Thermal conductivity improvements were observed for all nanofluids as the temperature and Ag concentration increased (between 15% and 24%), with a maximum improvement of 24% at 50 °C for soybean oil subjected to 10 min of pulsed laser ablation. The tribological evaluations showed improvements in the load-carrying capacity of the vegetable oils, i.e., an increase from 6% to 24% compared to the conventional materials. The coefficient of friction also showed enhancements, between 4% and 15%, with the Ag nanostructures.

Introduction

In metal-mechanic processes, using the appropriate type of fluids and lubricants, together with the proper working materials, can reduce the friction and wear of machinery and tooling components. This also increases machine efficiency in terms of workpiece surface finish and tolerances, thus improving the machines' operative life and eventually reducing vibrations and the required cutting force [1-5]. According to Oak Ridge National Laboratory (USA), wear and friction contribute to about 25% of worldwide total energy loss [6]. Approximately 85% of the lubricants used worldwide are petroleum-based [7]. The ecological issues related to the extensive usage of these oils and the geopolitical strategies regarding crude oil exploitation are the main drivers for the development of novel alternatives from eco-friendly raw materials [8-10]. Liquid properties such as heat capacity, refractive index, and dielectric constant, among others, are also important in determining the size and morphology of the produced nanostructures. Pulsed laser ablation in liquids (PLAL) has been successfully applied to obtain surfactant-free, stable nanofluids of metals, magnetic materials, semiconductors, and ceramics. Based on our previous research, we can successfully synthesize nanofluids of metals, metal oxides, ceramics, and semiconductors; furthermore, thin films were deposited from the respective nanofluid suspensions using spin-coating and dip-coating methods [38-41]. As for sunflower, soybean, and corn oils, past studies have described how heat is dissipated and how their tribological performance can be improved.
Therefore, this experimental study shows the effects of Ag nanostructures, dispersed within vegetable lubricants by a pulsed laser ablation technique, on the lubricants' tribological and heat-transfer characteristics, i.e., wear resistance, coefficient of friction (COF), load-carrying capacity, and thermal conductivity.

Nanofluids Preparation

To prepare the Ag nanofluids, 55 mL of vegetable oil was placed in a glass beaker and the Ag target was immersed in it. A convex lens with a 50 cm focal length was used to concentrate the laser energy on the target, with the immersed target kept 30 cm from the lens. The laser fluence is given by the pulse energy per unit area. An energy meter was used to measure the laser energy (300 mJ/pulse); the nominal laser spot size was 10 cm, while the actual laser spot size on the Ag target was 4 mm in diameter. The laser fluence was therefore calculated as 2.4 J/cm², and this fluence was used to prepare the nanostructures in the various vegetable oils (a worked check of this value is given after this section). After aligning the laser in operation mode, the Q-switched mode was used to obtain high-energy 10-nanosecond pulses for the ablation process. The laser ablation was performed for 5 and 10 min to obtain two different concentrations of nanofluids. After every minute, a new spot on the target was ablated to avoid the effects of continuous irradiation of the same spot. Stable Ag nanofluids were obtained in the lubricants. For reference, Ag nanostructures were also synthesized in isopropyl alcohol (IPA) for the same ablation times. The Ag nanostructures were evaluated for their morphology by scanning electron microscopy (SEM), average size by dynamic light scattering (DLS), concentration by inductively coupled plasma-optical emission spectroscopy (ICP-OES), and elemental composition and chemical states by X-ray photoelectron spectroscopy (XPS).
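A quick arithmetic check of the quoted fluence can be reproduced directly from the stated pulse energy and focused spot size:

```python
# Worked check of the laser fluence: 300 mJ per pulse focused to a 4 mm
# diameter spot on the Ag target, as stated above.
import math

energy_J  = 0.300                            # measured pulse energy, J
spot_d_cm = 0.4                              # focused spot diameter, cm
area_cm2  = math.pi * (spot_d_cm / 2) ** 2

fluence = energy_J / area_cm2
print(f"fluence = {fluence:.1f} J/cm^2")     # ~2.4 J/cm^2, as reported
```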
Morphology by SEM

A morphological analysis of the silver nanostructures was performed with a Hitachi SU8020 scanning electron microscope. Samples were prepared by drying a few drops of the nanofluid on silicon substrates and analyzed in secondary electron mode at an acceleration voltage of 1 kV. Figure 1a,b depicts the Ag nanostructures at two different magnifications, i.e., 20,000× and 40,000×; in general, this process yields a spherical nanostructure morphology. Additionally, it was observed that some of the nanostructures aggregated to form chain-like structures in the nanofluids. These were formed as an effect of pulsed laser ablation in the fluid in the absence of additives or surfactants. The composition and chemical states of the nanostructures obtained by laser ablation were analyzed by the XPS technique, using Thermo Scientific K-Alpha equipment. Figure 1d shows the high-resolution X-ray photoelectron spectrum of the Ag nanostructures, displaying the spin-orbit splitting of the Ag 3d photoelectron spectrum. The major-intensity Ag 3d5/2 peak and the lower-intensity Ag 3d3/2 peak were at binding energies of 368.01 and 374.02 eV, respectively. These binding energy values agree with those of metallic silver [42]. The separation of the peaks was evaluated as 6.01 eV, also in agreement with results reported in the literature [43,44]. The high-resolution spectral analysis confirmed that the nanostructures were in their elemental state.

Thermal Conductivity Characterization

The thermal conductivity of the vegetable nanofluids at various concentrations and temperatures was measured by a transient hot-wire technique. The KD2 Pro equipment was calibrated using glycerol, and the thermal conductivity results were verified to three decimal points. A thermal water bath was used for the temperature-dependent evaluations. The specimens (40 mL glass vials) were thermally equilibrated for 10 min before each set of measurements. Thermal conductivities were compared with each of the control fluids at different temperatures. At least eight readings were taken for each set of experiments; the average values, with standard deviations as error bars, are reported and discussed in this work.
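For context, here is a minimal sketch of the analysis behind transient hot-wire instruments such as the KD2 Pro: at long times the wire temperature rise grows linearly in ln(t) with slope q/(4πk), so the conductivity follows from a linear fit. The heating power q and the conductivity used to generate the synthetic data are assumed values (k chosen as typical of vegetable oils), not instrument parameters from this study, and the early-time and boundary corrections a real instrument applies are ignored.

```python
# Transient hot-wire sketch: recover k from the slope of dT vs ln(t).
import numpy as np

q = 2.0                                   # assumed heating power per unit length, W/m
k_true = 0.17                             # assumed conductivity, W/(m K)

t = np.linspace(1.0, 30.0, 200)           # time, s
dT = q / (4 * np.pi * k_true) * np.log(t) + 0.05
dT += np.random.default_rng(0).normal(0, 0.002, t.size)   # sensor noise

slope = np.polyfit(np.log(t), dT, 1)[0]   # slope = q / (4*pi*k)
k_est = q / (4 * np.pi * slope)
print(f"recovered k = {k_est:.3f} W/(m K)")
```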
Tribological Experimentation

Tribological characterization was performed with a four-ball tribotester configuration to obtain the load-carrying capacity under extreme pressures. In this tribotest, the nanofluids were subjected to a linearly increasing load from 0 to 7200 N at a rotational speed of 500 rpm for 18 s (Table 2), using 12.7 mm diameter spheres of AISI 52100 steel with 60 HRC [45]. The Institute for Sustainable Technologies - National Research Institute (ITeE-PIB, Poland) method was selected due to its sensitivity to extreme-pressure lubricants [1,46-49], as well as being less time consuming. In this study, when the frictional torque reached 10 N·m, seizure occurred and the protective nanofluid film was destroyed; the load at this point corresponded to the seizure load (P_oz). If seizure did not occur by the end of the measurement, P_oz was taken as 7200 N. The limiting pressure of seizure, p_oz, was calculated using Equation (1) as follows [45]:

p_oz = 0.52 · P_oz / (WSD)²   (1)

where p_oz is the limiting pressure of seizure, P_oz is the seizure load, and WSD is the wear scar diameter. An Alicona IF-EdgeMaster optical 3D surface microscope was used to measure the average wear scar diameter of the three lower spheres, obtaining the average in millimeters to calculate the load-carrying capacity (p_oz) of the nanofluid; the greater the p_oz, the better the tribological characteristics of the lubricant.
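A hedged numerical sketch of Equation (1) follows. The 0.52 geometric factor is the value commonly quoted for the ITeE-PIB four-ball method (it projects the applied load onto the three lower-ball contacts) and is assumed here, as are the example load and scar diameter.

```python
# Limiting pressure of seizure from Equation (1); the 0.52 factor and the
# example inputs are assumptions, not values taken from this study.
def limiting_pressure_of_seizure(P_oz_N, wsd_mm):
    """p_oz in N/mm^2, from seizure load P_oz (N) and wear scar diameter (mm)."""
    return 0.52 * P_oz_N / wsd_mm ** 2

# e.g. a run that seizes at 4500 N with an average scar of 2.1 mm:
print(f"p_oz = {limiting_pressure_of_seizure(4500, 2.1):.0f} N/mm^2")
```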
ICP-OES/Elemental Composition Analysis

The elemental composition was quantified using an ICP-OES (Thermo Scientific iCAP 6500-ICP-OES CID). It was observed that doubling the ablation time did not significantly increase the nanostructure filler fraction; this might be due to post-irradiation effects in the nanofluids [50], whereby already-dispersed particles absorb part of the incoming laser energy and the particle size is reduced further as the laser ablation time increases. The concentrations of the nanostructures for 5 and 10 min of pulsed laser irradiation in IPA were 5.54 mg/L and 6.55 mg/L, respectively.

DLS Analysis/Average Size Determination

The average size of the silver nanostructures was determined by the DLS method with a Zetasizer Nano ZS. The DLS readings of the nanofluids obtained by laser ablation for 5 min were as follows: sunflower oil 122 nm, corn oil 255 nm, soybean oil 172 nm, and IPA 67 nm. The DLS curves corresponding to these values, along with the polydispersity index (PDI), are shown in Figure 2. It can be observed that the nanostructures in IPA appear smaller, whereas those in the vegetable oils are bigger. This may be due to the density of the oil: in comparison to the vegetable oils, IPA has a low density. During nanostructure production by laser ablation, a plasma plume is formed in which the corresponding ions of the elements/compounds are present [36]. As the ablation process proceeds, the plasma plume expands and the nanostructures are liberated into the corresponding liquid medium. As the density increases, the time taken for plasma plume expansion also increases, which results in the production of larger particles or the agglomeration of smaller particles. These results are also influenced by temperature, i.e., mainly by the boiling point of the vegetable oils. The refractive indexes of the vegetable oils used for the DLS measurements are tabulated in Table 1.

Thermal Performance

Figure 3 shows the thermal conductivity performance in the temperature-dependent evaluations of the investigated vegetable nanofluids. It was observed that the conventional vegetable oils were not significantly affected by temperature (i.e., less than a 2% increase at 50 °C compared to room temperature). For sunflower oil, improvements of 15% and 18% were observed at 50 °C for 5 and 10 min of laser ablation, respectively. Similarly, soybean oil showed improvements of 21% and 24% at 5 and 10 min of laser ablation, respectively. The effects on corn oil were also satisfactory, showing 15% and 21% increases at 5 and 10 min of laser ablation, respectively (Figure 3). The thermal conductivity of the nanofluids thus increases with increasing temperature and laser ablation process time. This indicates the influence of the Ag nanostructures on thermal conductivity [51-53], and demonstrates the influence of Brownian motion on the thermal transport behavior [54,55].

Tribological Performance

Figure 4 shows a comparison of the p_oz of the various vegetable nanosystems. Sunflower oil showed an increase in load-carrying capacity of 14% and 24% with 5 and 10 min of laser ablation, respectively. Similarly, soybean oil exhibited improvements of 16% and 23% with 5 and 10 min of laser ablation, respectively. The Ag laser ablation process in corn oil also improved the load-carrying capacity, by 6% and 10% with 5 and 10 min of laser ablation, respectively. Diverse studies have presented the roles of nanomaterials in fluids and lubricants [56,57], in which similar behavior is observed: frictional power losses are reduced because the nanostructures convert sliding to rolling friction and form tribofilms on the contact surfaces.
The tribological improvement by the Ag nanostructures could be due to a tribosintering effect on the surfaces, while a spacer effect could arise from their small size and interlayer interactions within the vegetable oils. The effects on the COF during tribotesting under scuffing conditions with Ag nanostructures dispersed by pulsed laser ablation within the vegetable oils are shown in Table 3.

Conclusions

In the present study, a tribological and thermal transport evaluation of environmentally friendly, pulsed-laser-ablated, Ag-based vegetable nanofluids was performed. Soybean, sunflower, and corn oils were used as the liquid media in which bulk silver was ablated by laser irradiation. The incorporation of silver nanostructures within these natural lubricants showed overall positive results. It was observed that the irradiation time had significant effects on the nanofluids, with improvements in their tribological and heat-transfer characteristics. For instance, the limiting pressure of seizure improved compared to the conventional lubricants, ranging from a 6% to 14% increase at 5 min of irradiation, and up to a 24% increase at 10 min of irradiation. This was attributed to the nanostructures displaying rolling-friction behavior, forming tribofilms, and tribosintering on the contact surfaces. In addition, all nanofluids showed temperature-dependent behavior in the thermal transport evaluations, also indicating an interlayer interaction of silver with the natural oils. Thermal conductivity improved in the range of 15% for 5 min of ablation and up to 24% at 10 min of ablation. The results show the potential of this laser ablation technique and the application of natural oils as coolants or for metal-forming processes. Increased environmental awareness is the main driving force for the development of novel technologies; therefore, biodegradable fluids for use in environmentally sensitive areas have great potential to succeed in industrial applications.
Nalfurafine Hydrochloride, a κ-Opioid Receptor Agonist, Induces Melanophagy via PKA Inhibition in B16F1 Cells

Selective autophagy controls cellular homeostasis by degrading unnecessary or damaged cellular components. Melanosomes are specialized organelles that regulate the biogenesis, storage, and transport of melanin in melanocytes. However, the mechanisms underlying melanosomal autophagy, known as the melanophagy pathway, are poorly understood. To better understand the mechanism of melanophagy, we screened an endocrine-hormone chemical library and identified nalfurafine hydrochloride, a κ-opioid receptor agonist, as a potent inducer of melanophagy. Treatment with nalfurafine hydrochloride increased autophagy and reduced melanin content in alpha-melanocyte-stimulating hormone (α-MSH)-treated cells. Furthermore, inhibition of autophagy blocked melanosomal degradation and reversed the nalfurafine hydrochloride-induced decrease in melanin content in α-MSH-treated cells. Consistently, treatment with other κ-opioid receptor agonists, such as MCOPPB or mianserin, inhibited excessive melanin production and induced autophagy in B16F1 cells. Furthermore, nalfurafine hydrochloride inhibited protein kinase A (PKA) activation, which was notably restored by forskolin, a PKA activator. Additionally, forskolin treatment suppressed the melanosomal degradation as well as the anti-pigmentation activity of nalfurafine hydrochloride in α-MSH-treated cells. Collectively, our data suggest that stimulation of κ-opioid receptors induces melanophagy by inhibiting PKA activation in α-MSH-treated B16F1 cells.

Introduction

Autophagy is a self-degradative process that removes damaged or unnecessary organelles as well as misfolded or aggregated proteins [1]. Upon autophagy activation, the isolation membrane encloses a portion of the cytoplasm to form an autophagosome, which engulfs target components and subsequently fuses with the lysosome to form an autolysosome [2]. Autophagy-related genes (ATG) are essential for autophagy activation in the regulation of autophagosome and autolysosome formation [3]. Two ubiquitin-like systems are involved in autophagic vesicle formation: ATG7, an E1-like activating enzyme, binds to ATG8/microtubule-associated protein light chain 3 (LC3) or ATG12, which is then transferred to one of the E2-like conjugating enzymes, ATG3 or ATG10. The ATG12-ATG5 complex then conjugates with ATG16, acting as an E3-like enzyme for the ATG8-PE conjugate, which binds to the autophagosome membrane through a lipidation reaction [4]. Although autophagy is considered a non-selective bulk-degradation process under starvation conditions, it can also remove specific target organelles. For example, organelles such as mitochondria (mitophagy) and peroxisomes (pexophagy) can be eliminated by selective autophagy [5,6]. Transcription factor EB (TFEB) is a major transcriptional regulator of autophagy genes [7]. Under normal conditions, TFEB is retained in the cytoplasm after phosphorylation by the mammalian target of rapamycin (mTOR). However, activation of autophagy in response to different stimuli, including starvation or mTOR inhibition, leads to dephosphorylation of TFEB and its rapid translocation to the nucleus, resulting in the expression of its target genes [8,9]. The skin is the largest organ of the body and serves as a protective barrier against various stress stimuli, such as UV exposure. Structurally, the epidermis, the outer layer of the skin, is mainly composed of keratinocytes and melanocytes.
The dermis, which is the inner layer of the skin, contains connective tissue and hair follicles. The subcutaneous layer consists of fat and provides the primary structural support to the skin [10]. Melanocytes are specialized melanin-producing cells found in the skin, hair follicles, eyes, and brain, originating from neural crest melanoblasts [11]. In the skin, melanocytes generate melanin pigment in the melanosome and transfer it to the surrounding keratinocytes [10]. A series of reactions, collectively termed melanogenesis, then occurs to synthesize melanin by catalysis with enzyme complexes. Several pigmentation disorders trigger skin discoloration, including melasma, albinism, and vitiligo [12]. Melanosome formation and maturation occur during melanogenesis, and several factors, such as tyrosinase and tyrosinase-related protein 1/2 (TRP1/2), control melanin production [13]. As a transcription factor, microphthalmia-associated transcription factor (MITF) mainly controls the expression of melanogenesis-related proteins, including tyrosinase and TRP1/2 [14]. α-melanocyte-stimulating hormone (α-MSH) transactivates MITF by stimulating the cyclic adenosine monophosphate (cAMP)-cAMP response element-binding (CREB) signaling cascade [14,15]. MITF target genes are enriched in DNA replication and repair, mitotic events, and pigmentation [16,17]. Recently, our group demonstrated that autophagy regulates pigmentation in melanocytes by controlling melanosome-selective autophagy, melanophagy [18]. Hormones produced by glands can circulate throughout the body to trigger different effects in target cells, tissues, and organs [19]. However, the endocrinological regulation of autophagy and melanophagy remains unexplored. Therefore, we screened an endocrinology-hormone library in B16F1 melanoma cells to identify novel hormone-associated melanophagy regulators and identified nalfurafine hydrochloride as the most potent inducer of melanophagy. Opioid receptors are G-protein-coupled receptors (GPCRs); nalfurafine hydrochloride is highly selective for the κ-opioid receptor and has been approved for treating central pruritus in patients with liver disease [20]. However, the effect of nalfurafine hydrochloride on skin pigmentation has not been investigated. In this study, we observed that activation of the κ-opioid receptor with nalfurafine hydrochloride strongly inhibited pigmentation by promoting melanosomal autophagy (melanophagy) in B16F1 cells.

Cell-Based Hormone Library Screening

For cell-based hormone library screening, an endocrinology-hormone library was purchased from TargetMol (L2400; Boston, MA, USA). B16F1/GFP-LC3 cells were seeded in 96-well plates. After 24 h, the library compounds were added to each well at 1, 10, and 100 µM. The GFP-LC3 puncta in the cells were monitored using fluorescence microscopy. The experiments were repeated twice and yielded consistent results.

Melanin Content Assay

Melanin content was determined using a slightly modified version of a previously described method. To measure melanin content, B16F1 cells were harvested by trypsinization and dissolved in solubilization buffer at 100 °C for 30 min. The relative melanin content was determined by measuring the absorbance at 405 nm using a microplate reader (BioTek, Santa Clara, CA, USA).

Autophagy Analysis and Melanophagy Assay with Fluorescent Punctation

For the autophagy assay, B16F1/GFP-LC3 cells were treated with nalfurafine hydrochloride (100 µM) or ARP101 (10 µM).
Autophagy was determined by the number of cells with GFP-LC3 punctate structures, indicative of autophagosomes, via fluorescence microscopy (IX71, Olympus, Tokyo, Japan). For the melanophagy assay, B16F1/TPC2-mRFP-EGFP cells were seeded onto coverslips in 12-well plates. The cells were pre-treated with α-MSH (0.5 µM) for 48 h and then incubated with nalfurafine hydrochloride (100 µM) in the presence or absence of bafilomycin A1 (100 nM) for 24 h. Subsequently, cells were washed with phosphate-buffered saline (PBS, pH 7.4), fixed with 4% paraformaldehyde at room temperature for 20 min, and then washed with PBS. After mounting, cells were evaluated under a confocal microscope (LSM 800; objective C-Apochromat 40×/1.2 W Corr UV-VIS-IR M27; Carl Zeiss, Thornwood, NY, USA). The number of cells with red punctate structures was counted, and the findings are presented as a percentage of a total count of 200 cells.

Western Blotting

All lysates were prepared using 2× Laemmli sample buffer (Bio-Rad, Hercules, CA, USA). Total protein was measured using the Bradford assay (Bio-Rad), according to the manufacturer's instructions. The samples were separated by SDS-polyacrylamide gel electrophoresis (PAGE) and transferred to polyvinylidene fluoride (PVDF) membranes. After blocking with 4% skim milk in Tris-buffered saline supplemented with Tween-20, the membranes were incubated with primary antibodies, including anti-LC3 (NB100-2220) and anti-ACTA1 (MAB1501, Sigma-Aldrich, St. Louis, MO, USA), among others. For protein detection, membranes were incubated with HRP-conjugated secondary antibodies (Pierce, Rockford, IL, USA). Protein levels were further quantified using CS Analyzer software (ATTO, Tokyo, Japan).

Statistical Analysis

Data were obtained from at least three independent experiments and are presented as the mean ± SEM. Statistical evaluation of the results was performed using one-way ANOVA. Data were considered significant at p < 0.05 (*), p < 0.01 (**), and p < 0.001 (***).
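A minimal sketch of this statistical treatment (group means with SEM, then a one-way ANOVA across treatment groups) is given below; the melanin-content values are made-up placeholders, not data from this study.

```python
# Mean +/- SEM per group and a one-way ANOVA, as described in the Methods.
import numpy as np
from scipy import stats

control    = np.array([100.0, 104.2,  97.8])   # relative melanin content, %
a_msh      = np.array([168.5, 175.1, 171.9])   # hypothetical a-MSH group
a_msh_nalf = np.array([118.3, 112.6, 121.4])   # hypothetical a-MSH + nalfurafine

for name, g in [("control", control), ("a-MSH", a_msh), ("a-MSH+Nalf", a_msh_nalf)]:
    print(f"{name:>11}: {g.mean():6.1f} +/- {stats.sem(g):.1f} (SEM)")

f_stat, p = stats.f_oneway(control, a_msh, a_msh_nalf)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p:.2e}")
```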
Nalfurafine Hydrochloride Induces Autophagy Activation in B16F1 Cells

Autophagy is an important quality-control system in skin aging. To identify novel autophagy regulators in the skin, we established a stable GFP-LC3 cell line in B16F1 cells (B16F1/GFP-LC3) [18] and performed cell-based high-content screening with an endocrinology-hormone library. From this screening, we identified nalfurafine hydrochloride as a potent inducer of autophagy in B16F1 cells. To validate the screening results, B16F1/GFP-LC3 cells were treated with either nalfurafine hydrochloride or ARP101, a potent inducer of autophagy [21]. As shown in Figure 1A, the formation of punctate GFP-LC3 substantially increased in nalfurafine hydrochloride-treated cells. To further address the increased autophagic flux induced by nalfurafine hydrochloride, B16F1 cells were treated with bafilomycin A1. LC3-II accumulated to higher levels in cells treated with nalfurafine hydrochloride plus bafilomycin A1 than in control cells. These results indicate that nalfurafine hydrochloride is a potent autophagy inducer in B16F1 cells (Figure 1B). Transcription factor EB (TFEB) is a major regulator of autophagy and lysosomal biogenesis, and we observed that treatment with nalfurafine hydrochloride induced nuclear translocation of TFEB in B16F1 cells (Figure 1C) [22]. Torin1, a potent mTOR inhibitor, was used as a positive control. TFEB phosphorylation was evident at the basal level; however, treatment with nalfurafine hydrochloride or Torin1 induced dephosphorylation of TFEB in B16F1 cells (Figure 1D). Concordantly, treatment with nalfurafine hydrochloride decreased the phosphorylation of p70S6K in α-MSH-treated B16F1 cells (Figure 1E), suggesting that nalfurafine hydrochloride activates autophagy by inhibiting mTOR activation.

Nalfurafine Hydrochloride Promotes Melanosomal Degradation by Inducing Melanophagy

Our group recently reported that the induction of autophagy controls melanin content [18,23]. To examine the whitening effect of nalfurafine hydrochloride, B16F1 cells stimulated with α-MSH were incubated with nalfurafine hydrochloride or arbutin, a potent anti-melanogenic agent [24]. Despite the strong melanogenic stimulus induced by α-MSH, nalfurafine hydrochloride significantly reduced the melanin content in B16F1 cells (Figure 2A). Cellular organelles can be degraded by selective autophagy [25]. As we observed that nalfurafine hydrochloride decreased melanin content and induced autophagy in B16F1 cells, we further examined its effect on melanophagy. To this end, we developed a melanophagy monitoring system using TPC2, a melanosome-membrane protein, fused to tandem fluorescent tags (mRFP-EGFP). Similar to the mRFP-EGFP-LC3 protein used in autophagy flux assays, the basic principle of the tandem assay involves the difference in pH sensitivity of the red (mRFP) and green (EGFP) fluorescent proteins [26]. During melanophagy, targeted melanosomes are enclosed by autophagosomes, which are subsequently transported to lysosomes.
In lysosomes, the green fluorescence signal is readily quenched, as GFP is more acid-sensitive than RFP, whereas the red signal remains stable, indicating melanophagy. Based on this novel monitoring system, B16F1/TPC2-mRFP-EGFP cells were treated with nalfurafine hydrochloride in the presence or absence of bafilomycin A1. As shown in Figure 2B, nalfurafine hydrochloride treatment increased the number of RFP-positive dots, and this increase was blocked by bafilomycin A1. To investigate melanosome-selective autophagy, we further examined other organelles, including the mitochondria, ER, Golgi, and peroxisomes, in nalfurafine hydrochloride-treated cells. Consistently, melanosomal proteins such as tyrosinase were degraded; however, other organelle membrane proteins, including the mitochondrial protein TOMM20, the ER protein P4HB, the Golgi protein FTCD, and the peroxisomal protein ABCD3, were not substantially altered in nalfurafine hydrochloride-treated cells (Figure 2C), suggesting that nalfurafine hydrochloride selectively induces melanosomal degradation. Next, we investigated the effects of autophagy inhibition on nalfurafine hydrochloride-induced melanophagy. ATG5 is an essential autophagy regulatory protein involved in the extension of the phagophore membrane in the autophagosome; thus, loss of ATG5 almost completely blocks autophagy activation [27]. Notably, knockdown of Atg5 suppressed the nalfurafine hydrochloride-induced decrease in melanin content (Figure 3A).
Moreover, the depletion of Atg5 restored the reduced levels of tyrosinase in nalfurafine hydrochloride-treated cells (Figure 3B). In addition, inhibition of autophagic flux by bafilomycin A1 also restored the melanin content decreased by nalfurafine hydrochloride (Figure 3C). Collectively, these results further suggest that nalfurafine hydrochloride induces melanosomal degradation by promoting melanophagy in B16F1 cells.

Activation of the κ-Opioid Receptor Induces Melanophagy in B16F1 Cells

Nalfurafine hydrochloride is a selective kappa (κ)-opioid receptor agonist [28]. Therefore, we examined the effect of κ-opioid receptor activation by other potent agonists, such as MCOPPB and mianserin, on melanophagy in B16F1 cells. Similar to nalfurafine hydrochloride, MCOPPB and mianserin also strongly induced autophagic GFP-LC3 puncta and accumulation of LC3-II in B16F1 cells (Figure 4A,B). Consistently, both MCOPPB and mianserin significantly inhibited the excessive melanin content in α-MSH-stimulated B16F1 cells (Figure 4C). These results suggest that stimulation of the κ-opioid receptor promotes melanophagy activation in B16F1 cells.

Inhibition of PKA Mediates Melanophagy in Nalfurafine Hydrochloride-Treated Cells

Opioid receptors are widely expressed throughout the nervous system, and their physiological roles there have been intensively elucidated [29,30]. However, the association between κ-opioid receptors and skin melanogenesis has not been explored. Therefore, we further investigated the potential regulatory mechanism of κ-opioid receptor-mediated melanophagy. It has been reported that activation of the κ-opioid receptor inhibits the cyclic adenosine monophosphate (cAMP)/protein kinase A (PKA) signaling pathway [31,32]. Consistent with this notion, we observed that treatment with nalfurafine hydrochloride inhibited PKA phosphorylation in α-MSH-stimulated B16F1 cells (Figure 5A).
However, PKA phosphorylation was restored by treatment with forskolin, which directly increases intracellular cAMP levels by activating adenylyl cyclase (Figure 5A) [33]. Previously, we found that nalfurafine hydrochloride reduces the phosphorylation of p70S6K, a downstream target of mTOR signaling (Figure 1E). Therefore, we investigated the role of cAMP in nalfurafine hydrochloride-induced mTOR inhibition. As shown in Figure 5A, treatment with forskolin recovered the decreased phosphorylation of p70S6K, suggesting that activation of cAMP/PKA signaling counteracts autophagy by modulating the mTOR pathway. We then examined the inhibitory effect of PKA activation on melanophagy in nalfurafine hydrochloride-treated cells. The enhancement of GFP-LC3 puncta by nalfurafine hydrochloride was significantly reduced by combined treatment with forskolin in B16F1/GFP-LC3 cells (Figure 5B). Furthermore, treatment with forskolin significantly inhibited the autophagic degradation of melanosomes by nalfurafine hydrochloride in B16F1/TPC2-mRFP-EGFP cells (Figure 5C). Consistent with these results, nalfurafine hydrochloride treatment reduced melanin content in α-MSH-stimulated B16F1 cells, while forskolin restored the reduced melanin content in nalfurafine hydrochloride-treated cells (Figure 5D). Collectively, our results suggest that κ-opioid receptor agonists induce melanophagy by inhibiting PKA activation in B16F1 cells.

Discussion

As melanin is the primary determinant of mammalian skin pigmentation, disorders of melanin production and melanosome transport to keratinocytes are associated with various pigmentary diseases, such as melasma, vitiligo, and ash-leaf spots [34]. Although the quality and quantity control of melanosomes by melanophagy is a vital mechanism for understanding pigmentary diseases, the precise regulatory events underlying melanophagy remain largely unknown.
Hormones are strongly implicated in the maintenance of skin homeostasis; thus, disturbances in hormonal regulation are involved in various skin perturbations [35]. In this study, we screened an endocrine-hormone library and identified several candidate autophagy inducers, such as moxisylyte hydrochloride, tamsulosin hydrochloride, balicatib, VTP27999, aminoglutethimide, and metyrapone, as well as nalfurafine hydrochloride. Notably, it was previously reported that aminoglutethimide and metyrapone induce autophagy in different cell types [36,37]. In this study, a newly developed monitoring system for melanosomal degradation (B16F1/TPC2-mRFP-EGFP cells) revealed that nalfurafine hydrochloride, a selective agonist of the κ-opioid receptor, induces melanophagy (Figure 2B). Nalfurafine hydrochloride has previously been used clinically for treating itching in patients undergoing kidney dialysis and in those with chronic liver diseases [38]. Opioid receptors contribute to numerous physiological processes, including pain control, reproduction, growth, respiration, and immune reactions [39]. Among them, the κ-opioid receptor is predominantly expressed in the central nervous system, but it is also expressed in the adrenal medulla, digestive tissues, heart, kidney, placenta, peripheral vasculature, uterus, and immune cells [30,40]. Notably, the κ-opioid receptor is upregulated in several solid tumors, is associated with cancer development and poor prognosis, and mediates immunosuppressive effects [41,42]. Despite the multiple functions of the κ-opioid receptor in various tissues, its role in skin pigmentation had not yet been elucidated. In this study, we addressed the effect of κ-opioid receptor activation on the regulation of skin pigmentation through the effect of nalfurafine hydrochloride on melanosomal degradation. Blockade of autophagy by Atg5 knockdown or bafilomycin A1 substantially restored the reduced melanin content and inhibited the autophagy induced by nalfurafine hydrochloride in α-MSH-stimulated B16F1 cells (Figure 3). Our findings therefore support the hypothesis that stimulation of the κ-opioid receptor with nalfurafine hydrochloride decreases melanin content by activating melanophagy. Hyperactivation of the κ-opioid receptor with dynorphin, an endogenous opioid peptide, in mouse hippocampal neurons exerts an anti-epileptic effect by activating the mTOR signaling pathway, a major autophagy regulatory pathway [43]. In addition, stimulation of the κ-opioid receptor by the chemical agonist U50488H protects against hypoxic pulmonary hypertension by inhibiting autophagy via adenosine monophosphate-activated protein kinase (AMPK)-mTOR signaling [44]. These reports suggest that activation of the κ-opioid receptor inhibits autophagy by activating the mTOR pathway. Nonetheless, we found that treatment with nalfurafine hydrochloride did not activate but rather inhibited the mTOR pathway in B16F1 melanoma cells (Figure 1C-E). TFEB activity is largely controlled by its subcellular localization. Phosphorylated TFEB is sequestered in the cytosol, and transcriptional induction of its target genes is therefore inhibited; in contrast, dephosphorylated TFEB rapidly translocates to the nucleus to promote the expression of its target genes [7]. Similar to Torin1, treatment with nalfurafine hydrochloride induced the translocation of TFEB by inhibiting mTOR signaling in B16F1 cells (Figure 1C,D).
These results suggest an alternative mechanism of nalfurafine hydrochloride in autophagy activation via regulation of mTOR signaling in B16F1 cells. Therefore, we explored another potential mechanism for nalfurafine hydrochloride-mediated melanophagy. Opioid receptors are G-protein-coupled receptors (GPCRs) that mediate multiple intracellular signaling pathways by modulating cAMP and calcium [30,45,46]. Thus, stimulation of the κ-opioid receptor activates signaling kinase cascades, including G-protein-coupled receptor kinases and mitogen-activated protein kinase (MAPK) proteins [30,46]. Some opioid receptors transduce signals through inhibitory G proteins (G_i) to inhibit adenylyl cyclase, subsequently decreasing cAMP production and inactivating PKA. Concordantly, activation of the κ-opioid receptor with nalfurafine hydrochloride/TRK820 inhibits the cAMP/PKA signaling pathway to suppress vascular endothelial growth factor receptor 2 (VEGFR2) expression in endothelial cells [31,32]. Furthermore, treatment with TRK820 sufficiently blocked tumor development and angiogenesis in a xenograft mouse model [31]. In contrast, treatment with the µ-opioid receptor agonist DAMGO or the δ-opioid receptor agonist SNC80 did not prevent angiogenesis in human umbilical vein endothelial cells [31]. These reports suggest that κ-opioid receptors suppress angiogenesis by inhibiting cAMP/PKA signaling. Notably, treatment with α-MSH stimulates the melanocortin 1 receptor (MC1R) to activate adenylyl cyclase, which increases cAMP levels. Thus, cAMP-inducing agents, such as forskolin, lead to increased melanin content and expression of melanin-producing proteins such as tyrosinase [47,48]. However, the effect of cAMP on melanophagy has not been elucidated. It has been reported that the cAMP signaling pathway is linked to AMPK, a key regulatory protein for mTOR signaling; for example, elevated cAMP promotes AMPK phosphorylation at Thr172, which subsequently promotes autophagy by inhibiting mTOR signaling [49]. In this study, we confirmed that nalfurafine hydrochloride inhibited the phosphorylation of PKA and p70S6K in α-MSH-treated B16F1 cells (Figure 5A). However, combined treatment with forskolin and nalfurafine hydrochloride largely reversed the decreased phosphorylation of PKA and p70S6K, as well as the reduced melanin content, in α-MSH-treated cells (Figure 5A,D). Notably, forskolin inhibited nalfurafine hydrochloride-induced melanophagy in α-MSH-treated B16F1 cells (Figure 5C). A recent study reported that cAMP may inhibit or promote autophagy depending on the cell type [50]. The cAMP/PKA signaling cascade is compartmentalized in distinct functional units termed microdomains [51]. Our findings suggest that inhibition of cAMP/PKA signaling promotes the autophagy-dependent clearance of melanosomes in B16F1 cells. Thus, further studies on downstream pathways, including CREB and MAPK proteins, and on transcriptional control by MITF and TFEB, will help clarify the underlying mechanism of melanosome degradation in nalfurafine hydrochloride-treated cells.

Conclusions

In conclusion, our findings suggest that activation of the κ-opioid receptor by nalfurafine hydrochloride promotes melanophagy in α-MSH-treated melanocytes. We thus provide novel insights into the underlying mechanism of melanophagy and highlight the potential of nalfurafine hydrochloride as an ingredient in skin-care cosmetics.
Playing with universality classes of Barkhausen avalanches

Many systems crackle, from earthquakes and financial markets to the Barkhausen effect in ferromagnetic materials. Despite their diversity in essence, the noise emitted by these dynamical systems consists of avalanche-like events with a broad range of sizes and durations, characterized by power-law avalanche distributions and a typical average avalanche shape, signatures that depend on the universality class of the underlying dynamics. Here we focus on the crackling noise in ferromagnets and scrutinize the traditional statistics of Barkhausen avalanches in polycrystalline and amorphous ferromagnetic films of different thicknesses. We show how the scaling exponents and the average shape of the avalanches evolve with the structural character of the materials and the film thickness. We find quantitative agreement between experiment and the theoretical predictions of models for the magnetic domain wall (DW) dynamics, and thereby elucidate the universality classes of Barkhausen avalanches in ferromagnetic films. In doing so, we observe for the first time a dimensional crossover in the domain wall dynamics, as well as the outcomes of the interplay between system dimensionality and the range of interactions governing the domain wall dynamics on Barkhausen avalanches.

In bulk materials, the scaling exponents are known to vary according to the structural character of the sample, placing polycrystalline and amorphous materials in two distinct universality classes differing in the kind and range of interactions governing the DW dynamics. For films, in turn, universality is still under debate. Barkhausen avalanches in films have been investigated primarily through magneto-optical techniques [32-42] and, only more recently, with the inductive technique [43-49]. The first reliable magneto-optical experiments showed that the magnetic behavior of thin films typically differs from that found in bulk materials, a characteristic entirely due to the dimensionality of the system. A step forward in the subject was given by Ryu and colleagues 37, who addressed the scaling behavior of Barkhausen criticality in a ferromagnetic 50-nm-thick MnAs thin film, a system with essentially two-dimensional magnetization dynamics due to its reduced thickness. From sophisticated magneto-optical observations of the avalanches, the scaling behavior was experimentally tuned by varying the temperature close to, but below, the Curie temperature of this film. The modification of a single scaling exponent, taking place simultaneously with a change in the DW morphology, discloses a crossover between two distinct universality classes, caused by the competition between the long-range dipolar interaction and the short-range DW surface tension. It is worth noting, however, that this is not the whole story. In recent years, our group [43-47] has explored the Barkhausen noise through inductive experiments, bringing to light the scaling exponents of the avalanche distributions and the average avalanche shape for thicker polycrystalline and amorphous films. Strikingly, the results of this wide statistical treatment of Barkhausen avalanches, besides corroborating the universality classes found for bulk materials, suggest that two-dimensional DW dynamics is not shared among films in all thickness ranges. Nowadays, the scaling behavior of Barkhausen avalanches is generally understood in terms of the depinning transition of domain walls 30.
Remarkably, experimental investigations confronted with theoretical predictions and simulations have uncovered that the scaling exponents and average shape of Barkhausen avalanches reflect fundamental features of the underlying magnetization dynamics, such as system dimensionality and the kind and range of interactions governing the DW motion 12,13.

Figure 1. (a) Magnetization curve of a 100-nm-thick ferromagnetic NiFe film submitted to a smooth, slowly varying external magnetic field. The magnification of the curve reveals that the change in magnetization is not smooth, but exhibits discrete and irregular jumps. The jumps of magnetization are due to the jerky motion of the magnetic domain walls in a disordered medium, a result of the interactions between DWs and pinning centers, such as defects, impurities, dislocations, and grain boundaries. In a typical Barkhausen noise experiment, the changes of magnetization are detected by a pickup coil wound around the ferromagnetic material. As the magnetization changes, the respective variation of the magnetic flux induces a voltage signal in the coil that can be amplified and recorded. (b) The crackling response in magnetic systems is the Barkhausen noise, which consists of the time series of voltage pulses detected by the pickup coil. Notice that the Barkhausen noise shown in (b) is proportional to the time derivative of the magnetization in (a). The noise in correspondence with the magnetization jumps is a series of Barkhausen avalanches with a broad range of sizes and durations. The inset shows an example of how the avalanches are extracted. A threshold (dashed line) is set to properly define the beginning and end of each Barkhausen avalanche. Three different avalanches are denoted here by the gray zones, and the duration of the example avalanches is marked by solid intervals. The duration T is thus estimated as the time interval between the two successive intersections of the signal with the threshold. The area underneath the avalanche signal, between the same points, is defined as the avalanche size s.
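The threshold-based extraction described in the caption of Fig. 1(b) can be summarized in a short sketch. The voltage trace below is synthetic noise standing in for a measured Barkhausen signal, and the threshold is arbitrary; only the bookkeeping (threshold crossings, durations T, areas s) is the point.

```python
# Threshold-based extraction of avalanche sizes and durations from a trace.
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-6                                        # sampling interval, s
v = np.abs(rng.normal(0.0, 1.0, 200_000)) ** 3   # toy signal with bursty excursions
threshold = 5.0                                  # arbitrary detection threshold

above = v > threshold
starts = np.flatnonzero(~above[:-1] &  above[1:]) + 1   # upward crossings
ends   = np.flatnonzero( above[:-1] & ~above[1:]) + 1   # downward crossings
if above[0]:
    starts = np.r_[0, starts]                    # trace begins above threshold
if above[-1]:
    ends = np.r_[ends, above.size]               # trace ends above threshold

durations = (ends - starts) * dt                 # T: time between crossings
sizes = np.array([v[a:b].sum() * dt for a, b in zip(starts, ends)])  # s: area
print(f"{sizes.size} avalanches; longest T = {durations.max():.2e} s")
```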
Having established a sophisticated method of extraction of the avalanches due to the low intensity of the signal (see Methods and Fig. 1(b)), we perform a wide statistical analysis, measuring the distributions of avalanche sizes and durations, the joint distribution of sizes and durations, the power spectrum, and the average avalanche shape. So, we probe the influence of the structural character of the materials and of the film thickness on the DW dynamics, and play with universality classes of Barkhausen avalanches in an experimentally controlled manner. Scaling exponents and the average avalanche shape. Theoretical models have always been crucial in the broad field of crackling noise. Quantitative comparison between experiment and predictions is primarily done through scaling exponents. This means that if the theory correctly describes an experiment, the exponents will agree 1. Here we consider three exponents, τ, α, and 1/σνz (see Methods). In the scaling regime, these are defined from P(s) ~ s^−τ, P(T) ~ T^−α, and 〈s〉 ~ T^(1/σνz). Specifically for ferromagnetic films, the key to understanding the Barkhausen avalanche statistics resides in the interplay between system dimensionality and range of interactions. Many approaches capturing essential features of the magnetic systems have been developed to mimic the DW dynamics. So, to interpret our experimental results, we summarize in Table 1 the scaling exponents predicted for two- and three-dimensional systems, with long- and short-range interactions governing the DW dynamics 29-31,50-54. Going beyond power laws, the average avalanche shape, characterized by universal scaling functions, is a sharper tool to identify universality classes 1. There are two types of averages that can be performed to find different universal profiles. The first is the average temporal avalanche shape 〈V(t|T)〉, obtained by averaging over avalanches of a given duration, whereas the second type is the average avalanche shape for a specific size, 〈V(S|s)〉, involving an average over avalanches of the same size; both avalanche shapes follow scaling forms dependent on the universality class through the scaling exponent 1/σνz. So, we also look at the avalanche shape and compare our experimental results with the recent theoretical advances achieved by Laurson et al. 4 (see Methods). Polycrystalline films. Figure 2 shows the Barkhausen avalanche statistics for the polycrystalline films having different thicknesses, and Table 2 presents the measured scaling exponents. For all thicknesses, the distributions in Fig. 2(a-c) show cutoff-limited power-law scaling behavior, revealing genuine scale invariance. The power laws with cutoffs are understood as a fingerprint of critical behavior of the magnetization process 1. The most noticeable feature related to the power-law behavior is that the scaling exponents vary as the film thickness is reduced from 100 to 50 nm. Different sets of exponents support the idea that there are distinct kinds of behaviors, the universality classes. Here we clearly see that the polycrystalline films split into two universality classes. The first class includes films with thicknesses above 100 nm, characterized by exponents τ ≈ 1.50, α ≈ 2.0, and 1/σνz ≈ 2.0 measured for the smallest magnetic field frequency. These results are also shown and discussed in detail in ref. 45.
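As an illustration of how such exponents can be read off from the data, the sketch below estimates 1/σνz from the 〈s〉 versus T relation by a log-log fit, and checks it against the scaling relation (α − 1)/(τ − 1) = 1/σνz, which is invoked as a consistency test in the next paragraph. The binning, tolerance and function names are our choices, not the authors':

```python
import numpy as np

def estimate_inv_snz(sizes, durations, n_bins=25):
    """Estimate 1/(sigma nu z) from <s> ~ T^(1/sigma nu z): average the
    avalanche size in logarithmic bins of duration, then fit a straight
    line in log-log scale."""
    bins = np.logspace(np.log10(durations.min()),
                       np.log10(durations.max()), n_bins)
    idx = np.digitize(durations, bins)
    T_mean = [durations[idx == b].mean() for b in range(1, n_bins) if (idx == b).any()]
    s_mean = [sizes[idx == b].mean() for b in range(1, n_bins) if (idx == b).any()]
    slope, _ = np.polyfit(np.log(T_mean), np.log(s_mean), 1)
    return slope

def check_exponent_relation(tau, alpha, inv_snz, tol=0.15):
    """Consistency test of the relation (alpha - 1)/(tau - 1) = 1/(sigma nu z)."""
    predicted = (alpha - 1.0) / (tau - 1.0)
    return abs(predicted - inv_snz) <= tol, predicted

# Thick polycrystalline films (mean-field values): -> (True, 2.0)
print(check_exponent_relation(1.50, 2.0, 2.0))
# Thin films: (alpha - 1)/(tau - 1) = 0.5/0.33 ~ 1.52, compatible with ~1.6
print(check_exponent_relation(1.33, 1.5, 1.6))
```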
For the films in this first class, we observe well-known rate effects, including the frequency dependence of τ and α, in agreement with earlier findings for bulk polycrystalline materials 31. Moreover, through the comparison between experimental and theoretical exponents, we find that these films exhibit critical behavior consistent with the mean-field theory describing three-dimensional magnets, which predicts τ = 1.50, α = 2.0, and 1/σνz = 2.0. This discloses in polycrystalline films thicker than 100 nm a typical three-dimensional DW dynamics governed by long-range dipolar interactions 29-31, as we can see from Table 1. In contrast to these thickest films, the films thinner than 50 nm belong to the second universality class, characterized by the frequency-insensitive exponents τ ≈ 1.33, α ≈ 1.5, and 1/σνz ≈ 1.6.

Figure 2. Barkhausen avalanche statistics for the polycrystalline films. Here, the solid lines are power laws obtained using Eq. (4) with slopes 1/σνz, the exponent measured from the relationship between 〈s〉 and T for each film. In (a-d), the data are vertically shifted for clarity. The dashed lines are power laws whose slopes correspond to the exponents of the two universality classes found for polycrystalline films. In particular, the experimental results for the universality class that includes the thickest films are also found and discussed in detail in ref. 45.

It is worth remarking that a similar experimental τ value has previously been reported for different crystalline films with thicknesses below 50 nm 35-39. Moreover, on the theoretical side, we verify that the exponents are in good concordance with the predicted set of values τ ≈ 1.33, α ≈ 1.5, and 1/σνz ≈ 1.5 (see Table 1). So, the agreement of experimental results with theoretical predictions and simulations reveals that polycrystalline films thinner than 50 nm have a universal two-dimensional DW dynamics dominated by long-range dipolar interactions 52-54. An important test of consistency with theoretical predictions is provided by the exponent relation (α − 1)/(τ − 1) = 1/σνz 55. We verify that the exponents, within the measurement error, satisfy this equation for all thicknesses. We also observe that the power spectrum in Fig. 2(d) follows a power-law behavior in the range of high frequencies. Thus, we confirm another theoretical prediction, S(f) ~ f^(−1/σνz), corroborating that the very same exponent may describe the scaling regime in the power spectrum and the power-law relationship between 〈s〉 and T. Moreover, it is interesting to notice the remarkable stability of the scaling exponents within each universality class. Specifically, the exponents have similar values even though the magnetic properties, including the magnetic domain structure, magnetic anisotropy and permeability, as well as the density of defects, the stress level, and the thickness itself, change simultaneously 45,56. This result is consistent with theoretical studies predicting that micro- and macroscopic details of the material do not affect the exponents, but only alter the cutoff 1,12,13. In particular, a direct consequence of the interplay of all these changes is that no systematic variation of the cutoff with thickness is found. Further, we focus on the measurement of the average avalanche shape. Figure 3 presents the avalanche shapes for both thick and thin polycrystalline films as representative results for the two universality classes. Notice the striking agreement between experiment and theoretical predictions, including three important features: the symmetry of the shapes, the exponent 1/σνz, and the scaling function.
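To make the comparison with the scaling function concrete, the sketch below shows how the average temporal shape 〈V(t|T)〉 can be built by rescaling avalanches of (approximately) equal duration onto the normalized axis t/T, together with the mean-field reference form, an inverted parabola, recalled in the next paragraph. This is an illustrative sketch; the function names and the interpolation grid are our choices:

```python
import numpy as np

def average_shape(pulses, n_points=50):
    """Average temporal avalanche shape <V(t|T)> on the normalized axis
    t/T, for pulses (arrays of voltage samples) pre-selected to have
    approximately the same duration T."""
    grid = np.linspace(0.0, 1.0, n_points)
    shapes = [np.interp(grid, np.linspace(0.0, 1.0, len(v)), v) for v in pulses]
    return grid, np.mean(shapes, axis=0)

def mean_field_shape(x, T, inv_snz=2.0):
    """Scaling form <V(t|T)> = T^(1/sigma-nu-z - 1) * f(t/T); in mean field
    (1/sigma-nu-z = 2) the universal function f is an inverted parabola."""
    return T ** (inv_snz - 1.0) * x * (1.0 - x)
```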
In films, retardation effects due to eddy currents are suppressed by the sample geometry 44, which avoids the familiar leftward asymmetry found for bulk materials and yields symmetric avalanche shapes 44. It is worth noting that two well-known predictions for mean-field systems are retrieved here: 〈V(t|T)〉 is described in the scaling regime by an inverted parabola, and 〈V(S|s)〉 is given by a semicircle 13,27. Both are found for the films thicker than 100 nm, whose 1/σνz ≈ 2 in the scaling regime, as we can see in Fig. 3(a,b). For the thinner films in Fig. 3(c,d), though, whose 1/σνz ≈ 1.6, the average avalanche shapes differ from the mean-field forms. These findings disclose that the average shapes of the avalanches evolve with the universality class, and are perfectly described by the general scaling forms reported in ref. 4, both in and beyond mean field. Amorphous films. Figure 4 shows the dependence on thickness of the Barkhausen avalanche statistics for amorphous films, while Table 2 presents the measured scaling exponents. Similarly to the polycrystals, the cutoff-limited power-law scaling behavior in the distributions of Fig. 4(a-c) and the power law in the power spectrum of Fig. 4(d) are found for amorphous films. It is noteworthy that, as a test of consistency with theoretical predictions, we confirm that the exponents τ, α, and 1/σνz measured for all thicknesses also satisfy the equation relating these three exponents 55. Curiously enough, at first glance, the exponents τ and α might mislead us, suggesting a common critical behavior for all amorphous films, irrespective of thickness and composition. Moreover, notice in Table 1 that two universality classes have very similar exponents τ and α. So theoretical predictions for both exponents would lead us to two conflicting interpretations, raising doubts about the underlying critical behavior. However, although τ and α behave in a remarkably similar manner, a closer examination of the exponents, including 1/σνz, shows us that the amorphous films split into two distinct universality classes too. Indeed, the scaling relation between 〈s〉 and T is known as a robust quantity and a reliable test to identify universality classes 25. For all amorphous films, no field frequency dependence of the exponents is found. The first universality class includes films with thicknesses above 100 nm and is characterized by exponents τ ≈ 1.28, α ≈ 1.5, and 1/σνz ≈ 1.8, values comparable with those previously reported for bulk amorphous materials 31. Incidentally, the results for the FeSiB films in this class are also found in ref. 46, but we recall them here to reinforce the robustness of the scaling behavior, corroborating that it is affected neither by the composition of the films nor by the strong modifications of the magnetic properties and magnetic domain structure taking place within this thickness range 43,46,57. In addition, the values of the exponents in the first universality class are compatible with τ = 1.27, α = 1.5, and 1/σνz = 1.77 (see Table 1), the predictions of models in which dipolar interactions are neglected in the DW motion 29-31,50,51, as expected for amorphous materials 31.

Figure 4. Barkhausen avalanche statistics for the amorphous films. The solid lines for the distributions are fits using Eqs. (1), (2) and (3), while for the power spectrum, the solid lines are power laws obtained using Eq. (4) with slopes 1/σνz for each film. In (a-d), the data are vertically shifted for clarity. The dashed lines are power laws whose slopes correspond to the exponents of the two universality classes found for amorphous films. In particular, the experimental results for the FeSiB films with thicknesses of 100 and 500 nm are also found in ref. 46. Notice the robustness of the scaling behavior for each universality class; this behavior is clearly not affected by the composition of the films.
Therefore, the exponents suggest that amorphous films thicker than 100 nm present a three-dimensional magnetic behavior with the short-range DW surface tension governing the DW dynamics. Next, amorphous films thinner than 50 nm, also irrespective of composition, are found in the second universality class, characterized by the exponents τ ≈ 1.33, α ≈ 1.5, and 1/σνz ≈ 1.55. Here, it is very interesting to note that amorphous and polycrystalline thin films have similar exponents within the measurement error (see Table 2), suggesting they share the very same DW dynamics. The exponents are in quantitative agreement with τ ≈ 1.33, α ≈ 1.5, and 1/σνz ≈ 1.5 52-54, shown in Table 1. As a direct consequence, we find that amorphous films thinner than 50 nm also present a two-dimensional DW dynamics dominated by long-range interactions of dipolar origin. Last but not least, Fig. 5 presents the avalanche shapes for selected thick and thin amorphous films as representative results for the two universality classes. We clearly see that the average avalanche shapes evolve with the universality class, as expected 4. Noticeably, experiment and theory again agree quite well, including features such as the symmetry of the shapes due to the absence of eddy current effects 44 and the scaling form ascribed to 1/σνz 4. Thus, we corroborate the exponent estimated from the joint distribution of sizes and durations, and we confirm the collapse and form of the average avalanche shapes as a powerful alternative way to estimate the exponent 1/σνz. Discussion. Our findings raise interesting issues on the universality classes of Barkhausen avalanches. By comparing our experiments with theoretical predictions of models for the DW dynamics, we find that polycrystalline and amorphous films with distinct thicknesses assume values consistent with three well-defined universality classes. Specifically, the films split into the following classes of materials: (i) polycrystalline films thicker than 100 nm, presenting a three-dimensional DW dynamics governed by long-range dipolar interactions; (ii) amorphous films thicker than 100 nm, having a three-dimensional magnetic behavior with the short-range DW surface tension governing the DW dynamics; (iii) polycrystalline and amorphous films thinner than 50 nm, with a two-dimensional DW dynamics dominated by strong long-range dipolar interactions. As a consequence, the changes found in the scaling exponents and avalanche shape indicate modifications in the critical behavior of the system, i.e., the system passes from one universality class to another. Why does the scaling behavior change with the film thickness? Noticeably, our results confirm that polycrystalline films have the plain old DW dynamics governed by long-range interactions found for polycrystalline materials 29-31. Hence, we interpret the change of the exponents and the evolution of the avalanche shape, found in polycrystalline films as the thickness is reduced from 100 to 50 nm, as clear experimental evidence of a dimensional crossover in the DW dynamics, from three- to two-dimensional magnetic behavior.
Our results directly reveal that a dimensional crossover in the DW dynamics takes place within the thickness range between 100 and 50 nm for both polycrystalline and amorphous films. But what makes this thickness range special? The thickness has a fundamental role in the magnetic domain structure and DW formation, and it also affects the characteristics of the DW motion. Figure 6 shows representative domain images illustrating the evolution of the magnetic domain structure with thickness. It is noteworthy that similar domain patterns have previously been reported for polycrystalline and amorphous samples with different compositions 15,43,56,58-60. The thickness dependence of the magnetic properties and domain structure has been the focus of many investigations in the last decades and, nowadays, despite the complexity of the issue, its main aspects are well understood 14,15. For our set of samples, films with thicknesses above ~150-200 nm present a stripe magnetic domain structure, a configuration strictly related to the isotropic in-plane magnetic properties and an out-of-plane anisotropy contribution 14,15,43,45,46,56-60. Below this thickness, the films exhibit the magnetic behavior of a classical in-plane uniaxial magnetic anisotropy system, without any out-of-plane anisotropy component, characterized by large in-plane magnetic domains with antiparallel magnetization oriented along the easy axis, separated by various types of domain walls strongly dependent on the film thickness 14,15,43,45,46,56-60. However, although modifications of the magnetic domain structure are found within the thickness range between 100 and 50 nm, the magnetization in these films essentially lies in the plane, suggesting that the domain wall itself plays the major role here in the critical behavior of the magnetization process. In contrast to bulk materials, with their relatively simple magnetic structure and nearly parallel DWs, films show richer and often more complicated domain and DW patterns 12,13,30. In soft ferromagnetic films with in-plane magnetic domains, despite the diversity of DWs (Bloch walls, symmetric and asymmetric Néel walls, and the conspicuous cross-tie wall, the latter being a complex pattern of Néel walls), the basic types of DW are simply the Bloch and Néel walls. The type of domain wall depends on the domain wall energy 61,62, which in turn, for both wall types, depends on the thickness, the domain-wall thickness, the effective magnetic anisotropy, the saturation magnetization and the exchange stiffness constant, or, in other words, results from the sum of the magnetostatic, exchange and anisotropy energy contributions 14,15,61,62. Generally, the domain wall assumes the form of a Bloch wall (in which an out-of-plane stray field exists in the domain wall because the rotation of the magnetic moments occurs in a direction perpendicular to the adjacent domains) when the film is thicker, and it becomes a Néel wall (in which the magnetic moments inside the wall strictly lie in the film plane, thus reducing the magnetostatic contribution to the wall energy) when the film thickness is below a critical value 14,15,61-63. Indeed, classical books 14,15 and reports addressing theoretically and experimentally the stability of DWs in films 61-67 reveal that the well-known transition in which the domain wall passes from the Bloch type to the Néel type takes place in a critical thickness range between 100 and 50 nm.
Hence, we understand that deviations from the critical behavior observed for the thickest films may be ascribed to the thickness, i.e., the smaller geometrical dimension of the system. Specifically, magnetic domains and domain walls are influenced by the film thickness due to the increasing importance of stray fields along the direction normal to the plane 12. Between 100 and 50 nm, the thickness becomes of the same order of magnitude as the DW width, and the stray fields constitute an appreciable source of magnetostatic energy, having a direct impact on the inner structure of the DW 14,15,61,62 and, therefore, on the DW motion. Thereby, from the phenomenological point of view, the dimensional crossover may be seen as a consequence of this change in the type of DW, the Bloch-Néel transition. Due to the loss of one degree of freedom of the DW, with an essentially in-plane distribution of magnetic moments inside the wall, a two-dimensional description of the DW dynamics becomes reasonable for the films thinner than 50 nm. So, do we measure different exponents and avalanche shapes for polycrystalline and amorphous films? Yes, we do. It is natural to ask whether the established link found for bulk materials between the microstructure of the materials and the range of interactions 31 is still valid for films. Indeed, we confirm this relationship for polycrystalline films. Intuitively, one could expect that amorphous films, irrespective of thickness, present a DW dynamics governed by short-range elastic interactions. This is particularly true for all films thicker than 100 nm, which share a common three-dimensional DW dynamics despite significant changes in the magnetic domain structure, as we can see in Fig. 6. And what happens with decreasing thickness? For the thinner amorphous films, though, the crucial agreement between experiment and theory reveals an unexpected critical behavior: films in the two-dimensional regime naturally evolve towards a DW dynamics in which dipolar interactions are stronger than surface tension effects.

Figure 6. Evolution of the magnetic domain structure with thickness. Magnetic domain structure for amorphous FeSiB films with thicknesses of (a) 50, (b) 100 and (c) 500 nm. Films with thicknesses above ~150-200 nm have the same features observed for the 500-nm-thick film, which presents a stripe magnetic domain structure, a configuration strictly related to isotropic in-plane magnetic properties and an out-of-plane anisotropy contribution 14,15,43,45,46,56-60. Below this thickness, the films exhibit the magnetic behavior of a classical in-plane uniaxial magnetic anisotropy system, without any out-of-plane anisotropy component 14,15,43,45,46,56-60. Films with thickness between ~100 and ~150-200 nm present a domain structure similar to that found for the 100-nm-thick film, characterized by large in-plane magnetic domains with antiparallel magnetization oriented along the easy axis. However, with decreasing thickness, we observe the emergence of domain walls with a zigzag pattern, separating the in-plane magnetic domains, as evidenced for the 50-nm-thick film. In (a) and (b), the image size is 400 × 400 μm², whereas it is 30 × 30 μm² in (c). All images are taken at the remanence, after in-plane magnetic saturation. Specifically considering these images, the field is first applied along the vertical direction.
Apparently, a crossover to a universality class describing two-dimensional DW dynamics with short-range interactions is only found when an external parameter, such as temperature, is experimentally altered, thus tuning the scaling behavior according to the dominant interaction in the system by modifying the DW structure 37. The most striking finding here is that the change of exponents and avalanche shape for amorphous films reveals a crossover between two universality classes that is caused both by a change of system dimensionality and by the competition between the short-range DW surface tension and the long-range dipolar interaction. The interpretation that the dominant interaction changes from short-range to long-range, simultaneously with the dimensional crossover, is consistent with the modification of the DW morphology 54 with decreasing thickness. Specifically, the contribution to the scaling behavior of strong long-range interactions of dipolar origin arises due to the appearance of the charged zigzag DW morphology 35-42,52-54 as the thickness is reduced from 100 nm, as we can confirm in Fig. 6. This report is the first to show the dimensional crossover in the DW dynamics and to disclose the outcomes of the interplay between system dimensionality and range of interactions governing the DW dynamics on Barkhausen avalanches. The critical behavior in many systems can be explained by the range of interactions and system dimensionality. Theories and experiments are crucial to explain the signatures of the underlying avalanche dynamics, and they can help to uncover mysteries in a wide variety of systems. However, achieving a global perspective on the universality classes for crackling noise remains an open question. Inspired by numerous challenges in the field, we address here the crackling noise in ferromagnets. We believe that measuring a single power law is almost never definitive by itself. So we scrutinize the traditional statistics of Barkhausen avalanches in polycrystalline and amorphous ferromagnetic films having different thicknesses. Our results show how the scaling exponents and average shape of the avalanches evolve with the structural character of the materials and the film thickness, indicating that these features of the samples play a fundamental role in the signatures of the underlying domain wall dynamics. Specifically, for films thicker than 100 nm, systems with three-dimensional magnetic behavior, the scaling exponents vary according to the structural character of the sample, placing polycrystalline and amorphous materials in distinct universality classes associated with the kind and range of interactions governing the DW dynamics. Moreover, the exponents are dependent on the sample thickness, thus splitting thick and thin films into distinct classes and implying the need for a common two-dimensional description for films thinner than 50 nm, irrespective of the structural character. By comparing our experiments with theoretical predictions, we bring experimental evidence that supports the validity of several models for the DW dynamics. We also reveal that the films split into three well-defined universality classes of Barkhausen avalanches. Through the changes of the scaling exponents and avalanche shape, we observe the dimensional crossover in the DW dynamics and the outcomes of the interplay between system dimensionality and range of interactions governing the DW dynamics on Barkhausen avalanches.
Thereby, we provide a clear picture of the crackling noise in magnetic systems with reduced dimensions. But of course the whole story is not over. After playing with universality classes of Barkhausen avalanches, we wonder how many systems throughout nature share a similar interplay of fundamental features underlying crackling noise. So, let's play! Methods. Ferromagnetic films. We investigate Barkhausen avalanches in polycrystalline and amorphous ferromagnetic films. The films are deposited by magnetron sputtering onto glass substrates, with dimensions 10 mm × 4 mm, covered with a 2-nm-thick Ta buffer layer. The deposition process is carried out with the following parameters: base vacuum of 10⁻⁷ Torr, deposition pressure of 5.2 mTorr with 99.99% pure Ar at a 20 sccm constant flow, and a DC source with a current of 50 mA and 65 W set in the RF power supply for the deposition of the Ta and ferromagnetic layers, respectively. During the deposition, the substrate moves at constant speed through the plasma to improve the film uniformity, and a constant magnetic field of 1 kOe is applied along the main axis of the substrate in order to induce magnetic anisotropy. Structural and magnetic characterizations. The structural characterization is obtained by x-ray diffraction. While low-angle x-ray diffraction is employed to determine the deposition rate and calibrate the film thickness, high-angle x-ray diffraction measurements are used to verify the structural character of each sample. Quasi-static magnetization curves are obtained along and perpendicular to the main axis of the films, in order to verify the magnetic properties. Detailed information on the structural characterization and magnetic properties is found in refs 44-46,56. To obtain further information on the magnetic behavior and magnetic domain morphology, images of the domain structure of the films are acquired by high-resolution longitudinal Kerr effect experiments, on a 400 × 400 μm² sample area, as well as by magnetic force microscopy, visualizing a 30 × 30 μm² sample area. In particular, all images are taken at the remanence, after in-plane magnetic saturation. Barkhausen noise experiments. We record Barkhausen noise time series using the traditional inductive technique in an open magnetic circuit, in which one detects time series of voltage pulses with a pickup coil wound around a ferromagnetic material submitted to a smooth, slow-varying external magnetic field, as we can see in Fig. 1(a). In our setup, the sample and pickup coils are inserted in a long solenoid with compensation for the borders, to ensure a homogeneous magnetic field on the sample. The sample is driven by a triangular magnetic field, applied along the main axis of the sample, with an amplitude high enough to saturate it magnetically. Here we perform experiments with driving field frequencies in the range 0.05-0.4 Hz. Barkhausen noise is detected by a pickup coil (400 turns, 3.5 mm long and 4.5 mm wide) wound around the central part of the sample. A second pickup coil, with the same cross section and number of turns, is adapted in order to compensate the signal induced by the varying magnetic field. The Barkhausen signal is then amplified and filtered using a 100 kHz low-pass preamplifier filter, and finally digitized by an analog-to-digital converter board with a sampling rate of 4 × 10⁶ samples per second. Barkhausen noise measurements for all driving field frequencies are performed under similar experimental conditions.
The time series are acquired just around the central part of the hysteresis loop, near the coercive field, where the DW motion is the main magnetization mechanism and the noise achieves the condition of stationarity 12,13,25. In particular, for each ferromagnetic film, the following analyses are obtained from 200 time series. Statistical analysis of the Barkhausen avalanches. Barkhausen noise is composed of a series of intermittent voltage pulses, i.e., avalanches, combined with background instrumental noise. At a pre-analysis stage, we employ a Wiener deconvolution, which optimally filters the background noise and removes distortions introduced by the response functions of the measurement apparatus in the original voltage pulses, thus obtaining reliable statistics despite the low intensity of the signal. Detailed information on the Wiener filtering is provided in ref. 44. The subsequent statistical analysis of the noise is performed using the procedure discussed in refs 21,31,44,68, in which a threshold is set to properly define the beginning and end of each Barkhausen avalanche. The inset in Fig. 1(b) shows an example of how the avalanches are extracted. The duration T of the Barkhausen avalanche is estimated as the time interval between the two successive intersections of the signal with the threshold. The area underneath the avalanche signal, between the same points, is defined as the avalanche size s. In contrast to magneto-optical techniques that restrict the analysis to the distribution of avalanche sizes, our experiments allow us to perform for films the wide statistical treatment usually employed for bulk materials. Here we identify the universality class of Barkhausen avalanches by measuring the distributions of Barkhausen avalanche sizes and durations, the average size as a function of the avalanche duration, the power spectrum, and the average avalanche shape. We observe that the measured P(s), P(T) and 〈s〉 vs. T avalanche distributions typically follow a cutoff-limited power-law behavior and can be respectively fitted as

P(s) ~ s^−τ exp[−(s/s₀)^n_s],  (1)
P(T) ~ T^−α exp[−(T/T₀)^n_T],  (2)
〈s〉(T) ~ T^(1/σνz) exp[−(T/T₀)^n_ave],  (3)

where τ, α and 1/σνz are the scaling exponents, s₀ and T₀ indicate the position of the cutoff where the function deviates from the power-law behavior, and n_s, n_T, and n_ave are the fitting parameters related to the shape of the cutoff function. In particular, we verify that the exponents are independent of the threshold level, at least for a reasonable range of values. We observe that the measured S(f) also follows a power-law behavior in the high-frequency range of the spectrum, which can be described by 55

S(f) ~ f^(−1/σνz).  (4)

Although the power spectrum has not been considered for the fitting procedure, we confirm the theoretical prediction that the same scaling exponent can be employed to describe the power-law relationship between 〈s〉 and T, as well as the scaling regime of the power spectrum at high frequencies. We go beyond scaling exponents and also focus on the average avalanche shape, a sharper tool for comparison between theory and experiments 1. Here, we obtain both the average temporal avalanche shape, considering the avalanches of a given duration T and averaging the signal at each time step t, and the average avalanche shape for a given size s, obtained by averaging over avalanches of the same size.
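As a sketch of how the cutoff-limited power law of Eq. (1) can be fitted in practice, assuming the exponential cutoff form written above, the snippet below uses scipy; the binning scheme, initial guesses and function names are our illustrative choices, not the authors' analysis pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def p_s(s, a, tau, s0, n_s):
    # Cutoff-limited power law, Eq. (1): P(s) ~ s^-tau * exp[-(s/s0)^n_s]
    return a * s ** (-tau) * np.exp(-(s / s0) ** n_s)

def fit_size_distribution(sizes, n_bins=40):
    """Fit the avalanche size distribution; logarithmic binning is
    customary for distributions spanning several decades."""
    bins = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), n_bins)
    hist, edges = np.histogram(sizes, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers
    mask = hist > 0
    popt, _ = curve_fit(p_s, centers[mask], hist[mask],
                        p0=[1.0, 1.5, np.median(sizes), 1.0], maxfev=20000)
    return dict(zip(["a", "tau", "s0", "n_s"], popt))
```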
8,075.2
2018-01-30T00:00:00.000
[ "Materials Science" ]
MOLE 2.0: advanced approach for analysis of biomacromolecular channels Background Channels and pores in biomacromolecules (proteins, nucleic acids and their complexes) play significant biological roles, e.g., in molecular recognition and enzyme substrate specificity. Results We present an advanced software tool entitled MOLE 2.0, which has been designed to analyze molecular channels and pores. Benchmark tests against other available software tools showed that MOLE 2.0 is by comparison quicker, more robust and more versatile. As a new feature, MOLE 2.0 estimates physicochemical properties of the identified channels, i.e., hydropathy, hydrophobicity, polarity, charge, and mutability. We also assessed the variability in physicochemical properties of eighty X-ray structures of two members of the cytochrome P450 superfamily. Conclusion Estimated physicochemical properties of the identified channels in the selected biomacromolecules corresponded well with the known functions of the respective channels. Thus, the predicted physicochemical properties may provide useful information about the potential functions of identified channels. The MOLE 2.0 software is available at http://mole.chemi.muni.cz. Background The number of known three-dimensional (3D) structures of biomacromolecules (proteins, nucleic acids and their complexes) has increased rapidly over recent years, enabling relationships between structure and function to be analyzed at an atomic level. The functions of biomacromolecules usually depend on interactions with other biomacromolecules as well as ions and small molecules, such as water, messenger and endogenous compounds, pollutants and drugs, which can occupy "otherwise empty spaces" in biomacromolecular structures [1]. Thus, information about the nature of empty spaces in a biomacromolecule can provide valuable insights into its functions. Biomacromolecular empty spaces can be classified as pockets, cavities, voids, channels (tunnels) or pores (Figure 1). A pocket usually refers to a shallow depression on a biomacromolecular surface, whereas a cavity describes a deeper pocket or cleft. If the cavity is encapsulated inside a biomolecule (having no connection to a water accessible surface), it is called a void. A channel or tunnel is a pathway inside a cavity connecting an internal point (typically the deepest apex) with an exterior. A pore is considered here as a channel that passes through the biomacromolecule from one point on the surface to another. The present work focused on pores and channels because they have been shown to play significant roles in many biologically relevant systems. For example, internal pores of ion channels maintain a highly selective ionic balance between the cell interior and exterior [2-6], photosystem II channels are involved in photosynthesis [7,8], ribosomal polypeptide exit channels allow nascent peptides to leave the ribosome during translation [9], and active site access/egress channels enable substrate/product to enter/leave the occluded active sites of various enzymes (e.g., cytochrome P450 [10-15], acetylcholinesterase [16-18], etc.). Information about the nature of active site access paths can also be utilized in biotechnology applications aimed at designing more effective and selective enzymes [19-21].
Unquestionably, identification and characterization of channels are fundamental to understanding numerous biologically relevant processes and serve as a starting point for rational drug design, protein engineering and biotechnological applications. Over the last few years, numerous computational approaches have been developed for detection and characterization of empty spaces in biomacromolecules, particularly proteins [22]. The main strategies used in the developed algorithms can be grouped into four classes [23]. The first class comprises grid-based methods, which project biomacromolecular structures onto a 3D grid, process the void grid voxels and connect them into pockets or tunnels. These methods are used in numerous software tools, such as POCKET [24], LIGSITE [25,26], dxTuber [27], HOLLOW [28], 3V [29], CAVER 1.x [30] and CHUNNEL [31]. Sphere-filling methods belong to a second class. These methods carpet biomacromolecules with spheres layer by layer. A cluster of carpeting spheres is considered a pocket. This method is implemented in PASS [32] and SURFNET [33]. The third class involves slice and optimization methods. These methods split a biomacromolecular structure into slices along a start vector defined by the user, and optimization methods are then used to determine the largest sphere in each slice. These approaches are implemented in the software HOLE [34] and PoreWalker [35]. The fourth class represents methods utilizing Voronoi diagrams, in which the shortest path is searched from a starting point to the biomacromolecular surface. This approach was used in the previous version, MOLE 1.x [19], and it is also utilized in other software tools, e.g., MolAxis [36,37], CAVER 2.0 [38] and CAVER 3.0 [39]. Here, we present an advanced and fully automatic software tool, MOLE 2.0, based on a new, fast and robust algorithm for finding channels and pores. MOLE 2.0 provides an improved approach for channel identification. The algorithm introduces several preprocessing steps that result in increased speed (up to several times faster), accuracy (more relevant channels are identified) and robustness. New capabilities include the computation of pores and better identification of channel start points. It contains extended options for starting point selection and allows improved computation of channel profiles together with estimation of their basic physicochemical properties. The implemented automatic filtering of obtained channels facilitates selection of the relevant channels. MOLE 2.0 offers an innovative user experience, as it can be used effectively even without knowledge of the underlying algorithms, whilst at the same time allowing the tunnel detection algorithm to be tweaked interactively, such that the results are immediately available for inspection and comparison. MOLE 2.0 also introduces a new, intuitive and user-friendly interface. MOLE 2.0 can be used as a stand-alone application or as a plugin for the widely used software PyMOL [40]. Some functionality is also available in a platform-independent manner via the web-based application MOLEonline 2.0 [41]. MOLE 2.0 algorithm The algorithm for finding channels implemented in MOLE 2.0 involves seven steps: i) computation of the Delaunay triangulation/Voronoi diagram of the atomic centers, ii) construction of the molecular surface, iii) identification of cavities, iv) identification of possible channel start points, v) identification of possible channel end points, vi) localization of channels, and vii) filtering of the localized channels (Figure 2).
Step i: computing the Delaunay triangulation/Voronoi diagram In the first step, the Delaunay triangulation of the atomic centers is computed using an incremental algorithm that utilizes input points pre-sorted according to the Hilbert curve [19,42]. The Voronoi diagram is then constructed as the dual of the Delaunay triangulation. The Voronoi diagram can be represented as a graph with vertices corresponding to the circumcenters of the Delaunay tetrahedrons and edges present if two tetrahedrons share a common side (i.e., share exactly three vertices). Steps ii and iii: approximating the molecular surface and identifying cavities The molecular surface is approximated by iterative removal of boundary tetrahedrons from the outermost layers (i.e., tetrahedrons found at the interface between the molecule and the external environment). Boundary tetrahedrons produced by the triangulation are removed in this step if they are sufficiently large to fit a sphere with a given probe radius (tetrahedron T fits a sphere S with probe radius r if the center C of sphere S can be placed inside the tetrahedron and the distance to all vertices of T is greater than or equal to the sum of r and the van der Waals radius of the atom corresponding to the given vertex). Next, tetrahedrons that are too small to fit a sphere with the interior radius are removed. The remaining tetrahedrons form one or more connected components. We call the components that contain at least one tetrahedron on the molecular surface cavity diagrams. It should be noted here that the cavity diagram is a purely geometrical concept to help identify regions of space (volume) that can contain tunnels and only very loosely corresponds to the cavities shown in Figure 1B. Steps iv and v: identifying possible start and end points of channels The algorithm includes two ways to specify potential channel start and end points: Computed: Start and end points are defined as the centers of the deepest tetrahedrons in each cavity. The depth of a tetrahedron is defined as the number of Voronoi edges from the closest boundary tetrahedron. User-defined: Specified by a 3D point (that can also be defined as a centroid of several residues). Next, cavities that have at least one tetrahedron with a centroid within the origin radius from the user-specified point are found. Finally, for each such cavity, the start point is selected as the circumsphere center of the tetrahedron closest to the user-specified point. Potential channel end points are placed in the centers of certain boundary tetrahedrons in such a way that the distance between two end points is at least the cover radius. This is achieved by picking the largest boundary tetrahedron and marking it as an exit, then removing all boundary tetrahedrons within the cover radius. This process is repeated until all non-exit boundary tetrahedrons are removed. Step vi: computing channels Once the potential start and end points have been identified, channels are computed as the shortest paths between all pairs of start and end points in the same cavity diagram. To achieve this, Dijkstra's algorithm is used with edge weights given by the formula w(e) = l(e) / (d(e)² + ε), where l(e) is the length of the edge, d(e) is the distance of the edge to the closest atom van der Waals sphere and ε is a small number to avoid division by zero [19]. At this stage, each channel is represented by a sequence of tetrahedrons. The next step is to approximate the channel centerline by a natural cubic spline of the circumsphere centers of the tetrahedrons.
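The weighted search in step vi amounts to a standard Dijkstra run over the Voronoi graph. The sketch below is a minimal illustration under the weight form given above (which we have reconstructed from the surrounding definitions; the exact functional form used inside MOLE 2.0 may differ), and it uses a hypothetical adjacency-list representation of the graph:

```python
import heapq

def channel_path(graph, start, end, eps=1e-6):
    """Shortest path on the Voronoi graph with edge weights
    w(e) = l(e) / (d(e)**2 + eps), so that short edges lying far from
    the atoms' van der Waals spheres are preferred.

    graph: {vertex: [(neighbor, edge_length, edge_distance_to_vdw), ...]}"""
    dist, prev = {start: 0.0}, {}
    heap, done = [(0.0, start)], set()
    while heap:
        cost, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == end:
            break
        for v, l_e, d_e in graph.get(u, ()):
            w = l_e / (d_e ** 2 + eps)
            if cost + w < dist.get(v, float("inf")):
                dist[v], prev[v] = cost + w, u
                heapq.heappush(heap, (cost + w, v))
    if end != start and end not in prev:
        return None                       # no channel between these points
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]                     # sequence of tetrahedron vertices
```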
Additionally, a "radius spline" is computed that determines the centerline distance to the closest atom van der Waals sphere. Step vii: filtering of channels The above-described steps usually generate a large number of channels. However, many of these channels are either too narrow (i.e., have a bottleneck with a small radius) to be considered relevant or are duplicated (i.e., too similar to each other). To obtain the most relevant channels, the algorithm contains a filter with two criteria. The first criterion deals with bottlenecks using parameters that define the maximum bottleneck length and minimum bottleneck radius. These two parameters ensure that there is enough room for a ligand to pass through each region of the tunnel. The second criterion is necessary because channels generated using steps (i-vi) of the algorithm often have very similar centerlines that only deviate towards the ends of the channels near the molecular surface. Therefore, for practical purposes, these channels can be considered identical. To remove duplicate channels, a parameter called the cutoff ratio is introduced. The centerlines of each pair of tunnels are compared, and if two channels "share" at least the cutoff ratio percentage of the centerline, the longer one is removed. Lining and physicochemical properties of identified channels The channel-lining amino acid residues are the residues that surround the centerline of the channel. The centerline is divided into layers, each defined by the residues lining it. A new layer starts whenever there is a change in the list of residues lining the tunnel along its length. The lining of the channel is then described as a sequence of layer-lining residues. For each layer, the length (the distance between the first and last atom of the layer projected onto the tunnel centerline) and radius (bottleneck) are computed. Additionally, the orientation of each residue is determined to check whether the residue faces the tunnel with its backbone or side-chain moiety. Basic physicochemical properties of protein channels are computed from the set of lining amino acid residues. In MOLE 2.0, the charge according to the amino acid side-chain type (Arg, Lys +1e; Glu, Asp −1e), hydropathy [43], hydrophobicity [44], mutability [45] and polarity [46] are computed. The properties are calculated for the unique residues surrounding the channel by averaging tabulated values (Additional file 1: Table S1) for every amino acid residue that has a side chain oriented towards the tunnel. The only exception is charge, which is calculated as the sum of the charges of individual amino acid side chains. For amino acids that have their main chains oriented towards the tunnel, tabulated values for glycine (Gly) are used to compute the hydrophobicity and hydropathy, and the value for asparagine (Asn) is used to evaluate polarity. Amino acid residues that have their main chains lining the channel are not considered when computing mutability. MOLE 2.0 also enables calculation of the weighted physicochemical properties (except the charge) of the channel. The weighted properties are evaluated by applying the above methods separately for each layer and then computing the weighted average, where the weight is given by the length of the layer. We note that the calculated physicochemical properties should be interpreted with care, because the calculation rests on the assumption that the side chains forming the channel wall determine the internal environment of the channel.
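To illustrate the averaging scheme, the sketch below computes hydropathy and charge for a channel lining. The residue scales shown are a small subset of the Kyte-Doolittle hydropathy values, and the data layout is hypothetical; the values actually used by MOLE 2.0 are tabulated in Additional file 1: Table S1:

```python
# Kyte-Doolittle hydropathy values for a few residues (illustrative subset).
HYDROPATHY = {"ILE": 4.5, "VAL": 4.2, "SER": -0.8, "GLY": -0.4,
              "ASN": -3.5, "GLU": -3.5, "ASP": -3.5, "LYS": -3.9, "ARG": -4.5}
CHARGE = {"ARG": +1, "LYS": +1, "GLU": -1, "ASP": -1}

def channel_properties(lining):
    """lining: list of (residue_name, side_chain_faces_channel) tuples
    for the unique residues surrounding the channel."""
    hydro, charge = [], 0
    for res, side_chain in lining:
        if side_chain:
            hydro.append(HYDROPATHY.get(res, 0.0))
            charge += CHARGE.get(res, 0)     # charge is a sum, not an average
        else:
            hydro.append(HYDROPATHY["GLY"])  # backbone-facing: counted as Gly
    return {"hydropathy": sum(hydro) / max(len(hydro), 1), "charge": charge}

# e.g. channel_properties([("ARG", True), ("VAL", True), ("GLU", False)])
# -> {'hydropathy': -0.23..., 'charge': 1}
```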
Merging channels to pores The MOLE 2.0 algorithm can compute pores by merging channels. There are three modes for computing pores. The first, automatic mode evaluates pores as "channels" between all pairs of end points in a given cavity. In the second mode, channels are computed among a set of user-selected end points. Finally, the third mode first computes channels from a user-defined start point and then merges them to form a pore. This mode also imposes a so-called "pore criterion", which stipulates that the end points of the pore must be further away than the average length of the channels that formed the pore. In all modes, pores that are too similar are removed using the same criteria as for channels. Complexity of the algorithm The worst-case complexity of the algorithm is O(N² log N), where N is the number of atoms in the molecule. However, in most practical cases, the complexity is O(M log M), where M is the number of vertices in the Voronoi diagram. In the worst case, M = N². However, as shown by Dwyer et al. [47], in most cases M = O(N). Thus, as a result of the use of the incremental algorithm and Hilbert curve ordering, the complexity of calculating the Delaunay triangulation of most molecular structures is O(N log N). Finally, the complexity of all the remaining steps of the algorithm is at most O(M log M). MOLE 2.0 (Figure 3) supports protein files in PDB format. Once the protein is loaded, the GUI provides a full interactive 3D rendering of the protein and the option to tune individual parameters of the channel computation. The GUI displays information about the identified cavities and, once channels or pores are computed, a detailed view of them can be displayed that provides information about the channel's profile, lining and physicochemical properties (Figure 4). Information on the channels can be exported in several formats, including XML, CSV, PDB and PyMOL for enhanced visualization. The command line version of MOLE 2.0 requires the user to specify the input parameters in an XML file. The output can be obtained in XML format as well as a PDB or PyMOL script, together with 3D representations of channels that can be loaded into Jmol [48] (http://www.jmol.org). The complete documentation can be found on the web page http://mole.chemi.muni.cz. Case study: properties of channels of cytochrome P450s BM3 and P450cam Channels were calculated using MOLE 2.0 with parameters set as follows: minimal bottleneck radius 1.25 Å, probe radius 3 Å, surface cover radius 10 Å and origin radius 5 Å. The heme cofactor was used as the start point in all structures, while all other non-protein ("HETATM") groups were ignored. The PDB database contains a relatively large number of X-ray structures of the two selected cytochrome P450s: 43 structures with 54 chains for P450cam (CAM) and 37 structures with 80 chains for P450BM3 (BM3). All crystal structures were divided into monomers and superimposed using the PyMOL 0.99rc program [40]. The identified channels were sorted into specific families according to the nomenclature of Wade and coworkers [15]: channels were included in a particular family if they had at least one point that passed through a 4 Å wide cube in space assigned to a specific area for that channel family (i.e., through the B/C loop for channel 2e). Only the shortest channel in each channel family was selected for each protein structure. Other similar channels were designated as duplicates. The remaining channels were visually checked and meandering channels were also removed.
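The cube-membership test used for sorting channels into families is a simple geometric check; a minimal sketch follows, where the cube centers are structure-specific and the example coordinates are purely hypothetical:

```python
import numpy as np

def in_family_cube(channel_points, cube_center, half_width=2.0):
    """True if any point of the channel centerline lies inside the
    4 A wide axis-aligned cube assigned to a channel family."""
    d = np.abs(np.asarray(channel_points, float) - np.asarray(cube_center, float))
    return bool(np.any(np.all(d <= half_width, axis=1)))

# Hypothetical cube center for, e.g., the region near the B/C loop:
# in_family_cube(points, cube_center=(12.0, 4.5, -7.3))
```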
Duplicates were also excluded from the comparison of physicochemical properties. Results and discussion Benchmarking study MOLE 2.0 was compared with four other software tools: MOLE 1.4 [19], MolAxis [36], CAVER 2.0 [38] and CAVER 3.0 [39] (beta version). The main features of the software tools are listed in Table 1. By comparison, MOLE 2.0 provides the richest set of input and output features and has the advantage that both command line and graphical user interfaces are available. The need for a start point is made easier by the fact that MOLE 2.0 enables active sites annotated in the Catalytic Site Atlas (CSA, http://www.ebi.ac.uk/thornton-srv/databases/CSA/) [49] to be used, as well as automatic identification of start points in a given structure. Data generated by MOLE 2.0 can be exported to PyMOL [40], which is a popular visualization software, and conveniently, MOLE 2.0 can also be called directly from PyMOL via a plug-in module. In the MOLE 2.0 GUI, a user can select and change the channel end points, which may facilitate the detection of complex channels and pores. The calculation of channels can be customized through nine parameters, whose default values enable automatic identification of channels in many common protein structures. Hence, MOLE 2.0 can be readily used by a new user but provides sufficient flexibility for an advanced user. Besides the setup of these parameters, users can adjust the surface of a molecule and the filtering of detected channels. It should be noted that MOLE 2.0 is the only software currently available that allows a user to compute cavities and estimate physicochemical properties of identified channels. The performance of all the considered software tools was compared on a set of thirteen diverse biomacromolecules containing several channels or pores: two RNAs, three … The software tools were used to identify channels with a radius of at least 1.25 Å along most of their length. Because some channels may be "partially closed" by an amino acid side chain, we also considered channels with a radius less than 1.25 Å provided this narrowing was not longer than 3 Å. Such channels may still be biologically active because they allow at least adaptive penetration of a water molecule (radius ~1.4 Å) upon dynamical changes. If two channels shared more than 70% of their length, only the shortest one was reported. This feature eliminated very similar (duplicate) channels. Full details of the setup of all the software tools and post-processing of results are provided in Additional file 1. We used the same start points for all the software tools (Additional file 1: Table S2). Both versions of MOLE (2.0 and 1.4), together with MolAxis, were able to process the largest molecular system considered in the benchmarking, i.e., the large ribosomal subunit containing almost 100,000 atoms. Consistently, MOLE 2.0 displayed the shortest processing times for both small and large systems. For small systems, MOLE 2.0 gave processing times similar to those of MolAxis (one order of magnitude faster than the CAVER tools), whereas for large systems, MOLE 2.0 was one order of magnitude faster than MolAxis, and the CAVER tools were not able to calculate the largest system (large ribosomal subunit 1JJ2) (Figure 6 and Additional file 1: Table S3). Such enhancement of processing times may be a considerable advantage if a large number of structures needs to be processed (e.g., in analyses of structures from molecular dynamics simulations).
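The channel-selection rule used in this benchmark (a radius of at least 1.25 Å along most of the length, tolerating narrowings shorter than 3 Å) can be expressed as a simple filter over radii sampled along the centerline. A minimal sketch, where the sampling step and names are our assumptions:

```python
import numpy as np

def passes_radius_criterion(radii, spacing, r_min=1.25, max_narrow=3.0):
    """Accept a channel if every contiguous stretch narrower than r_min
    (in A) is shorter than max_narrow along the centerline; radii are
    sampled every `spacing` A."""
    run = 0.0
    for r in np.asarray(radii, float):
        run = run + spacing if r < r_min else 0.0
        if run > max_narrow:
            return False
    return True

# A channel with a single 2 A long narrowing still passes:
# passes_radius_criterion([1.6, 1.5, 1.1, 1.2, 1.7], spacing=1.0)  -> True
```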
MOLE 2.0 found channels in all the tested molecules, whereas the other software tools did not detect any channels in some cases: MOLE 1.4 and MolAxis in three cases, CAVER 2.0 in six cases and CAVER 3.0 in five cases (Figure 5 and Additional file 1: Table S4). All software tools predicted a rather similar set of channels. The software tools that had end points localized directly on the convex hull (e.g., MOLE 1.4, CAVER 2.0) predicted longer channels with large radii where the probe left the biomacromolecular surface (this behavior could be easily recognized from the "bulky ends" of the identified channels outside the structure). In the case of gramicidin D, which forms a transmembrane pore, MolAxis and CAVER 2.0 predicted a clearly incorrect set of channels, whereas the other tools identified appropriate channels inside the pore. It should be noted that MOLE 2.0 has a new feature for automatic identification of pores in a biomacromolecular structure, which makes it easier to characterize pores and avoids the need to manually merge two (or more) channels into a single pore (a step that cannot be avoided if one wants to analyze pores with software tools primarily designed for the analysis of channels rather than pores). For several of the molecules containing biologically important channels/pores with known functionality and properties, we evaluated the physicochemical properties with MOLE 2.0 and related them to the known function of the channel/pore (Figure 7 and Table 2). Gramicidin D (1GRM) is known to form a polar pore in membranes (Figure 7A) [50], which was also reflected in the physicochemical properties identified using MOLE 2.0, as the polar part of the pore surface was predicted to be 100%. However, the predicted polarity of the pore was not high. The ribosomal polypeptide exit channel (1JJ2) directs a nascent protein from the proteosynthetic center to the outside of the ribosome [9]. MOLE 2.0 showed that the channel (Figure 7B) is highly polar and lined by amino acid side chains bearing positive charges (7 arginines). In addition, the channel is also lined by 16 RNA backbone phosphate groups. This clearly suggests a fragmented charge distribution along the channel, which is necessary to prevent the nascent peptide from sticking to the channel wall inside the ribosome. In cytochrome c oxidase (1M56), MOLE 2.0 identified two channels with different polarities (Figure 7C), which may be involved in the transfer process required for the proper functioning of this enzyme [51]. The central pore (Figure 7D) of the nicotinic acetylcholine receptor (2BG9) was suggested to be lined by 18 negatively charged amino acids, which explains the experimentally observed selectivity for cation permeation [52]. The final analyzed channel was present in carbonic anhydrase (3EYX), which can utilize the inorganic carbon sources CO₂ and HCO₃⁻ [53]. MOLE 2.0 predicted that the channel (Figure 7E) is highly polar, in agreement with expectations. Taken together, the above findings indicate that physicochemical properties may provide useful information about the nature of a channel and its biological function. However, the predicted physicochemical properties may be highly sensitive to the choice of X-ray structure, as discussed later. Case study: properties of channels in cytochrome P450 BM3 and P450cam Cytochrome P450s (P450) are heme-containing monooxygenases whose active sites are deeply buried inside their structures [11,54] and are connected to the exterior by access channels [15].
Hence, channels are considered to play an important role in the metabolism of P450 substrates [12]. Two bacterial cytochrome P450 enzymes, P450cam (CAM, also known as CYP101) [55] and P450 BM3 (BM3, also known as CYP102) [56], have been extensively studied by X-ray diffraction in both ligand-free and ligand-bound states; to date, more than 80 structures have been published. Thus, both cytochrome P450s are suitable systems for testing the performance of MOLE 2.0 in predicting the physicochemical properties of channels.

Figure 6. Performance of software tools. Time taken for the channel calculation with respect to the number of atoms in a biomacromolecule (cf. Additional file 1: Table S3).

Channel families More channels were identified in BM3 than in CAM structures. As each independent chain within an asymmetric unit can have different channels [57], it is worthwhile testing all chains within a crystal structure for channel identification. Therefore, we analyzed all 80 chains within the 37 BM3 crystal structures and 54 chains within the 43 CAM crystal structures. It should be noted that CAM can be found in either closed or open states, which differ in the conformation of the F/G loop. Channels were found (using the setup described in the Methods section) only in the open CAM structures (i.e., only in 5 crystal structures: 1K2O, 1PHA, 1QMQ, 1RE9 and 1RF9). CYP structures contain several different types of active site access channels, which have been classified by Wade and coworkers according to their position in relation to conserved secondary structures in the cytochrome P450 fold [15]. There are two specifically named channels, which are considered to enable the exchange of water molecules between the active site and the enzyme exterior: the water channel neighboring the B helix, which is the only channel leading to the CYP proximal side [12], and the solvent channel between the β4 sheet and the F and I helices. Other channels are labeled by numerals, and only those that are present either in CAM or BM3 structures are noted here. Channels close to the B/C and F/G loops belong to the 2× family: channel 2a is located close to the β1 sheet and the F/G and B/B' loops, and it has been suggested to be the main access channel of CAM [58,59]; channel 2f neighbors channel 2a and the solvent channel and is located between the β5 sheet and the F/G loop; channel 2b also neighbors channel 2a and is located between the B/C loop and the β1 and β3 sheets; channel 2c neighbors channel 2a and is located close to the B/C loop and the G and I helices; channel 2ac connects channels 2a and 2c and is located between the B/C and F/G loops; channel 2d is located between the N-terminus and the A helix (Figure 8). Variability of results We identified 209 channels along with 73 duplicates within the 80 BM3 chains. Such a large number of channels allowed us to analyze the variability in the geometrical and physicochemical properties of the identified channels between individual X-ray structures of a specific protein. The variability was evaluated as the standard deviation calculated for each channel type (W, S, 2a, 2b, 2c, 2ac, 2d, 2f). Then, the total standard deviation of a given property was calculated as a channel-number weighted average of the channels' individual standard deviations. We also calculated the relative variability as the total standard deviation divided by the channel-number weighted mean value of a given property.
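The variability measure just defined reduces to a channel-number weighted average of per-family statistics; a minimal numpy sketch, with a hypothetical data layout:

```python
import numpy as np

def weighted_variability(prop_by_family):
    """Total and relative variability of a channel property.

    prop_by_family: dict mapping a family label ('W', 'S', '2a', ...) to
    an array of the property values measured in individual chains."""
    counts = np.array([len(v) for v in prop_by_family.values()], float)
    means = np.array([np.mean(v) for v in prop_by_family.values()])
    stds = np.array([np.std(v) for v in prop_by_family.values()])
    total_std = np.average(stds, weights=counts)    # channel-number weighted std
    total_mean = np.average(means, weights=counts)  # channel-number weighted mean
    return total_std, total_std / total_mean        # absolute, relative variability
```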
Table 2 footnotes: (a) the nonpolar channel in Figure 7C (blue); (b) the polar channel in Figure 7C (red); (c) MOLE 2.0 counts the charge on amino acids only, whereas the ribosome channel is also lined by 16 phosphates.
The channel length variation was usually between 10% and 20% of the average channel length, i.e., around 5 Å in the case of BM3. The bottleneck radius showed a deviation of about ±0.23 Å (less than 15%). The variability in the distance of bottlenecks from the start point was rather large, i.e., up to 8 Å (53%). This is not surprising because the position of a bottleneck is sensitive to the actual structure of the channel (and to the conformation of the lining amino acid side chains), i.e., it depends on the choice of X-ray structure [14]. A large variability in the position of bottlenecks has also been observed in molecular dynamics simulations [60]. Given this large variability, we do not recommend viewing the bottleneck position as a robust feature of a channel found in only one crystal structure. The charge along a channel exhibited a deviation on the order of 0.6 e (about 21%). The hydropathy index of amino acids ranges between hydrophilic (−4.5) and hydrophobic (4.5). The variation of this value was on the order of 0.5 (less than 9%). The hydrophobicity index is a similar measure to the hydropathy index but has a smaller range of values, between hydrophilic (−1.14) and hydrophobic (1.81) amino acids. It exhibited a lower variation than the hydropathy index, about 0.14; however, its relative error was similar (less than 9%). It also seemed to be more consistent between systems, as values for the same types of channels did not differ much between the two proteins. Polarity values range from 0 for nonpolar amino acids, through values of about 2 for polar amino acids, to values around 50 for charged amino acids. Polarity can therefore easily distinguish between polar channels and channels lined with charged amino acids. For instance, the solvent channel in BM3 was predicted to have a similar charge to that of channel 2f (−0.7 vs. −0.4). However, the solvent channel showed a significantly higher polarity index (9.4 vs. 2.0 for channel 2f). This indicates that the solvent channel is lined with more charged residues whose charges cancel each other out, whereas channel 2f is mostly lined with nonpolar and polar residues. The variation of the polarity was on the order of ±2.5. The relative error was about 47%; however, this value should be interpreted with care owing to the low polarity of the analyzed channels (the channel-number weighted mean value was only 6.4 out of a possible range of 0-50). Mutability values range from the lowest mutability of 44 for Cys to a value of 177 for the most easily interchangeable Ser. The variation of mutability was on the order of ±3, and its relative error was the lowest of all the indices mentioned (less than 4%). Overall, the geometrical and physicochemical properties of the identified channels typically varied by less than 20%, with the exception of the distance of bottlenecks from the starting point.
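The hydropathy range quoted above (−4.5 to 4.5) matches the Kyte-Doolittle scale, so the per-channel hydropathy index can be illustrated with a short sketch; the channel lining below is hypothetical and serves only to demonstrate the averaging.

# Kyte-Doolittle hydropathy values (hydrophilic -4.5 to hydrophobic 4.5).
KD = {
    "ALA": 1.8, "ARG": -4.5, "ASN": -3.5, "ASP": -3.5, "CYS": 2.5,
    "GLN": -3.5, "GLU": -3.5, "GLY": -0.4, "HIS": -3.2, "ILE": 4.5,
    "LEU": 3.8, "LYS": -3.9, "MET": 1.9, "PHE": 2.8, "PRO": -1.6,
    "SER": -0.8, "THR": -0.7, "TRP": -0.9, "TYR": -1.3, "VAL": 4.2,
}

def channel_hydropathy(lining_residues):
    # Average hydropathy of the residues lining a channel.
    return sum(KD[r] for r in lining_residues) / len(lining_residues)

# Hypothetical lining of a water-conducting channel: mostly polar residues,
# so the average is negative (hydrophilic), as found for the water channel.
print(channel_hydropathy(["SER", "THR", "ASN", "GLU", "GLY", "TYR"]))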
Properties of CAM and BM3 channels
From a geometrical perspective, the most open channels were usually found within the open CAM structures, particularly the 2a channels, which have a bottleneck radius larger than 2.6 Å. Channels belonging to the 2x family (mainly channels 2a and 2f and, in the case of BM3, channel 2b) were predicted to have bottleneck radii large enough to allow substrates/products to pass (>2 Å) in both the CAM and BM3 structures, i.e., comparable to or even larger than the solvent channel bottleneck radius (>1.4 Å, the radius of a water molecule). The most closed channel was the water channel. However, this does not necessarily mean that small molecules cannot pass through it, as it might partially open to allow molecules to enter due to bottleneck fluctuations, as shown previously for the 2b channel within the structure of mammalian cytochrome P450 2A6 [14]. It is also worth noting that the solvent channel was predicted to be ~7 Å longer in CAM than in BM3, whereas the other channels were typically longer in BM3. In contrast, the most open channels 2a and 2f were ~12 Å shorter in CAM than in BM3. However, this was partly because we used a probe radius of 3 Å to construct the overall shape of the protein, and therefore we only detected channels below this radius. The water and solvent channels were clearly the most hydrophilic. The hydrophilicity also appeared to correlate with the polarity of the channels, because the water and solvent channels were also predicted to be the most polar channels. The higher polarity index indicates that polar and charged amino acid residues line the solvent and water channels. On the other hand, the mutability index did not differ significantly between the individual channels. The mutability was also relatively high, which may indicate that the channels are lined with amino acids that can be relatively easily interchanged. This finding is in accord with the relatively low sequence homology between individual members of the CYP family [60]. Ranking the channels according to their average hydrophobicity supported the hypothesis that the water and solvent channels are involved in water transfer into the active site [61], as the water channel was the most hydrophilic channel in both the CAM and BM3 structures, followed by the solvent channel (according to both the hydropathy and hydrophobicity indices). BM3 was also predicted to contain the rather polar channel 2b. The more hydrophobic channels 2f and 2a were present in both the CAM and BM3 structures. Channels 2ac and 2d were more hydrophobic still. Finally, the most hydrophobic channel was channel 2c. However, the last three channels were found rather infrequently, i.e., only in some BM3 structures (Additional file 1: Tables S5 and S6).
Conclusions
We present the advanced software tool MOLE 2.0, designed to analyze molecular channels and pores. We benchmarked MOLE 2.0 against similar software tools and showed that it is faster and capable of analyzing large and complex systems containing up to hundreds of thousands of atoms. As a new feature, MOLE 2.0 estimates the physicochemical properties of the identified channels. We compared the estimated physicochemical properties with the known functions of selected biomacromolecular channels and concluded that the properties correlate with the functions. We also assessed the variability of the physicochemical properties by analyzing a large number of X-ray structures of two members of the cytochrome P450 superfamily. We propose that the physicochemical properties may provide useful clues about the potential functions of identified channels. The software is available free of charge at http://mole.chemi.muni.cz.
Additional file 1: Table S1. Physicochemical properties of amino acid residues and the setup of all software tools used for the benchmarking study. Table S2. Channel starting points used in the benchmarking study. Table S3. Duration of channel calculations for all biomacromolecules used in the benchmarking study. Table S4. Numbers of channels found in the analyzed molecules in the benchmarking study. Table S5. Comparison of geometrical and physicochemical properties of channels detected in CAM structures. Table S6. Comparison of geometrical and physicochemical properties of channels detected in BM3 structures.
Abbreviations: BM3, cytochrome P450 BM3; CAM, cytochrome P450cam. All amino acids are represented by their respective three-letter abbreviations.
Defining the light emitting area for displays in the unipolar regime of highly efficient light emitting transistors Light-emitting field effect transistors (LEFETs) are an emerging class of multifunctional optoelectronic devices. They combine the light emitting function of an OLED with the switching function of a transistor in a single device architecture. This dual functionality gives LEFETs potential applications in active matrix displays. However, the key problems of existing LEFETs thus far have been their low EQEs at high brightness, poor ON/OFF ratios and poorly defined light emitting area - a thin emissive zone at the edge of the electrodes. Here we report heterostructure LEFETs based on a solution processed unipolar charge transport polymer and an emissive polymer that have an EQE of up to 1% at a brightness of 1350 cd/m2, an ON/OFF ratio > 10^4 and a well-defined light emitting zone suitable for display pixel design. We show that a non-planar hole-injecting electrode combined with a semi-transparent electron-injecting electrode enables high EQE at high brightness and a high ON/OFF ratio. Furthermore, we demonstrate that heterostructure LEFETs have a better frequency response (fcut-off = 2.6 kHz) compared to single layer LEFETs. The results presented here are therefore a major step along the pathway towards the realization of LEFETs for display applications.
However, the key problem of existing unipolar heterostructure LEFETs thus far has been their low EQEs (<0.2%) at high brightness, and a poorly defined light emitting area - a thin emissive zone at the edge of the electrodes (close to either the drain or source electrode). In this paper, we report unipolar LEFETs based on solution processed charge transport and emissive polymers with an EQE of up to 1% at a brightness of 1350 cd/m2, with a well-defined light-emitting zone suitable for display pixel design. We show that a non-planar source-drain electrode design strategy combined with a semi-transparent electron-injecting electrode enables a high EQE to be maintained at high brightness (higher by a factor of 10 compared to control LEFETs). In addition, full control over the dimensions of the light emitting area, and hence the aperture ratio, is achieved, allowing for simple pixel design. Furthermore, we demonstrate that the LEFETs can operate at a frequency of 2.6 kHz and have a maximum aperture ratio of 24%. This work therefore represents a major step along the pathway towards the realization of LEFETs for display applications.
Fig. 1a shows the structure of the pixelated non-planar light-emitting transistor device (which we term Pix-LET), and the active channel materials used in this study. The devices were fabricated on a highly n-doped conducting silicon wafer with a SiO2/poly(methyl methacrylate) (PMMA) gate dielectric layer. The light-emitting layer was Super Yellow (SY), which was chosen because its properties are widely reported and it is routinely used as a test material for new architectural concepts in OLEDs and LEFETs. Solution processed poly(2,5-bis(3-tetradecylthiophen-2-yl)thieno[3,2-b]thiophene) (PBTTT) was used as the hole transport layer. For a Pix-LET, the hole and electron injecting electrodes consisted of Au and a semi-transparent CAC stack, respectively. For comparison with the Pix-LET architecture we fabricated two control light emitting transistors. The first control device was a conventional non-planar light-emitting transistor (NPLET-Au/Ca) with Au/Ca as the source/drain electrodes.
The second control device had planar source and drain electrodes of Au and a semi-transparent CAC stack electrode (LET-Au/CAC), respectively. All devices had a channel length of 100 μm and a channel width of 16 mm. Full details of the fabrication and testing protocols are presented in the Methods section. Fig. 2a shows the electrical transfer characteristics of a typical Pix-LET. The relevant electrical output characteristics for the device are shown in Fig. S2. Under p-type voltage bias, the Pix-LET device demonstrates excellent linear and saturation regimes with current ON/OFF ratios of >10^4 and little hysteresis. The hole mobility extracted from the transfer characteristics in the saturation regime was 0.004 cm2/Vs. The measured hole mobility is higher by a factor of ~10^2 than that of Super Yellow-only LEFETs [21], showing that hole transport occurs primarily at the PBTTT/PMMA dielectric interface. The electrical transfer and output characteristics of both control devices are compared in Figs 2a and S2, and it can be seen that both have similar transistor characteristics to the Pix-LET, with comparable mobility (see Table 1). The slightly higher current in the NPLET-Au/Ca devices indicates that the resistive Cs2CO3 layer of the CAC stack that is in contact with the Super Yellow layer affects the electrical properties of the device. Fig. 2b shows the brightness as a function of gate voltage, and Fig. 2c the corresponding EQE versus gate voltage for the Pix-LET and control devices. The EQE of the Pix-LET device increases with the brightness and reaches 1% at 1350 cd/m2. This EQE is an order of magnitude higher than the best performing previously reported LEFETs operating in the unipolar regime and, importantly, is also achieved at higher brightnesses [20,21]. The EQEs of both control devices were also measured and are shown in Fig. 2c and Table 1. It can be seen that both devices have lower EQEs than the Pix-LET. The measured EQEs for the control devices (see Table 1) were 0.09% at 1400 cd/m2 and 0.45% at 1000 cd/m2 for the NPLET-Au/Ca and LET-Au/CAC, respectively. For the Pix-LET device, bright yellow-green light was visible to the eye, with the emission zone defined by the size of the CAC electrode (Fig. 3a). In contrast, the light emission zone of the control LET-Au/CAC device was only partially under the CAC electrode (see Fig. 3b), and for the NPLET-Au/Ca device (Fig. 3c) emission was only observed at the edge of the Ca electrode. Furthermore, the light-emitting zone of the Pix-LET and LET-Au/CAC devices remained underneath the electron-injecting (CAC) electrode and did not spread into the transistor channel. The measured aperture ratio of the Pix-LET device at Vg = −150 V was 24%, which is close to that of a conventional AMOLED pixel (~34%) [9]. The measured aperture ratios of the control NPLET-Au/Ca and LET-Au/CAC devices were significantly lower, at 2.5% and 15%, respectively. The operating mechanism of the Pix-LET device, along with the energy levels of the different materials, is shown in Fig. S3. Under p-type bias holes are injected directly into the PBTTT layer [ionization potential (IP) ~5.1 eV] [20,21] and subsequently into SY (IP = 5.3 eV) [20,21]. Under these conditions holes are the majority carrier species in the active channel. The thin CAC stack (work function of Cs2CO3/Ag ~2.3 eV) [21] injects electrons into the SY layer (EA = 2.9 eV) [24].
Due to the low electron mobility of the SY film, the injected electrons accumulate near the SY/CAC electrode interface, and this results in a much higher density of exciton formation, and hence light emission, directly under the CAC electrode. The higher EQE of the Pix-LET is mainly due to the semi-transparent electrode, which allows greater light output (see Fig. S4). Furthermore, the non-planar device geometry of the Pix-LET reduces the contact resistance for holes and forces the carriers to pass through the emissive layer [20,24], leading to a maximum radiative recombination efficiency of ~38%. The calculated maximum recombination efficiencies of the control NPLET-Au/Ca and LET-Au/CAC were ~3% and ~17%, respectively (see supplementary Table S2). To obtain a more complete picture of the light emitting area and the underlying physics, we measured magnified optical images as a function of gate voltage and drain current (see Figs S5 and S6) for the Pix-LET. The emission zone in the Pix-LET starts from the outside edge of the CAC electrode and spreads inwards until emission occurs from the entire CAC electrode at high current density. These results suggest that: i) the hole density increases and extends spatially near the semi-transparent CAC electrode, as shown in Fig. S3; ii) the CAC electrode enhances electron injection and blocks holes. In the Pix-LET device the electron-injecting electrode includes a resistive 6 nm Cs2CO3 layer, which is an insulator. Hence, the CAC electrode reduces the electrical benefits, i.e. the lower contact resistance, of the metallic non-planar geometry [20,24] (the hole mobility of the Pix-LET is lower by a factor of 10 than that of the NPLET-Au/Ca device). However, the non-planar geometry with Cs2CO3 still provides slightly better electrical characteristics than the planar geometry. This means that the function of the Cs2CO3 electrode at the interface with the SY in the Pix-LET is to block holes and improve electron injection; iii) the blocked holes spread underneath the CAC electrode, leading to recombination directly under the CAC electrode; iv) the high EQE of the Pix-LET reflects a better balance of hole and electron densities and efficient recombination (due to the non-planar geometry) compared to the control devices. For display pixel applications, a well-defined and spatially stable light emitting area is necessary. To avoid changes in the emission zone with differing drain current, an appropriate dimension of the CAC electrode must be chosen to fix the light-emitting area, and hence the aperture ratio, for pixel design. This can easily be achieved by setting the CAC electrode dimension equal to the width of the emission zone at the light turn-on voltage. To evaluate the frequency response of the Pix-LET device, we measured the light intensity as a function of gate modulation frequency (at a fixed DC source-drain voltage). The light output intensity of the Pix-LET device is almost flat up to 2 kHz (see Fig. 4). At higher gate frequencies, the light intensity drops significantly, leading to a cut-off frequency of ≈2.6 kHz at −3 dB. For direct comparison, we also measured the cut-off frequency of the equivalent OLED and single layer LEFET structures [21]. The cut-off frequencies for the single layer LEFET and the OLED were 76 Hz and 60 kHz, respectively. We define the cut-off frequency as the modulation frequency of the gate voltage at which the light output of the system decreases to −3 dB.
The −3 dB frequency is related to the charge carrier transit time (t_tr) [27]. Using the parameter set for the single layer LEFET (L = 100 μm, Vds = 150 V and f−3dB = 76 Hz), we obtain an FET mobility of ~9 × 10^−5 cm2/Vs, which is in agreement with steady-state source-drain current measurements [21]. However, the mobility values obtained from the OLED and LEFET transients differ by two orders of magnitude. This difference in charge carrier mobilities is due to the difference in charge carrier density. The charge carrier density (Q) in the LEFET channel can be tuned and is the product of the gate voltage (V) and the gate capacitance (C), Q = CV. Thus, the nature of the trap states and of trap filling in the bulk (diodes) and at the interface (transistors) is different [29][30][31]. In the case of the PBTTT/SY bilayer LEFETs, using the parameter set L = 100 μm, Vds = 150 V, and f−3dB = 2.6 kHz, we obtain a PBTTT FET mobility of 3 × 10^−3 cm2/Vs, which is again in agreement with the measured steady-state mobility. These results suggest that the cut-off frequency of the Pix-LET devices is independent of the emissive layer and depends mainly on the charge transport material and the channel length.
In summary, we have demonstrated a new display pixel design based on bilayer LEFET devices with a transparent drain electrode, which facilitates charge injection and better light out-coupling, leading to a high external quantum efficiency at usable brightnesses. The device architecture enables decoupling of the low frequency and switching performance of the transistor from the electrical limitations of the emissive material. Our results suggest that the dimension of the CAC electrode and the channel length can be used to set the light-emitting area, and hence the aperture ratio, for pixel designs. Although the operating voltages of the demonstrated Pix-LETs are still high, these could be reduced by a number of approaches, including reducing the channel length and increasing the gate capacitance by employing high-k dielectrics or electrolyte gating [32][33]. The results are a significant advance towards the ultimate goal of solution processed LEFETs and printed organic semiconductors for display applications.
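As a rough numerical cross-check of the transit-time argument above, one can assume the common approximation f−3dB ≈ 1/(2·t_tr) with t_tr = L^2/(μ·Vds); the exact prefactor varies between treatments, so this short Python sketch reproduces the quoted mobilities only to within a factor of order unity.

def mobility_from_cutoff(f_3db_hz, L_cm, Vds_V):
    # FET mobility (cm^2/Vs) implied by a -3 dB light-modulation frequency,
    # assuming f_-3dB ~ 1/(2*t_tr) and t_tr = L**2/(mu*Vds).
    t_tr = 1.0 / (2.0 * f_3db_hz)        # transit time in seconds
    return L_cm**2 / (t_tr * Vds_V)

L = 100e-4                                  # channel length: 100 um in cm
Vds = 150.0
print(mobility_from_cutoff(76.0, L, Vds))   # ~1e-4, cf. ~9e-5 (single layer)
print(mobility_from_cutoff(2.6e3, L, Vds))  # ~3.5e-3, cf. 3e-3 (bilayer)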
Methods
LEFET fabrication and testing. The heterostructure LEFETs were fabricated using 300 nm of SiO2 and 150 nm of PMMA (Mw ~150,000) as the gate dielectric layer on a highly n-doped silicon wafer, as shown in Fig. 1 (a, b and c). Substrates were annealed at 150 °C for 30 min after PMMA deposition, and then the hole transport layer of PBTTT (75 nm) was spun on top of the PMMA at 1500 rpm for 45 seconds followed by 2000 rpm for 15 seconds, as described earlier [21]. Super Yellow (120 nm) was spin-coated on top of the PBTTT layer from a solution of 7 mg/ml in toluene. All thicknesses were determined by a Veeco Dektak 150 profilometer. Two shadow masks were used in combination to define the source and drain electrodes, which were deposited by thermal evaporation in high vacuum to form interdigitated hole-injecting and electron-injecting contacts (see Fig. S1). For the Pix-LET and NPLET-Au/Ca, the hole-injecting Au electrode was deposited directly on top of the PBTTT layer to form a non-planar contact geometry, but for the LET-Au/CAC device the Au electrode was deposited on top of the SY film. The electron-injecting, semi-transparent Cs2CO3/Ag/Cs2CO3 (CAC) stack electrode was deposited on top of the emissive films through successive evaporations of Cs2CO3, Ag, and Cs2CO3 at a pressure of ~10^−6 mbar, as shown in Fig. 1 (a, b and c) and Fig. S1. Thicknesses of 6/10/16 nm in the CAC stack were achieved at evaporation rates of 0.5 Å/s, 1 Å/s and 0.5 Å/s, respectively. The CAC stacks had an average sheet resistance of ~8 Ω/sq. The sheet resistance of the CAC film was measured using a four-point probe meter from Keithlink, while the transmittances were recorded using a UV-vis-NIR spectrophotometer (Cary 5000). For the NPLET-Au/Ca device an 80 nm Ca electrode was evaporated for electron injection instead of the CAC stack. Electrical and optical characterization of the devices was performed using an Agilent B1500A Semiconductor Device Analyzer and an SA-6 Semi-Auto Probe station with a calibrated photomultiplier tube (PMT) positioned over the device. The source-drain current in the transistor channel and the photocurrent in the PMT were recorded to determine the device parameters. The charge carrier mobility and threshold voltage were calculated from the transfer characteristics in the saturation regime using equation (3), Ids = (W μ Ci / 2L)(Vg − Vth)^2, where Ids is the source-drain current, W is the channel width, L is the channel length, μ is the field-effect mobility, Ci is the geometric capacitance of the dielectric, Vg is the gate voltage, and Vth is the threshold voltage. The capacitance of the SiO2/PMMA dielectric layer was estimated by adding the capacitances of the two layers in series. The brightness of the devices was calculated from the photocurrent measured with the PMT by comparison with an OLED of known brightness and light emission area, and then corrected according to the measured emission area of the LEFET. A digital camera connected to an optical microscope was used to image the device emission area. The image was then analyzed by taking an intensity profile across the emission region to calculate the width of the emission zone, which was estimated as the full-width at half-maximum of the image intensity profile. The EQE was calculated (assuming Lambertian emission) using the brightness, source-drain current and emission spectrum of the device, as previously reported [20][21][22][23][24]. Averages were taken over at least 5 devices. Errors given are the standard deviation of the results.
SY OLED fabrication. Glass substrates with pre-etched ITO were purchased from Xinyan Technology Ltd and cleaned using a soft cloth in a 90 °C warm Alconox (detergent) solution. Cleaning was followed by sequential ultrasonication in Alconox, de-ionized water, acetone, and 2-propanol for 15 min each. After drying the substrates under a nitrogen flow, a poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) (Baytron P VPAl4083) film was spin-coated at 5000 rpm. The resulting 30 nm thick layer was baked at 125 °C for 30 minutes in air. All the device edges were cleaned with a wet cloth to prevent current leakage. A solution of Super Yellow was prepared in toluene at 50 °C at a concentration of 7 mg/ml. Super Yellow films were prepared by spin-coating at a spin speed of 3000 rpm. The thickness was ~100 nm, as determined by a Veeco Dektak 150 profilometer. Finally, 6 nm of barium followed by 100 nm of aluminium was thermally evaporated under a vacuum of 10^−6 mbar to complete the devices. The resulting device area was 0.2 cm2, with 6 devices per substrate.
OLED frequency test. The OLED voltage was modulated using an Agilent 33250A function generator connected to a voltage amplifier. The OLED light signal was measured using a GaP detector (Thorlabs) and an SR530 lock-in amplifier. The OLED was biased at 12 V, resulting in a current density of 13 mA/cm2.
LEFET frequency test. The gate voltage was modulated using an Agilent 33250A function generator connected to a voltage amplifier. The source and drain electrodes were biased using an Agilent B1500A Semiconductor Device Analyzer. The output light was measured using a photomultiplier tube and a Hamamatsu C6438 current amplifier. The signal was acquired using a LeCroy Waverunner A6200 oscilloscope at a load resistance of 50 Ω.
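For illustration, the mobility extraction via equation (3) amounts to a linear fit of sqrt(|Ids|) against Vg in the saturation regime; the sketch below uses a synthetic transfer curve, and the capacitance value is a hypothetical placeholder rather than the measured SiO2/PMMA capacitance.

import numpy as np

W, L = 1.6, 100e-4      # channel width 16 mm and length 100 um, in cm
Ci = 1.0e-8             # hypothetical dielectric capacitance, F/cm^2

def saturation_mobility(Vg, Ids):
    # Equation (3): Ids = (W*mu*Ci/2L)*(Vg - Vth)**2, so sqrt(|Ids|) is
    # linear in Vg; the slope gives mu and the intercept gives Vth.
    slope, intercept = np.polyfit(Vg, np.sqrt(np.abs(Ids)), 1)
    mu = 2.0 * L * slope**2 / (W * Ci)
    Vth = -intercept / slope
    return mu, Vth

# Synthetic p-type transfer curve with mu = 0.004 cm^2/Vs and Vth = -40 V:
Vg = np.linspace(-150.0, -60.0, 10)
Ids = (W * 0.004 * Ci / (2.0 * L)) * (Vg - (-40.0))**2
print(saturation_mobility(Vg, Ids))   # recovers (0.004, -40.0)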
One Health Surveillance of Antimicrobial Resistance Phenotypes in Selected Communities in Thailand Integrated surveillance of antimicrobial resistance (AMR) using the One Health approach that includes humans, animals, food, and the environment has been recommended by the responsible international organizations. The objective of this study was to determine the prevalence of AMR phenotypes in Escherichia coli and Klebsiella species isolated from humans, pigs, chickens, and wild rodents in five communities in northern Thailand. Rectal swabs from 269 pigs and 318 chickens; intestinal contents of 196 wild rodents; and stool samples from 69 pig farmers, 155 chicken farmers, and 61 non-farmers were cultured for E. coli and Klebsiella species, which were then tested for resistance to ceftriaxone, colistin, and meropenem. The prevalence of ceftriaxone-resistant E. coli and Klebsiella species in pigs, chickens, rodents, pig farmers, chicken farmers, and non-farmers was 64.3%, 12.9%, 4.1%, 55.1%, 38.7%, and 36.1%, respectively. Colistin resistance in pigs, chickens, rodents, pig farmers, chicken farmers, and non-farmers was 41.3%, 9.8%, 4.6%, 34.8%, 31.6%, and 24.6%, respectively. Meropenem resistance was not detected. The observed high prevalence of AMR, especially colistin resistance, in the study food animals and humans is worrisome. Further studies to identify factors that contribute to AMR, strengthened enforcement of existing regulations on antimicrobial use, and more appropriate interventions to minimize AMR in communities are urgently needed.
Introduction
Antimicrobial resistance (AMR) is a major evolving global health problem that is associated with high morbidity, high mortality, and substantial economic loss [1,2]. Several studies of the AMR burden in humans conducted in Thailand reported enormous AMR-related health and economic burdens [3][4][5][6]. The World Health Organization (WHO), in collaboration with the World Organisation for Animal Health (OIE) and the Food and Agriculture Organization of the United Nations (FAO), endorsed and launched a global action plan to combat AMR in 2015 [7]. The foundational premise of this action plan was that AMR affects sectors beyond human health, including animal health, agriculture, food security, and economic development. In response, a strategy called One Health was developed that includes all affected sectors and disciplines and that aims to reduce the prevalence of AMR via an integrated and unified approach among stakeholders, with the ultimate aim of sustainably balancing and optimizing the health of people, food animals, and ecosystems. One of the five strategic objectives of this action plan is to strengthen the knowledge and evidence base via surveillance and research. Particularly important gaps in knowledge that need to be filled include information on the incidence, the prevalence across pathogens, and the geographical patterns of AMR; understanding how resistance develops and spreads; understanding how resistance circulates within and between humans and food animals, and through food, water, and the environment; the ability to rapidly characterize newly emerged resistance in microorganisms and elucidate the underlying mechanisms; and understanding the social science and behavior of antibiotic use in all sectors responsible for any aspect of antibiotic use.
Integrated surveillance of AMR using the One Health approach, which includes humans, animals, food, and the environment, has been recommended by the WHO, OIE, and FAO [8]. By way of example, one target of AMR surveillance is the monitoring of the prevalence of extended-spectrum beta-lactamase (ESBL)-producing Escherichia coli across the human, food animal, and environmental sectors. Importantly, the AMR global action plan allows countries to modify their integrated AMR surveillance to include other cross-cutting pathogens and other resistance mechanisms, expand implementation of the program to different cities and provinces in the country to obtain additional information/evidence regarding the spread of AMR in different sectors, and facilitate the implementation of holistic interventions to contain AMR. The aim of this study was to determine the prevalence of AMR phenotypes for common or important antimicrobial agents in E. coli and Klebsiella species isolated from humans, food animals, and wild rodents living in, raised in, or harvested from the same community among five selected study communities located in a province in northern Thailand.
Materials and Methods
The protocol for this study was approved by the Ethics Committee of the Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand, for the human study (COA no. MUTM-2018-035-01) and by the Scientific Research Committee of Kasetsart University, Bangkok, Thailand, for the animal study (COA no. ACKU 62-VTN-010).
Study Site and Duration The study was conducted during 2018 and 2019 in five districts of a province located in the northern region of Thailand.
Animals The pigs and chickens included in the study were raised on 1 of 127 privately owned farms (77 chicken farms and 50 pig farms) whose owners agreed to participate in the study. The largest pig farm included had 552 pigs and the largest chicken farm had 950 chickens. The researchers randomly selected 4 to 10 adult pigs per pig farm and 3 to 10 adult chickens per chicken farm for a total of at least 260 pigs and 280 chickens from all pig farms and all chicken farms, respectively. The study protocol estimated that at least 100 wild rodents living on or closely around the study farms would be trapped, sacrificed, and analyzed in this study.
Humans The included pig farmers and chicken farmers were aged 16-70 years and raised chickens and/or pigs full-time or part-time. These farmers worked at the pig or chicken farms that were included in our study, and they resided in 1 of the 5 study communities. This study randomly selected 1 to 2 pig farmers per pig farm and 1 to 2 chicken farmers per chicken farm for an estimated total of at least 60 enrolled pig farmers and 80 enrolled chicken farmers. At least 60 non-farmers aged 16-70 years who had no contact with farm animals but who lived in the same study communities as the farmers were also included in the study. Written informed consent to participate in the study and to have their stool samples collected was obtained from all human subjects.
Collection of the Study Samples A stool sample from each pig and chicken was collected via rectal swab, and the swab was maintained in a Cary-Blair transport medium tube (BOENMED® Boen Healthcare Co., Ltd.; Suzhou, China). All trapped wild rodents were sacrificed, after which the intestinal content of each wild rodent was swabbed and the intestinal content swab was put into a Cary-Blair transport medium tube.
Stool samples collected from farmers and non-farmers were stored in small plastic containers without preservatives. Rectal swab samples of pigs and chickens and intestinal content swab samples of wild rodents were kept at room temperature, whereas stool samples of humans were kept in a box containing ice. The collected samples were transported to the microbiology laboratory of the Division of Infectious Diseases and Tropical Medicine of the Department of Medicine, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand, within 3 days of collection.
Microbiological Study of the Collected Samples The target bacteria in this study were E. coli and Klebsiella species. Each rectal swab collected from pigs and chickens, and each intestinal content swab collected from wild rodents, was inoculated in 5 mL of tryptic soy broth (TSB) (Sigma-Aldrich Corporation, St. Louis, MO, USA) and incubated at 35 °C for 16-24 h or overnight. After incubation, 100 µL of each TSB culture was inoculated onto a MacConkey agar plate supplemented with ceftriaxone (2 µg/mL) and onto a MacConkey agar plate supplemented with colistin (1 µg/mL) to detect antibiotic-resistant Gram-negative bacteria. Each stool sample collected from humans was taken in the amount of an inoculation loop (approximately 10 µL) and streaked onto the aforementioned antibiotic-supplemented agar plates. Suspected colonies of Enterobacterales (lactose fermenters or pink colonies) grown on these agar plates were subjected to manual biochemical tests, including the triple sugar iron test, lysine iron agar slant test, indole test, motility test, ornithine decarboxylase test, urease test, and malonate test, which were locally prepared in the laboratory using purchased reagents/materials (Oxoid Ltd.; Hampshire, UK or BBL/Difco Diagnostic, Becton Dickinson; Sparks, MD, USA), and the oxidase test (BBL/Difco Diagnostic, Becton Dickinson; Sparks, MD, USA), to identify E. coli and Klebsiella species.
Antimicrobial Susceptibility Test of E. coli and Klebsiella Species Isolates The minimum inhibitory concentration (MIC) of ceftriaxone, colistin, and meropenem against the E. coli and Klebsiella species isolates was determined by the agar dilution method according to the Clinical and Laboratory Standards Institute (CLSI) guidelines, and an MIC of ceftriaxone, colistin, or meropenem of ≥4 µg/mL was considered to indicate resistance to the respective drug [9]. Escherichia coli ATCC25922 was used as a control strain.
Data Analysis Descriptive statistics were used to analyze and describe the data. The data, all of which were categorical, were compared using the chi-square test. The results of those analyses are presented as numbers and percentages. SPSS Statistics version 16.0 (SPSS, Inc., Chicago, IL, USA) was used to perform all data analyses, and a p-value less than 0.05 was considered statistically significant for all tests.
Results
Among the 1068 samples that were collected from pigs (n = 269), chickens (n = 318), wild rodents (n = 196), pig farmers (n = 69), chicken farmers (n = 155), and non-farmers (n = 61), there were a total of 875 E. coli isolates (89.0%) and 108 Klebsiella species isolates (11.0%). The prevalence of ceftriaxone, colistin, and meropenem resistance in E. coli and Klebsiella species isolated from the samples collected from each of the six different sources is shown in Table 1.
The overall prevalence of ceftriaxone resistance in E. coli and Klebsiella species in all samples collected from the study animals and humans was significantly higher than that of colistin resistance (32.0% vs. 22.4%, respectively; p < 0.01). The overall prevalence of ceftriaxone-resistant and colistin-resistant E. coli and Klebsiella species isolated from the samples collected from all included animals was 28.4% and 19.3%, respectively (p < 0.01). The overall prevalence of ceftriaxone-resistant and colistin-resistant E. coli and Klebsiella species isolated from the samples collected from all included humans was 42.8% and 30.9%, respectively (p < 0.01). The prevalence of ceftriaxone resistance and colistin resistance in E. coli and Klebsiella species was highest among animals in the samples collected from pigs, and highest among humans in the samples collected from pig farmers. The prevalence of ceftriaxone resistance in E. coli and Klebsiella species was significantly higher in pigs than in chickens (64.3% vs. 12.9%, respectively; p < 0.01) and wild rodents (64.3% vs. 4.1%, respectively; p < 0.01). The prevalence of ceftriaxone resistance in E. coli and Klebsiella species was also significantly higher in chickens than in wild rodents (12.9% vs. 4.1%, respectively; p < 0.01). The prevalence of ceftriaxone resistance in E. coli and Klebsiella species was higher in pig farmers than in chicken farmers (55.1% vs. 38.7%, respectively; p = 0.03) and non-farmers (55.1% vs. 36.1%, respectively; p = 0.04); however, the prevalence of ceftriaxone resistance in E. coli and Klebsiella species in chicken farmers was not significantly higher than that in non-farmers (38.7% vs. 36.1%, respectively; p = 0.84). The prevalence of colistin resistance in E. coli and Klebsiella species was significantly higher in pigs than in chickens (41.3% vs. 9.8%, respectively; p < 0.01) and wild rodents (41.3% vs. 4.6%, respectively; p < 0.01); however, the prevalence of colistin resistance in E. coli and Klebsiella species in chickens was not significantly higher than that in wild rodents (9.8% vs. 4.6%, respectively; p = 0.05). The prevalence of colistin resistance in E. coli and Klebsiella species in pig farmers was not significantly higher than that in chicken farmers (34.8% vs. 31.6%, respectively; p = 0.75) or non-farmers (34.8% vs. 24.6%, respectively; p = 0.28). Moreover, the prevalence of colistin resistance in E. coli and Klebsiella species in chicken farmers was not significantly higher than that in non-farmers (31.6% vs. 24.6%, respectively; p = 0.39). No E. coli or Klebsiella species isolates were resistant to meropenem. The prevalence of ceftriaxone, colistin, and meropenem resistance in E. coli isolated from the samples collected from each of the six different sources is shown in Table 2. The overall prevalence of ceftriaxone resistance in E. coli in all samples collected from the study animals and humans was significantly higher than that of colistin resistance (30.5% vs. 21.7%, respectively; p < 0.01). The overall prevalence of ceftriaxone-resistant and colistin-resistant E. coli isolated from the samples collected from all included animals was 27.1% and 18.8%, respectively (p < 0.01). The overall prevalence of ceftriaxone-resistant and colistin-resistant E. coli isolated from the samples collected from all included humans was 40.0% and 29.8%, respectively (p = 0.01).
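To illustrate the chi-square comparisons used above, the Python sketch below reproduces the pigs-versus-chickens ceftriaxone comparison; the resistant counts are reconstructed approximately from the reported sample sizes and percentages (64.3% of 269 pigs ≈ 173; 12.9% of 318 chickens ≈ 41) rather than taken from the original dataset.

from scipy.stats import chi2_contingency

# Rows: pigs, chickens; columns: resistant, susceptible.
table = [
    [173, 269 - 173],
    [41, 318 - 41],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # p << 0.01, as reported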
The prevalence of ceftriaxone resistance and colistin resistance in E. coli was highest among animals in the samples collected from pigs, and highest among humans in the samples collected from pig farmers. The prevalence of ceftriaxone resistance in E. coli was significantly higher in pigs than in chickens (62.5% vs. 11.3%, respectively; p < 0.01) and wild rodents (62.5% vs. 4.1%, respectively; p < 0.01). The prevalence of ceftriaxone resistance in E. coli was also significantly higher in chickens than in wild rodents (11.3% vs. 4.1%, respectively; p < 0.01). The prevalence of ceftriaxone resistance in E. coli was higher in pig farmers than in chicken farmers (53.6% vs. 37.4%, respectively; p = 0.03) and non-farmers (53.6% vs. 31.2%, respectively; p = 0.02); however, the prevalence of ceftriaxone resistance in E. coli in chicken farmers was not significantly higher than that in non-farmers (37.4% vs. 31.2%, respectively; p = 0.48). The prevalence of colistin resistance in E. coli was significantly higher in pigs than in chickens (40.5% vs. 9.1%, respectively; p < 0.01) and wild rodents (40.5% vs. 4.6%, respectively; p < 0.01); however, the prevalence of colistin resistance in E. coli in chickens was not significantly higher than that in wild rodents (9.1% vs. 4.6%, respectively; p = 0.08). The prevalence of colistin resistance in E. coli in pig farmers was not significantly higher than that in chicken farmers (33.3% vs. 31.0%, respectively; p = 0.84) or non-farmers (33.3% vs. 23.0%, respectively; p = 0.27). Moreover, the prevalence of colistin resistance in E. coli in chicken farmers was not significantly higher than that in non-farmers (31.0% vs. 23.0%, respectively; p = 0.31). The prevalence of ceftriaxone, colistin, and meropenem resistance in Klebsiella species isolated from the samples collected from each of the six different sources is shown in Table 3. Because the numbers of ceftriaxone-resistant and colistin-resistant Klebsiella species isolates from each type of sample were much smaller than those of E. coli, comparisons of the prevalence of ceftriaxone resistance and colistin resistance in Klebsiella species would have been unreliable. Therefore, no comparison of the prevalence of ceftriaxone resistance and colistin resistance in Klebsiella species among the various sample sources was undertaken.
Table 3. Prevalence of ceftriaxone, colistin, and meropenem resistance in Klebsiella species isolated from samples collected from various animal and human sources.
Discussion
What qualifies this research as a One Health surveillance of AMR study is the fact that samples were collected from food animals (pigs and chickens), humans (pig farmers, chicken farmers, and non-farmers), and wild rodents (as representatives of, and to evaluate, the environment). This study focused on two types of bacteria (E. coli and Klebsiella species) and three targeted antibiotics (ceftriaxone, meropenem, and colistin). Escherichia coli and Klebsiella species were selected because they are the most common types of bacteria in the family Enterobacterales that colonize the gastrointestinal tract of humans and animals, and infections due to E. coli and Klebsiella species are very common community-acquired infections in humans.
Extended-spectrum cephalosporin (ceftriaxone)-resistant (or ESBL-producing) and carbapenem (meropenem)-resistant Enterobacterales were categorized by the WHO in 2017 as "critical priority" antibiotic-resistant bacteria and placed on the list of "priority pathogens" considered to pose the greatest threat to human health. Colistin resistance was also included in our study due to the emergence of a plasmid-mediated gene (mcr-1) that encodes a protein that causes Enterobacterales to become resistant to colistin [10]. This colistin-resistance mechanism was first discovered in China in 2015, and the resistance gene can easily be transmitted among bacteria in animals, humans, and the environment. A growing concern is that mcr-1-producing colistin-resistant Enterobacterales have now been reported from many countries around the world [11]. The MICs of the antibiotics targeted in this study were determined to identify the AMR phenotype, because the MIC of an antibiotic against a particular bacterium is considered a standard indicator of antibiotic susceptibility. Although surveillance of phenotypic antibiotic-resistant bacteria, especially extended-spectrum cephalosporin (ceftriaxone)-resistant or ESBL-producing E. coli isolated from healthy people, farmers, patients, food animals, pets, food, water from natural sources, wastewater, sewage, rats, flies, and cockroaches in both community and hospital settings in Thailand, has been previously reported, the samples included in those studies were usually collected from different geographic locations, from different sources, and at different times [12-26]. Therefore, the prevalence of phenotypic antibiotic-resistant bacteria from the various sources reported in those studies cannot be directly compared. In response, the current study was conducted to obtain and analyze samples collected from food animals, humans, and wild rodents residing in the same selected communities during the same period, so that the prevalence of phenotypic antibiotic-resistant bacteria from the various sources could be directly compared. Ceftriaxone-resistant E. coli and Klebsiella species were common in the pigs and chickens in this study, but the prevalence of ceftriaxone resistance was lower than the rates reported from a previous study conducted in Thailand [14]. In contrast, the prevalence of colistin resistance observed in our study was higher than the rates reported previously from Thailand [25] and Lao PDR [27]. The observed differences between and among studies may be due to differences in study location and timing. The ceftriaxone and colistin resistance in pigs and chickens observed in this study is highly likely to be associated with antimicrobial use (AMU), because empty bottles of antibiotics and unused antibiotics, including penicillin, amoxicillin, norfloxacin, enrofloxacin, tetracycline, colistin, lincomycin, gentamicin, tiamulin, sulfonamides, and ceftriaxone, were observed at many farms during sample collection.
This observation is alarming because several regulations specific to the use of antimicrobials in food animals were published by the Ministry of Agriculture and Cooperatives in Thailand some years ago. These state that no class of antimicrobials may be used to promote food animal growth, that many classes of antibacterial agents may not be used to control or prevent infection in food animals, and that the use of colistin must be limited to the treatment of infection in food animals, as an alternative regimen lasting no more than several days. Likely reasons for the observed misuse of antibiotics on these animal farms include low levels of knowledge about antimicrobial use; neutral or negative, rather than positive, attitudes regarding the appropriate use of antimicrobials; and poor practices in using appropriate antimicrobials among chicken and pig farm owners/managers/workers in several provinces in Thailand [28]. Consumption of antibiotics in food animals has been reported to be associated with the emergence of antibiotic resistance in bacteria, especially bacteria colonizing the gastrointestinal tract of food animals [24,29,30]. The more frequent antimicrobial resistance observed in pigs compared to chickens might be explained by the much larger volume of medicated feed produced by feed mills for pigs (1055 tons) compared to that produced for chickens (18 tons) in 2019 in Thailand [31]. Therefore, an in-depth study of AMU on the farms included in the present study, to investigate the association between AMU and the observed AMR, is necessary. The rate of ceftriaxone resistance in the wild rodents in this study was much lower than that found in rats trapped in open markets in a province located in the central region of Thailand [23]. The rats in open markets were reported to commonly consume food and to be in contact with sewage contaminated with antibiotic-resistant bacteria [23,26]. In contrast, the wild rodents trapped and analyzed in this study were more likely to consume foods found in rice fields and their natural environment. It is also plausible that the rodents in rural communities live in a cleaner environment that is less contaminated with antibiotic-resistant bacteria. The average rate of ceftriaxone-resistant E. coli and Klebsiella species among farmers and non-farmers (42.1%) in the present study was comparable to the rates of fecal carriage of ceftriaxone-resistant Enterobacterales reported from several studies of Thai people [12-15,22]. The observed higher prevalence of ceftriaxone-resistant E. coli and Klebsiella species among farmers compared to non-farmers in this study was concordant with findings from previous studies, since being a farmer has been found to be a risk factor for fecal carriage of ceftriaxone-resistant Enterobacterales [14,22]. Non-farmers might carry ceftriaxone-resistant E. coli and Klebsiella species in their gastrointestinal tract as a result of taking antibiotics, of consuming foods contaminated with antibiotic-resistant bacteria and/or antibiotic residues, or of exposure to environments contaminated with antibiotic-resistant bacteria, since contamination of many fresh foods and selected community environments in Thailand with ceftriaxone-resistant Enterobacterales has been found to be common [23,26].
Farmers might additionally acquire ceftriaxone-resistant E. coli and Klebsiella species in their gastrointestinal tract from food animals, on top of the aforementioned possible sources in non-farmers. The average rate of colistin-resistant E. coli and Klebsiella species among farmers and non-farmers (30.9%) observed in this study was higher than that reported from a previous study conducted in Thailand [14]. The farmers in the current study likely acquired colistin-resistant bacteria from their food animals, because many pigs and some chickens at our study farms also carried colistin-resistant E. coli and Klebsiella species. However, the reasons for the presence of colistin-resistant E. coli and Klebsiella species in non-farmers in these communities remain unclear. Although 48% of hospitalized patients who received parenteral colistin developed gastrointestinal colonization with colistin-resistant E. coli and Klebsiella species, colistin for systemic use has not been available in these communities for a decade, and the contamination rate of colistin-resistant Enterobacterales in fresh foods and selected community environments was found to be extremely low [23,26]. The phenotypes of the ceftriaxone- and colistin-resistant bacteria isolated from animals and humans reported in this study cannot be used to conclude that the same phenotypes of particular antibiotic-resistant bacteria isolated from animals and humans are linked. Therefore, the genotypes of the ceftriaxone- and colistin-resistant bacteria isolated from animals and humans in these study communities need to be analyzed in molecular studies, such as whole-genome sequencing or polymerase chain reaction, to identify the mechanisms of resistance to antibiotics and to determine the magnitude of the linkage or similarity of antibiotic-resistant bacteria isolated from different sources, since evidence has been reported that whole bacteria and mobile genetic elements can be transferred from food animals to humans [32]. However, the facilities to perform such molecular studies are not available for routine surveillance of AMR in these communities. Further studies on antibiotic-resistant bacteria and antibiotic residue contamination in foods and the environment in these study communities should also be conducted. It is encouraging that no meropenem resistance was detected among the strains of E. coli and Klebsiella species isolated from animals and humans in these study communities. Meropenem is a broad-spectrum parenteral antibiotic in the carbapenem group of drugs, which is used to treat infections caused by multi-drug resistant organisms, such as Gram-negative bacteria, in hospitalized patients. Carbapenem-resistant Gram-negative bacteria are usually isolated from hospitalized patients and from the hospital environment, and their prevalence has been increasing over the past decade in hospital settings in Thailand [17]. Carbapenem-resistant Gram-negative bacterial infections are very difficult to treat, and the mortality rate is high. Since oral carbapenems are not available in Thailand, carbapenems are not used in humans or food animals in communities. This lack of availability and use of carbapenems in communities explains the absence of meropenem resistance in all isolates of E. coli and Klebsiella species grown from the samples collected in community settings in this study.
The observed high prevalence of AMR from this One Health surveillance of AMR phenotypes, especially colistin resistance, in food animals and humans in these study communities is worrisome; however, these data can now be used as baseline data for AMR in these study communities. These findings emphasize the urgent need for continued studies to determine the factors that contribute to colistin resistance, for reinforcement of existing regulations specific to antimicrobial use, and for more appropriate interventions, such as improving understanding of antimicrobial use among farm personnel, to reduce colistin resistance in the community. Moreover, repeated One Health surveillance of AMR phenotypes should be performed periodically after the aforementioned measures are implemented, to evaluate their effectiveness in decreasing AMR phenotypes in these study communities.
Funding: This study was supported by grants from the Thailand Center of Excellence for Life Sciences (TCELS; grant no. TC25/61) and the National Research Council of Thailand (NRCT; grant no. NRCT 2565) to VT and DS. The funders had no involvement in the design or execution of the study, the analysis of the study results, the preparation of the manuscript, or the decision to publish the study results.
Institutional Review Board Statement: The protocol for this study was approved by the Ethics Committee of the Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand, for the human study (COA no. MUTM-2018-035-01) and by the Scientific Research Committee of Kasetsart University, Bangkok, Thailand, for the animal study (COA no. ACKU 62-VTN-010).
Informed Consent Statement: Written informed consent to participate in the study and to have stool samples collected was obtained from all participating human subjects.
Data Availability Statement: The study dataset used in this study is available from the corresponding author upon reasonable request.
Implementation and Validation of Constrained Density Functional Theory Forces in the CP2K Package Constrained density functional theory (CDFT) is a powerful tool for the prediction of electron transfer parameters in condensed phase simulations at a reasonable computational cost. In this work we present an extension to CDFT in the popular mixed Gaussian/plane wave electronic structure package CP2K, implementing the additional force terms arising from a constraint based on Hirshfeld charge partitioning. This improves upon the existing Becke partitioning scheme, which is prone to give unphysical atomic charges. We verify this implementation for a variety of systems: electron transfer in (H2O)2+ in a vacuum, electron tunnelling between oxygen vacancy centers in solid MgO, and electron self-exchange in aqueous Ru2+-Ru3+. We find good agreement with previous plane-wave CDFT results for the same systems, but at a significantly lower computational cost, and we discuss the general reliability of condensed phase CDFT calculations.
Figure: Verification of analytical forces against forces calculated from centred finite differences of the total energy for a helium dimer, both with (right) and without (left) periodic boundary conditions. For the system under periodic boundary conditions, the helium dimer interacts at the same distance, but with its periodic image, and therefore the resultant force is the same. An isosurface of the weight function is shown on the bottom left of each figure.
CDFT geometry optimisation: MgO For completeness, we provide all the electronic couplings and reorganisation energies tabulated below. Values calculated using Becke charge partitioning are also included.
CDFT geometry optimisation: 2D pyrene COF The usual approach to calculating the reorganisation energy of organic molecules is a gas phase calculation; however, CDFT provides the ability to constrain the excess charge to a single unit in a periodic crystal and therefore to account for the full outer-sphere reorganisation energy. We show an example of hole transfer in a 2D pyrene-based covalent organic framework (COF), where the subsequent geometry optimisation diverges and leads to an increase in energy of 54 eV. Several different constraint definitions have been tested: the charge difference between the two units shown below, the charge difference between one unit and the rest of the system, and an absolute charge constraint over one unit. These, in addition to constraints defined including or excluding the acetylene linkers, all lead to a large IASD and a divergent geometry optimisation. This can be attributed to the constraint being too strong: the polaron in these materials is band-like [3], and attempting to constrain the excess charge to a single unit is an inappropriate choice of constraint, because the underlying functional is not able to correctly describe the resulting electronic state.
Figure 3: CDFT weight function defined as the charge difference between two units in a 3x3 2D pyrene COF (left), increase in energy per atom as a function of geometry optimisation step (middle), and increase in Integrated Absolute Spin Density (IASD) as a function of geometry optimisation step (right). The large initial IASD of 1.46 and its subsequent increase are an indication of fractional charge transfer, which generally occurs when the DFT functional is unable to appropriately describe the charge-localised state.
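For reference, the IASD quoted in these captions is the real-space integral of the absolute spin density, IASD = ∫|ρ_alpha(r) − ρ_beta(r)| dr; the minimal Python sketch below evaluates it on a grid, with a toy 1D Gaussian standing in for cube-file data (a well-localised one-electron spin density gives IASD ≈ 1, while substantially larger values signal the spin spillover discussed above).

import numpy as np

def iasd(rho_alpha, rho_beta, voxel_volume):
    # Integrated Absolute Spin Density on a real-space grid.
    return np.sum(np.abs(rho_alpha - rho_beta)) * voxel_volume

# Toy 1D example: one unpaired electron as a normalised Gaussian.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
rho_a = np.exp(-x**2) / np.sqrt(np.pi)   # integrates to 1
rho_b = np.zeros_like(rho_a)
print(iasd(rho_a, rho_b, dx))            # ~1.0 for a localised doublet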
CDFT geometry optimisation: pentacene crystal
As for the pyrene COF, it would be useful to calculate the reorganisation energy for a pentacene crystal accounting for the full outer-sphere reorganisation energy. Different constraints have been tested, constraining either the absolute charge or the charge difference between two pentacene molecules. In all cases, the IASD is large and the geometry optimisation diverges, with an increase in energy of 30 eV after 13 geometry optimisation steps.

Figure 4: CDFT weight function defined as the charge difference between two pentacene molecules in a 3×2×1 pentacene crystal (left), increase in energy per atom as a function of geometry optimisation step (middle) and increase in integrated absolute spin density as a function of geometry optimisation step (right). The large initial IASD of 1.55 and its subsequent increase is an indication of fractional charge transfer, which generally occurs when the DFT functional is unable to appropriately describe the charge-localised state. Note that the repetition of the CDFT weight function (left) above and below the pentacene molecules is a visualisation artefact due to the non-cubic unit cell.

CDFT geometry optimisation: pentacene in vacuum
Given the surprising failure of the condensed-phase CDFT geometry optimisation of pentacene, it is useful to confirm that CDFT works well for the pentacene dimer in vacuum. Here we confirm the exponential decay of the electronic coupling for both holes and electrons in a π-stacked pentacene dimer, despite their large IASD. This is consistent with the results published for the HAB11 dataset [4-6]. We further check the CDFT geometry optimisation of an excess hole for the π-stacked pentacene dimer in vacuum. For this example, while the IASD increases, the total energy of the system decreases and the CDFT geometry optimisation succeeds, with a reorganisation energy of 0.44 eV. This highlights a particular sensitivity of CDFT to condensed phase calculations and suggests that the IASD is not always a reliable indicator for the breakdown of CDFT. It is possible, however, that CDFT only works for this system by fortuitous cancellation of errors, as described in the work of Van Voorhis and co-workers [7].

Figure 5: Decrease in total energy as a function of geometry optimisation step (left) and increase in integrated absolute spin density as a function of geometry optimisation step (right). Despite the large initial IASD of 1.34 and its subsequent increase, the total energy decreases and the geometry optimisation converges.

CDFT-MD: H₂O⁺–H₂O in vacuum
An alternative version of Table 2 from the main text, with an added column providing bond lengths and angles from the CDFT-MD simulation of the water dimer in vacuum. While not statistically converged, these values agree well with the geometry-optimised results.

Implementation of Hirshfeld CDFT
An implementation of CDFT forces using Hirshfeld partitioning of the electron density is now available in CP2K version 10. Below, we present CDFT-MD energy conservation, including data for a constraint convergence of 1 × 10⁻² e for condensed phase systems, and the time taken for an average CDFT-MD step relative to an equivalent DFT-MD step as a function of the constraint convergence. An important observation is that while hybrid DFT is more expensive than GGA DFT, the additional cost of CDFT is lower at the hybrid DFT level. This is likely a result of the under-binding of excess charge at the GGA level, which makes convergence of localised charges with CDFT more challenging and therefore increases the number of SCF steps.
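To make the Hirshfeld-based constraint concrete, the sketch below evaluates a charge-difference constraint on a real-space grid for a toy two-atom system. It is a minimal illustration only, not CP2K input or source code: the atomic densities are modelled as normalized Gaussians (production implementations use tabulated free-atom densities), and all names and numbers are invented for the example.

```python
import numpy as np

def gaussian_density(points, center, alpha):
    """Normalized 3D Gaussian standing in for a free-atom density rho_A^0(r)."""
    r2 = np.sum((points - center) ** 2, axis=-1)
    return (alpha / np.pi) ** 1.5 * np.exp(-alpha * r2)

# Uniform real-space grid over a cubic box (lengths in Bohr).
n, box = 64, 12.0
axis = np.linspace(-box / 2, box / 2, n, endpoint=False)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
points = np.stack([X, Y, Z], axis=-1)
dv = (box / n) ** 3  # volume element of one grid cell

# Two "atoms" acting as donor and acceptor fragments of a model dimer.
centers = np.array([[-2.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
atomic = [gaussian_density(points, c, 0.9) for c in centers]
promolecule = atomic[0] + atomic[1] + 1e-12  # sum of free-atom densities

# Hirshfeld charge-difference weight: ~+1 around the donor, ~-1 around the acceptor.
w = (atomic[0] - atomic[1]) / promolecule

# Model electron density with 1.3 e on atom 0 and 0.7 e on atom 1,
# mimicking a partially localized excess charge.
rho = 1.3 * atomic[0] + 0.7 * atomic[1]

# Constraint value N_c = integral of w(r) * rho(r) dr. CDFT adds V * (N_c - N_target)
# to the energy and tunes the Lagrange multiplier V until N_c hits the target;
# the extra nuclear force terms come from dw/dR_A, which is smooth for
# Hirshfeld weights.
N_c = np.sum(w * rho) * dv
print(f"charge-difference constraint value: {N_c:.3f} e (target would be 0.6 e)")
```

For the well-separated fragments above, the printed value is close to 1.3 − 0.7 = 0.6 e, illustrating how the weight function measures excess-charge localization.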
Quantifying and understanding the triboelectric series of inorganic non-metallic materials

Contact-electrification is a universal effect for all existing materials, but it still lacks a quantitative materials database to systematically understand its scientific mechanisms. Using an established measurement method, this study quantifies the triboelectric charge densities of nearly 30 inorganic non-metallic materials. From the matrix of their triboelectric charge densities and band structures, it is found that the triboelectric output is strongly related to the work functions of the materials. Our study verifies that contact-electrification is an electronic quantum transition effect under ambient conditions. The basic driving force for contact-electrification is that electrons seek to fill the lowest available states once two materials are forced to reach atomically close distance so that electron transitions are possible through strongly overlapping electron wave functions. We hope that the quantified series could serve as a textbook standard and a fundamental database for scientific research, practical manufacturing, and engineering.

The contact-electrification (CE) effect is a universal phenomenon that occurs for all materials and refers to two materials becoming electrically charged after physical contact. However, CE is generally referred to as triboelectrification (TE) in conventional terms. In fact, TE is a convolution of CE and tribology: CE is a physical effect that occurs due only to the contact of two materials without rubbing against each other, while tribology refers to mechanical rubbing between materials that always involves debris and friction [1]. The key parameters for CE, the surface charge density, the polarity, and the strength of the charges, are strongly dependent on the materials [2-5]. The triboelectric series describes materials' tendency to generate triboelectric charges. The currently existing forms of the triboelectric series are mostly measured qualitatively, in the order of the polarity of charge production. Recently, a standard method [6] has been established that allows this material "gene" of triboelectric charge density (TECD) to be quantitatively measured by contacting a tested material with a liquid metal using the output of a triboelectric nanogenerator (TENG) under fixed conditions. A table has been established for over 55 different types of organic polymer films. In comparison, inorganic materials have different atomic structures and band structures from polymers; it is therefore necessary to quantify the triboelectric series for a wide range of common solid inorganic materials and to study their triboelectric series in order to establish a fundamental understanding of the underlying mechanisms. One of the oldest unresolved problems in physics is the mechanism of CE [7,8]. Many studies have analyzed the amount of generated charge, including its correlation with chemical nature [2], electrochemical reactions [9], work function [10], ion densities [11], thermionic emission [9], triboemission [12,13], charge affinity [14], surface conditions and circumstances [15], and flexoelectricity [16]. These studies focus on certain samples and quantitative data measured under various environmental conditions.
Sample differences and variance in measurement conditions can cause large errors, and mechanism studies based on a small dataset may not be reliable enough to derive a general understanding of the phenomenon. A systematic analysis based on a high-quality quantified database, acquired with a universal standard method over a large volume of samples, would provide more accurate data and facilitate a comprehensive understanding of the relationship between CE and materials' intrinsic properties. Here, we applied a standard method to quantify the triboelectric series for a wide range of inorganic non-metallic materials. Nearly 30 common inorganic materials have been measured, and the triboelectric series is listed by ranking the TECDs. By comparing the work functions of these materials, we find that the polarity of the triboelectric charges and the amount of charge transfer are closely related to their work functions. The triboelectric effect between inorganic materials and a metal is mainly caused by electronic quantum mechanical transitions between surface states, and the driving force of CE is electrons seeking to fill the lowest available states. The only required condition for CE is that the two materials are forced into atomically close distance so that electronic transitions are possible between strongly overlapping wave functions.

Results

The principles of measurement and experimental setup. Non-metallic inorganics are mostly synthesized at high temperature; they are hard materials with high surface roughness, and it is a challenge to make an accurate measurement of the TECD between solid–solid interfaces because poor contact intimacy prevents accurate atomic-scale contact. To avoid this limitation, we measured the TECD of the tested materials with liquid metal (mercury) as the contacting counterpart, as we did for organic polymer materials [6]. The basic principle for measuring the TECD relies on the mechanism of the TENG, shown in Fig. 1a-e. Details about the measurement technique and the experimental design, as well as the standard experimental conditions, have been reported previously [6]. The measurement method relies on the principle of a TENG in contact-separation mode (Fig. 1b) [3,17]. When the two materials are separated, the negative surface charges induce positive charges at the copper electrode side (Fig. 1c). When the gap reaches an appropriate distance d₁, charges fully transfer to balance the potential difference (Fig. 1d). When the tested material is pushed back into contact with the liquid mercury, the charges flow back (Fig. 1e). The TECD is derived from the amount of charge flow between the two electrodes. The tested materials were purchased from vendors or synthesized through a pressing and sintering process in our lab (Supplementary Table 1). The tested materials were carefully cleaned with isopropyl alcohol using cleanroom wipers and dried with an air gun. Then, the specimens were deposited with a layer of Ti (15 nm) and a thick layer of Cu (above 300 nm) at the back as an electrode, with a margin of 2 mm to avoid a short circuit when the sample contacts the mercury.

The measured TECD. One group of typical signals measured for mica–mercury is shown in Fig. 2. The open-circuit voltage reached up to 145.4 V (Fig. 2a). A total charge of 69.6 nC (Fig. 2b) flowed between the two electrodes. For each type of material, at least three samples were measured to minimize measurement errors.
The results were recorded after the measured value reached its saturation level, which eliminates the initial surface charges on the samples. Figure 2c shows the output of three samples of mica measured at different times; the measured values have good repeatability (Fig. 2d) and stability. The TECD refers to the transferred triboelectric charges per unit area of the CE surface. Nearly 30 kinds of common inorganic non-metallic materials were measured, and their triboelectric series is presented in Fig. 3. The quantified triboelectric series shows the materials' capabilities to obtain or release electrons during CE with the liquid metal. We introduced a normalized TECD, α, in our previous study, defined in terms of the measured TECD σ of each material. Here, we keep the same standard for these inorganic materials for reference, so that the values are comparable. The average TECD values and the normalized TECDs α of the measured materials are both listed in Table 1. The more negative the α value, the more negative charge the material will gain from mercury, and vice versa. If two materials have a large difference in α values, they will produce higher triboelectric charges when rubbed together (Supplementary Fig. 1). In contrast, the smaller the difference in α values, the fewer charges are exchanged between them. The triboelectric series is validated by cross-checking (Supplementary Figs. 2 and 3).

Mechanism of CE for inorganic non-metallic materials. The standard measurement quantifies the TECD of various materials, and the obtained values depend only on the materials. Several questions remain to be systematically investigated: why different materials have different amounts of charge transferred; why some materials become positively charged while others become negatively charged after contact and separation with the same material; and why the polarity of the charge can switch when they contact different materials. Here, we compare the TECD values with the relative work functions of the two contacting materials. In this study, all inorganic non-metallic materials were contacted with mercury. The work function of mercury is Φ_Hg = 4.475 eV [11]. The work functions of the tested materials are listed in Supplementary Table 2. The work functions of inorganic non-metallic materials are determined by the materials themselves, but can be modified by crystallographic orientation, surface termination and reconstruction, surface roughness, and so on. Therefore, some materials have a wide range of work functions in the literature. As shown in Fig. 4, as the work functions of the materials decrease, the TECD values increase from −62.66 to 61.80 μC m⁻². The work function is the minimum thermodynamic energy needed to remove an electron from a solid to a point just outside the solid surface. Our results show that electron transfer is the main origin of CE between solids and metal [18]. In addition, the polarity of the CE charges is determined by the relative work functions of the materials. When the work function of a tested material A is smaller than that of mercury, Φ_A < Φ_Hg, the tested material will be positively charged after intimate contact with mercury; when the work function of a tested material B is close to that of mercury, Φ_B ≈ Φ_Hg, material B will acquire little charge; when the work function of a tested material C is larger than that of mercury, Φ_C > Φ_Hg, the tested material will be negatively charged.
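As a compact restatement of this three-case rule, the sketch below classifies the expected charge polarity from the relative work functions. Only Φ_Hg = 4.475 eV is taken from the text; the demo work functions and the tolerance are illustrative placeholders, not values from Supplementary Table 2.

```python
PHI_HG = 4.475  # work function of mercury (eV), as given in the text [11]

def ce_polarity(phi_material, tol=0.05):
    """Predict the charge a material acquires after contact with mercury,
    following the work-function rule described above. `tol` sets how close
    the two work functions must be to count as 'aligned' (illustrative)."""
    if phi_material < PHI_HG - tol:
        return "positive"       # material donates electrons to mercury
    if phi_material > PHI_HG + tol:
        return "negative"       # material accepts electrons from mercury
    return "nearly neutral"     # work functions approximately aligned

# Illustrative placeholder work functions (eV), not values from the paper:
for name, phi in [("material A", 4.00), ("material B", 4.48), ("material C", 5.20)]:
    print(f"{name} (phi = {phi:.2f} eV) -> {ce_polarity(phi)}")
```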
The TECDs of the tested materials are strongly dependent on the work-function difference. If the two materials have a larger difference of work functions, they will have more electrons transferred. These results show that electron transfer during CE is related to the band structure and energy level distribution. The electrons flow from the side with high energy states to the side with low energy states.

[Fig. 1 caption, panels b-e: Schematic diagram of the mechanism for measuring the surface charge density. b Charges transferred between the two materials owing to the contact-electrification effect; there is no potential difference between the two materials when they are fully contacted with each other. c When the two materials are separated, the positive charges in the mercury flow into the copper side in order to keep electrostatic equilibrium. d When the gap goes beyond a specific distance L, there is no current flow between the two electrodes. e When the material is in contact with the mercury again, the positive charges flow from the copper to the mercury due to the induction of the negative charges on the surface of the inorganic material.]

The quantum mechanical transition model is proposed to explain the CE of inorganic non-metallic materials. Suppose we have a material A with a Fermi level higher than that of the metal. The disruption of the periodic-potential function results in a distribution of allowed electric energy states within the bandgap, shown schematically in Fig. 5a, along with the discrete energy states in the bulk material. When the material is brought into intimate contact with the metal, the Fermi levels must align (Fig. 5b), which causes the energy bands to bend and the surface states to shift as well. Normally, the energy states below the Fermi level of material A (E_FA) are filled with electrons and the energy states above E_FA are mostly empty if the temperature is relatively low. Therefore, the electrons at the surface states above E_FA will flow into the metal; the metal becomes negatively charged, and the originally neutral material A becomes positively charged by losing electrons. The electrons that flow from semiconductors or insulators to metals come mainly from the surface energy states. If the work functions of two materials (B and the metal) are equal, there will be little electron transfer (Fig. 5c, d) and therefore no electrification. When the Fermi level of a tested material C is lower than that of the metal, i.e., its work function is larger (Fig. 5e), the Fermi levels tend to align, the surface energy states shift down, and electrons flow in reverse from the metal to fill the empty surface states in material C until the Fermi levels are aligned (Fig. 5f). Thus, the tested material becomes negatively charged and the metal becomes positively charged. If two materials have a large difference in work functions, there are many discrete allowed surface states through which electrons can transit, and the surface is able to carry more charges after contact or friction. If the difference is small, few discrete surface states exist for electron transitions, and the surface will be less charged. The surface charge density can thus be changed by contact with different materials. For inorganic non-metallic materials, the dielectric constant is an important parameter. We have analyzed the relationship between dielectric constant and TECD.
From Gauss's theorem, ignoring the edge effect, the ideal induced short-circuit transferred charge in the inorganic material–mercury TENG process is given by [6,17]:

Q_SC = S σ_c x(t) / (d₁ε₀/ε₁ + x(t))    (1)

where ε₁ is the dielectric permittivity of the inorganic material, ε₀ is the vacuum permittivity, S is the contact area, d₁ is the thickness, x(t) is the separation distance over time t, and σ_c is the surface charge density. From Eq. (1), under the measured conditions d₁ ≪ x(t), so the term d₁ε₀/ε₁ can be ignored. Therefore, the dielectric constant will not influence the transferred charge Q_SC or the surface charge density σ_c. As expected, the relation between the TECD and the dielectric constant of these materials is shown in Fig. 4; the measured TECDs are not affected by the dielectric constant of the materials.

Discussion

A quantum mechanical transition usually describes an electron jumping from one state to another on the nanoscale, while CE between solids is a macroscopic quantum transition phenomenon. Materials have a large number of surface states to store or lose electrons, and charge transfer between two triboelectric materials follows the capacitive model, so it can reach a significantly high voltage (>100 V) [19], in contrast to the contact potential (mostly <1 V) [20]. The quantum transition model between surface energy states explains how electrons are accumulated or released at the surfaces of inorganic dielectric materials and how the surface becomes charged, while the contact potential model only explains carrier diffusion inside semiconductors [24]. Surface modification technologies, including impurities and doping elements, surface termination and reconstruction [21], surface roughness [22], and the curvature effect [23], can tune the TECD. Based on the proposed model, it is suggested that the fundamental driving force of CE is that electrons fill the lowest available energy levels if there is little barrier. When the two materials have reached atomically close distance, electron transition is possible between strongly overlapping electron wave functions [25,26]. The work functions are determined by the composition of the compounds, chemical valence state, electronegativity [15], crystallographic orientation [27], temperature [19], defects [28,29], and so on. Accordingly, calculated work functions can be used to compare materials' TE properties and to estimate their triboelectric output. In addition, the work functions can be modified to improve TE, either enhancing the triboelectric effect for energy harvesting [30-33] and sensing [34,35] or reducing electrical discharge due to CE to improve safety. In summary, we have quantitatively measured the triboelectric series of some common inorganic non-metallic materials under defined conditions. The TECD data obtained depend only on the nature of the material. This serves as a basic data source for investigating the relevant mechanism of CE, and as a textbook standard for many practical applications such as energy harvesting and self-powered sensing. The study verifies that electron transfer is the origin of CE for solids, and that CE between solids is a macroscopic quantum mechanical transition effect in which electrons transit between surface states. The driving force for CE is that electrons tend to fill the lowest available surface states. Furthermore, the TE output can be roughly estimated and compared by calculating work functions, and adjusted by modifying a material's work function through a variety of methods.
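To make the argument under Eq. (1) concrete, the short script below evaluates Q_SC for several relative permittivities. All numbers (thickness, area, separation, charge density) are hypothetical, chosen only to illustrate the limit d₁ε₀/ε₁ ≪ x(t).

```python
# Hypothetical, illustrative numbers (not measured values from the paper).
d1 = 1e-3        # sample thickness (m)
x_t = 0.10       # separation distance x(t) at full separation (m)
sigma_c = 35e-6  # surface charge density (C m^-2), order of the mica value
S = 1e-4         # contact area (m^2), assumed

# Effective dielectric thickness d1*eps0/eps1 equals d1/eps_r for a
# relative permittivity eps_r = eps1/eps0.
for eps_r in (3, 10, 100):
    q_sc = S * sigma_c * x_t / (d1 / eps_r + x_t)
    print(f"eps_r = {eps_r:>3}: Q_SC = {q_sc * 1e9:.4f} nC")

# All three results agree to within ~0.3%: once d1/eps_r << x(t), Eq. (1)
# saturates at Q_SC ~ S * sigma_c, independent of the dielectric constant.
```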
For MgSiO₃, high-purity MgO (99.5%) and SiO₂ (99.5%) powders were baked at 80 °C for 5 h to remove hygroscopic moisture and mixed in an ethanol medium by ball milling for 8 h according to the stoichiometric formula. The slurry was dried at 110 °C for 10 h, and the dried powder was calcined at 1100 °C for 3 h and then ball-milled in an ethanol medium for 8 h. After drying again, the obtained powders were granulated with polyvinyl alcohol as a binder and pressed into green disks with a diameter of 2 in. and a thickness of 1 mm under a pressure of 30 MPa. Next, the green disks were heated at 600 °C for 3 h to remove the binder and then sintered at 1400 °C for 2 h. After the obtained ceramic disks were polished on both sides, a gold electrode was sputtered on one side. Other samples, such as ZnO and TiO₂, were prepared directly by a solid-phase sintering method using commercial powders as the raw materials. Taking zinc oxide as an example, high-purity ZnO powders (99.5%) were granulated with polyvinyl alcohol as a binder and pressed into green disks with a diameter of 2 in. and a thickness of 1 mm under a pressure of 30 MPa. Next, the green disks were heated at 600 °C for 3 h to remove the binder and then sintered at 1200 °C for 1.5 h. After the obtained ceramic disks were polished on both sides, a gold electrode was sputtered on one side. Samples such as AlN, Al₂O₃, BeO, mica, float glass, borosilicate glass, PZT-5, SiC, ZrO₂, BN, clear very-high-temperature glass ceramic, and ultra-high-temperature quartz glass were purchased directly from different companies, as listed in Supplementary Table 1. The materials were washed with isopropyl alcohol, cleaned with cleanroom wipers, and dried with an air gun. Then, the materials were deposited with a layer of Ti (10 nm) and a thick layer of copper (above 300 nm), with a margin of 2 mm, by E-beam evaporator (Denton Explorer).

The measurement of TECDs. The samples were placed on the linear motor and moved up and down automatically with the help of the linear motor control program and system. For some inorganic compounds the TECDs are relatively small, and the turbulence caused by the motion of the tested samples would introduce noise because of friction between the platinum wire and the mercury. The platinum wire was therefore designed to go through the bottom of the Petri dish, fully immersed in the liquid metal and sealed with epoxy glue. In this way, there is no contact and separation between them, and the noise is minimized. The samples' surfaces were carefully adjusted to ensure precise contact between the tested material and the liquid mercury. The position and angles were adjusted by a linear motor, a high-load lab jack (Newport 281), and a two-axis tilt and rotation platform (Newport P100-P). The short-circuit charge Q_SC and open-circuit voltage V_OC of the samples were measured by a Keithley 6514 electrometer in a glove box with an ultra-pure nitrogen environment (Airgas, 99.999%). The environmental conditions were fixed at 20 ± 1 °C and 1 atm, with an additional pressure of 1-1.5 in. of H₂O and 0.43% relative humidity. In addition, samples were kept in the glove box overnight to eliminate water vapor on their surfaces.

[Fig. 5 caption, continued: ... Thus, the originally neutral dielectric A becomes positively charged on the surface due to the loss of electrons. c, d When a dielectric B is brought into contact with the metal, the Fermi levels are balanced and the surface energy states are equal; there are no quantum transitions between the two materials. e When a dielectric C contacts the metal, electrons on the surface of the metal flow into the dielectric C to seek the lowest energy levels. f The energy bands shift to align the Fermi levels. Electrons flow from the metal to dielectric C to fill the empty surface states due to the difference in energy levels (as shown in the green box). The originally neutral dielectric C becomes negatively charged on the surface by obtaining electrons.]

Data availability. The datasets generated during and/or analyzed during the current study are available from the corresponding author. The source data underlying Figs. 2a-d, 3, and 4a-b are provided as a Source Data file.
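A quick consistency check on the mica example reported above (69.6 nC of transferred charge): assuming the CE area is approximately a 2-inch-diameter disk minus the 2 mm electrode margin — an assumption, since the exact mica geometry is not restated here — the implied TECD falls in the tens of μC m⁻², the same order as the values in the quantified series.

```python
import math

# Charge transferred for the mica sample (from the measurements above).
q = 69.6e-9                    # C
# Assumed geometry: 2-in-diameter disk, minus the 2 mm electrode margin.
radius = 0.0254 - 0.002        # m
area = math.pi * radius ** 2   # m^2, ~1.7e-3
print(f"TECD ~ {q / area * 1e6:.0f} uC m^-2")  # ~40 uC m^-2
```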
Galactic cosmic radiation exposure causes multifaceted neurocognitive impairments

Technological advancements have facilitated the implementation of realistic, terrestrial-based complex 33-beam galactic cosmic radiation simulations (GCR Sim) to now probe central nervous system functionality. This work expands considerably on prior, simplified GCR simulations, yielding new insights into responses of male and female mice exposed to 40-50 cGy acute or chronic radiations relevant to deep space travel. Results of the object in updated location task suggested that exposure to acute or chronic GCR Sim induced persistent impairments in hippocampus-dependent memory formation and reconsolidation in female mice that did not manifest robustly in irradiated male mice. Interestingly, irradiated male mice, but not females, were impaired in novel object recognition, and chronically irradiated males exhibited increased aggressive behavior on the tube dominance test. Electrophysiology studies used to evaluate synaptic plasticity in the hippocampal CA1 region revealed significant reductions in long-term potentiation after each irradiation paradigm in both sexes. Interestingly, network-level disruptions did not translate to altered intrinsic electrophysiological properties of CA1 pyramidal cells, whereas acute exposures caused modest drops in excitatory synaptic signaling in males. Ultrastructural analyses of CA1 synapses found smaller postsynaptic densities in larger spines of chronically exposed mice compared to controls and acutely exposed mice. Myelination was also affected by GCR Sim, with acutely exposed mice exhibiting an increase in the percent of myelinated axons; however, the myelin sheaths on small-caliber (< 0.3 μm) and larger (> 0.5 μm) axons were thinner when compared to controls. Present findings might have been predicted based on previous studies using single and mixed beam exposures and provide further evidence that space-relevant radiation exposures disrupt critical cognitive processes and underlying neuronal network-level plasticity, albeit not to the extent that might have been previously predicted. Supplementary Information: The online version contains supplementary material available at 10.1007/s00018-022-04666-8.

Introduction

On Earth and in low Earth orbit astronauts are protected from radiation exposure by the Earth's magnetosphere. However, as we now look to carry out deep space exploration to the Moon and to Mars, significant concerns remain regarding the detrimental health effects of exposure to galactic cosmic radiation (GCR). The most alarming of these concerns may be the effects of space radiation exposure on the central nervous system (CNS) [1]. GCR is comprised primarily of protons and helium nuclei, with the addition of highly energetic heavy nuclei known as HZE particles [high (H) atomic number (Z) and energy (E)]. Current shielding cannot prevent these charged particles from penetrating the hulls of spacecraft and exposing the human body. Estimates suggest that astronauts will be exposed to approximately 13 cGy of GCR during each year of a mission, with the bulk of the exposure occurring en route to Mars [1,2]. A large body of research using rodent models supports the hypothesis that exposure to these energetic charged particles elicits impairments in learning and memory and elevates anxiety and depression [3-8].
However, one caveat associated with those studies is that they have typically evaluated the effect of a single ion or up to 6-ion sequential exposures that do not represent the full complexity of the multiple ions and energies that define the actual GCR spectrum. While simplified GCR simulations (GCR Sim) using 5-6 beams including protons, ²⁸Si, ⁴He, ¹⁶O and ⁵⁶Fe provide more realistic scenarios of the space radiation environment, they still fall short of representing the complex mixture of particles to which astronauts will be exposed during Mars missions. Until recently, technological limitations have prevented evaluation of more space-relevant combinations of radiation exposures on CNS function. However, previous work from our laboratory and from the laboratories of others identified risks to the CNS following exposure to simplified 5- or 6-beam GCR simulations [3,4,9]. To more accurately recapitulate the space radiation environment, NASA has now developed the capability to deliver a complex GCR Sim that includes 33 distinct beams, predominated by protons and ⁴He particles of various energies, interspersed with infrequent beams of different heavy ions [10]. Another shortcoming in the majority of past studies analyzing the impact of GCR exposures on the CNS was that the radiation doses were delivered at excessive total doses and dose rates. While the dose rates with which the complex GCR Sim beams are delivered in our current study are still greater than those that will be encountered during deep space travel, very low daily doses were delivered, allowing for a much more realistic overall representation of the CNS effects expected to occur due to radiation exposures during deep space travel. There has been one prior report utilizing this optimized complex GCR Sim [11] that found minimal change in exploratory and object recognition-based tasks in male mice, but did find deficits in sociability behaviors. While those studies pointed to GCR-induced cognitive decrements, mechanistic studies were not conducted and the radiation countermeasures used did not affect outcomes. The goal of our current study was to use the same realistic 33-beam GCR Sim exposure model as that used by Kiffer and colleagues [11] at a more space-relevant total dose, delivered either chronically or acutely, to gain an improved understanding of the CNS impairments that astronauts may face during deep space exploration. Furthermore, we conducted an extensive longitudinal behavioral battery with both male and female mice, with follow-up electrophysiology and structural determinants of radiation injury in male mice to garner deeper mechanistic insight into space radiation injury to the brain. Importantly, these studies were not designed to evaluate dose response or dose rate effects, nor differences between the acute and chronic GCR Sim paradigms, but rather to elucidate whether these complex mixtures of ions and ion energies could disrupt critical cognitive processes and underlying neuronal network plasticity. However, these impairments may be less severe than those observed in previous acute space radiation studies of the CNS, suggesting that the hazards of deep space radiation exposures to astronaut CNS capabilities may not be as detrimental as might have been predicted by studies evaluating more simplified irradiation paradigms.
Nonetheless, clear evidence of cognitive deficits arising in both male and female animals, regardless of the time course of GCR Sim exposure, still implicates certain CNS risks associated with the space radiation environment. The focus of these studies was to elucidate whether complex mixtures of particles and particle energies could disrupt critical cognitive processes and underlying neuronal network plasticity, and whether those disruptions were more or less severe than those observed in previous space radiation studies of the CNS. Present findings provide a more realistic context for establishing whether the hazards of deep space radiation exposures represent a threat to astronaut performance during a Mars mission.

Animals and irradiations

All animal experimentation procedures described in this study are in accordance with the guidelines provided by the NIH, were approved by all Institutional Animal Care and Use Committees (IACUC), and were performed within institutional guidelines. Single cohorts of 178 wild-type male and 91 female mice (C57BL/6J, JAX, Bar Harbor, ME) were acclimatized and aged in the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (Upton, NY) for a minimum of 2 months prior to initiation of the study. The mice were group-housed under standard conditions (20 °C ± 1 °C; 70% ± 10% humidity; 12 h:12 h light and dark cycle) and provided ad libitum access to food and water. Mice were irradiated using the NASA-developed 33-beam GCR simulation (GCR Sim) protocol [10], starting at 6 months of age, during the NSRL experimental cycle 19B. Male and female mice were each randomly divided into 3 experimental groups: sham-irradiated controls, and acutely or chronically irradiated using the 33-beam GCR Sim protocol (59-60 male mice and 30-31 female mice for each irradiation paradigm). The 33 charged-particle species were delivered in rapid succession to simulate the spectrum of radiations experienced during a deep space mission while inside a spacecraft [2,12,13] and were delivered with the order, energies and doses described by Simonsen and colleagues [10]. The NSRL physics staff performed all radiation dosimetry and confirmed spatial beam uniformity. Chronically irradiated animals received a GCR Sim dose of 2.08 cGy/day, 6 days a week for 4 weeks, for a total of 24 irradiation days and a total accumulated dose of ~50 cGy. Acutely irradiated animals received a single total GCR Sim dose of ~40 cGy over a duration of ~2 h on the same day that the chronically irradiated mice received their final exposure. Further details of the animal irradiations, numbers of mice per treatment and sex, and experimental flow are given in Fig. 1.

Cognitive testing

To determine the effects of chronic and acute GCR on cognitive function, mice were subjected to a range of behavioral tests. Concurrent behavioral testing of control and irradiated mice occurred across 3 months, beginning 2 months after the conclusion of GCR exposures. Data analysis was conducted independently and blindly and is presented as the average of all trials scored for each task. All behavioral testing was conducted following previously published and carefully controlled protocols (SI; 14).

Extracellular field recordings and whole cell electrophysiology

Hippocampal slices for extracellular field recording and whole cell electrophysiology experiments were prepared as previously described and are detailed in the SI [3,15].
Extracellular field recordings were performed for both male and female mice at 5 months after the completion of irradiations. Based on the labor-intensive nature of whole cell electrophysiology, only male mice were evaluated 2-4 months after the completion of irradiation, using a separate group of mice.

Quantitative analysis of synapses and myelin

Brains from male mice were dissected into the CA1 region of the hippocampus, the medial prefrontal cortex (mPFC) and the corpus callosum at 5 months after the completion of irradiations. Brain regions were sectioned and prepared for electron microscopy (EM) experiments as previously described and detailed in the SI [16,17]. As for the whole cell electrophysiology, only male mice were evaluated by EM, as informed by the more robust effect of GCR Sim exposure on extracellular field recordings in male mice and given the equally labor-intensive nature of structural EM analyses.

Fig. 1 Study design. Single cohorts of 178 male and 91 female C57BL/6J mice were randomly divided into 3 experimental groups: sham-irradiated controls, and acutely or chronically irradiated using the 33-beam GCR Sim protocol (59-60 male mice and 30-31 female mice for each irradiation paradigm). Chronically irradiated animals received a GCR Sim dose of 2.08 cGy/day, 6 days/week for 4 weeks (24 irradiation days; total accumulated dose ~50 cGy). Acutely irradiated animals received a single total GCR Sim dose on the same day that the chronically irradiated mice received their final exposure (~40 cGy over ~2 h). Within 5 days post-irradiation, mice were shipped to their respective institutions (i.e., Harvard, UC Irvine, Stanford). Animals were acclimated at least 2 months prior to behavioral, electrophysiological or structural analyses. (IRR, irradiation; OFT, open-field testing; EM, electron microscopy; USUHS, Uniformed Services University of Health Sciences; LTP, long-term potentiation; OUL, object in updated location; NOR, novel object recognition; LDB, light-dark box; FE, fear extinction)

Statistical analyses

Statistical analyses for behavioral, extracellular field recording and EM experiments were carried out using GraphPad Prism (v8) software. For the OUL, NOR, LDB and SIT assays, following confirmation of a normal Gaussian distribution of the behavior data, one-way ANOVAs were used to assess significance between control and irradiated groups, and when overall group effects were found to be statistically significant, a Bonferroni's post hoc test was used to compare the chronic and acute GCR groups against the control group. For these behavior tests, an outlier was defined as a mouse whose behavior was outside of 2 standard deviations of the mean and was excluded from the analysis. Unless stated otherwise, behavior data were expressed as mean ± SEM and all analyses considered a value of P < 0.05 to be statistically significant. For tube dominance, an unpaired Student's t-test was conducted. Extracellular field recording measurements were analyzed using one-way ANOVA followed by a Dunnett's multi-comparison test or two-way ANOVA followed by a Bonferroni's post hoc test. For synapse characterization and percent myelinated axons, a one-way ANOVA was performed followed by Bonferroni's multiple comparison test. To account for the nested data produced by whole cell electrophysiology experiments, differences between treatment groups were evaluated using a linear mixed-effect model (LMM) regression analysis approach [18].
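The nested design just described (multiple cells recorded per animal) is the reason a plain ANOVA would overstate the effective sample size. As a schematic illustration only — the study itself fit LMMs in R with lme4, as detailed next — the following hypothetical Python/statsmodels sketch fits a random-intercept model to synthetic data; all effect sizes, counts and variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
group_means = {"control": 5.0, "chronic": 4.9, "acute": 4.2}  # invented effects
for treatment, mean in group_means.items():
    for animal in range(5):                   # animals per group (invented)
        animal_offset = rng.normal(0.0, 0.5)  # per-animal random intercept
        for cell in range(3):                 # cells nested within each animal
            rows.append({
                "treatment": treatment,
                "animal": f"{treatment}_{animal}",
                "sepsc_freq": mean + animal_offset + rng.normal(0.0, 0.7),
            })
df = pd.DataFrame(rows)

# A random intercept per animal captures the nesting of cells within mice,
# so the treatment fixed effect is tested against between-animal variability.
model = smf.mixedlm("sepsc_freq ~ treatment", df, groups=df["animal"])
print(model.fit().summary())
```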
LMMs were fit in R using the lme4 package [19], where outcome measures were analyzed against treatment fixed effects and a random effect combining the nested variation from multiple cell recordings per animal. A Satterthwaite-based F-test performed with the pbkrtest package [20] was used to evaluate the main effect of treatment against a null LMM fit lacking the treatment term, followed by Tukey's HSD post hoc testing. The Satterthwaite method provides effective degrees of freedom accounting for variances within the LMM fit. Calculation of estimation statistics-based confidence intervals was performed with the DABEST package in Python [21]. Cumming estimation plots include a 5000-resample, bias-corrected and accelerated bootstrap analysis to determine the nonparametric confidence interval of differences between groups. We quantified effect sizes with an unbiased Cohen's d test. ANOVA measures were used for AP frequency measurements that spanned many intervals. Statistical analysis was performed in Python or R. Unless stated otherwise, results were expressed as mean ± SEM and all analyses considered a value of P < 0.05 to be statistically significant. To account for the nested data produced by g-ratio quantification, differences between treatment groups were evaluated using a linear mixed-effect model (LMM) regression analysis approach. LMMs were fit in R 4.1.2 [22] using the lme4 [19] and lmerTest [23] packages, where outcome measures were analyzed against treatment fixed effects and a random effect for animal ID, representing the nested variation from multiple synapse or axon measurements per animal. Significant interaction effects were decomposed by comparison of estimated marginal means with the emmeans package in R [24]. Results were expressed as mean ± SEM and all analyses considered a value of P < 0.05 to be statistically significant.

GCR Sim exposure results in sex-specific impairments in cognitive domains and anxiety paradigms

To evaluate the effects of complex 33-beam GCR Sim exposures delivered across either acute or chronic time courses, we employed our extensive behavioral testing platform beginning 2 months after the completion of irradiations, with testing completed at 5 months post-irradiation. Both male and female mice were used to determine whether these distinct exposure paradigms result in sex-specific impairments in cognition and anxiety-like behavior. To this end, we utilized behavioral tasks with a specific emphasis on the hippocampus-medial prefrontal cortex (mPFC) neural circuits. Collectively, our data show that chronic and acute GCR exposures differentially affect female and male mice on particular behavioral tests, including spontaneous exploration, anxiety assays and social interactions. Further, electrophysiological recordings using male mice identify cellular-level alterations in hippocampal neuron function and associated disruptions in network-level synaptic plasticity. The object in updated location (OUL) task is a memory-updating paradigm that assesses both the original memory and the updated information in a single test session [25]. Further, the OUL task uses incidental learning that takes advantage of the innate preference of rodents for novelty. Following initial habituation to the arena, mice learned the locations of 2 identical objects in a familiar context (Fig. 2A; training session, days 1-3).
The following day, during the update session (day 4), all animals were exposed to one familiar fixed object location (A1) and one identical object moved to a new updated location (A3). Control animals were shown to have successfully acquired the original object location memory (OLM) during the update session, recognizing the A3 location as the novelty, as did the acutely irradiated male mice (Fig. 2B). However, acutely and chronically irradiated females (F(2,38) = 4.48; P = 0.018) and the chronically irradiated males exhibited a lack of preference for the object in the novel updated A3 location (F(2,41) = 9.001; P = 0.0006). The day after the update session, all groups were given a test session (day 5) (Fig. 2C). At test, memory for the updated information was examined via comparison of exploration of the object in the novel location (A4) to exploration of the fixed location (A1), the initial location (A2) and the updated location (A3). As cognitively intact mice prefer novelty, intact memory for the original training session or the updated information is demonstrated by preferential exploration of the object in the novel location (A4) compared to each of the other objects, as indicated by a higher score on the discrimination index (DI; see methods). Female GCR Sim-exposed mice retained a similar preference as control animals for the object in the novel A4 location relative to the fixed A1 location object (Fig. 2C, top left; F(2,43) = 1.53; P = 0.23). However, both chronically and acutely irradiated females exhibited impaired pattern separation and increased memory interference relative to control mice, being unable to demonstrate an appreciation for the novelty of the A4 location object as compared to the object that had been in the A2 location during initial training (Fig. 2C, middle left; F(2,43) = 6.36; P = 0.0038). Also compared to the control animals, the chronically irradiated female mice were more impaired in differentiating the A3 location object from the update session relative to the novel A4 location (Fig. 2C, bottom left; F(2,42) = 3.077; P = 0.057). Even though the chronically irradiated male mice showed impairment in the update session (day 4; Fig. 2B), during the test session (day 5; Fig. 2C, right) all male mice were able to similarly recognize the novelty of the object in location A4 relative to the objects in the fixed A1 (F(2,43) = 0.45; P = 0.64) and initial A2 (F(2,42) = 0.76; P = 0.47) locations, as well as the updated object in location A3 (F(2,43) = 1.20; P = 0.31). These observations suggest that in our study GCR exposure induces impairments in hippocampal memory and pattern separation in female mice that do not ultimately manifest in the irradiated male mice.

Fig. 2 Exposure to simulated GCR elicits impairments in memory formation and updating. A Experimental design. All objects were identical aside from location. B Female mice exposed to chronic or acute GCR Sim exhibited significantly lower discrimination indices (DI) relative to controls during the update session, demonstrating no preference for the object in the updated location (A3) as compared to the fixed location object (A1). Only chronically irradiated male mice were impaired in update session performance. C During the test session, female and male mice exposed to either GCR Sim paradigm retained the memory of the fixed location (A1) relative to the novel location (A4), exhibiting DI scores similar to control animals (top panels). The irradiated male mice also retained the memory of the initial location (A2) relative to the novel location (A4), but both chronically and acutely irradiated females showed significantly lower DIs relative to control animals (middle panels). Similarly, irradiated male mice retained the memory of the updated location (A3) relative to the novel location (A4), but the chronically irradiated female mice showed significantly lower DIs relative to control animals. Data are mean ± SEM (N = 13-16 per group); P values derived from one-way ANOVA followed by a Bonferroni's multiple comparison test. *P < 0.05, **P < 0.01

Following the OUL task, animals were tested sequentially on novel object recognition (NOR), the light-dark box (LDB) test and the social interaction test (SIT). The NOR task depends on both the hippocampus and perirhinal cortex to test the animal's ability to discriminate novelty [26]. In the NOR task, female mice exposed to acute GCR exhibited a trend toward a significantly impaired ability to discriminate the novel object compared to controls (Fig. 3A, left; one-way ANOVA: F(2,43) = 2.67; P = 0.081), while chronic GCR females performed similarly to controls (P = 0.46). However, male mice exposed to either chronic or acute GCR Sim exhibited significantly diminished novel object discrimination as compared to controls (Fig. 3A, right; one-way ANOVA: F(2,38) = 4.67; P = 0.015). Radiation exposures have also been found to alter mood [27,28], with low doses of acute charged-particle radiation causing increased anxiety-like behavior in mice [3,7]. To investigate whether chronic or acute exposure to GCR Sim also triggers anxiety-like behavior, mice were administered the LDB test, which is based on the tendency of anxious rodents to more actively avoid open, brightly lit areas and to exhibit reluctance to explore open environments. Such anxiety-like behaviors can manifest as reduced numbers of transitions between the dark and light compartments of the LDB testing arena or, alternatively, as frantic darting behavior between the 2 compartments [29]. During LDB testing, irradiated female and male mice spent similar percentages of the test time in the light compartment as their respective controls (Fig. 3B, left; female: one-way ANOVA: F(2,42) = 1.81; P = 0.18; male: one-way ANOVA: F(2,44) = 2.16; P = 0.13). Female GCR Sim-exposed mice also performed similarly to controls with regard to the number of light-dark compartment transitions made (Fig. 3B, right; one-way ANOVA: F(2,44) = 2.06; P = 0.14). However, male mice exposed to chronic GCR Sim, but not acute GCR Sim, transitioned more frequently between the light and dark compartments compared to controls (Fig. 3B; one-way ANOVA: F(2,44) = 5.09; P = 0.01), which could suggest increased anxiety-like behavior, although the lack of significant differences among groups for time spent in each chamber (Fig. 3B) and between male control and chronic GCR Sim-exposed mice on the OFT suggests otherwise (Supp. Figure 1B). Next, we performed the SIT to examine social interaction behaviors that depend on brain structures including the hippocampal and mPFC circuits [30]. Within a barrier-free arena, each group-housed experimental mouse was allowed to interact with a novel mouse. The total time the experimental mouse spent interacting with the novel mouse, or actively avoiding social interactions initiated by the novel mouse, was recorded following established protocols [31]. Neither chronic nor acute GCR-exposed female mice showed impairments in the total amount of time spent in social interactions compared to control animals (Fig. 3C; one-way ANOVA: F(2,44) = 2.41; P = 0.10). However, the female mice exposed to chronic GCR Sim, but not acute GCR Sim, spent significantly more time actively avoiding a novel mouse during the 10 min trials compared with unirradiated control mice (Fig. 3C; one-way ANOVA: F(2,44) = 3.21; P = 0.050). Conversely, male mice exposed to acute GCR Sim, but not chronic GCR Sim, spent significantly less time interacting with a novel mouse compared to controls (Fig. 3C; one-way ANOVA: F(2,44) = 7.21; P = 0.0020). No group differences were observed when comparing active avoidance behavior among the irradiated male cohorts (Fig. 3C; one-way ANOVA: F(2,44) = 1.67; P = 0.20). These data suggest that while exposure to GCR Sim alters some aspects of social interaction behavior in both female and male mice, the exact nature of the impact varies among the different groups. Lastly, we observed no GCR Sim-induced changes in associative fear memory in either male or female mice (Supp. Figure 2). In a separate cohort of male mice, we examined social hierarchy or aggression using the tube dominance test, where a novel mouse and an experimental mouse are placed facing each other at opposite ends of a narrow tube and meet in the middle. This test was performed ~4.5 months after irradiations were completed. The mouse that forces its opponent out of its way is designated the 'winner' and more socially dominant [32]. The mouse that retreats from the tube is designated the 'loser' and shows traits of being more subordinate. In the tube dominance test, male mice exposed to chronic GCR Sim achieved substantially more wins than control animals, indicating radiation-induced elevations in dominant or aggressive behavior (Fig. 3D; t-test: t(14) = 2.41; P = 0.031).

Fig. 3 (caption): A Novel object recognition (NOR) testing indicated that chronically and acutely exposed males show significantly reduced discrimination index scores relative to controls, indicating no preference for the novel object. Acutely irradiated GCR females showed a trend toward reduced discrimination index scores compared to controls. B Using time spent in the light compartment of the light-dark box test (LDB) as a measure of anxiety-like behavior, none of the GCR-exposed animals were affected (left panel). Evaluation of the number of transitions between the light and dark compartments on the LDB could suggest that chronic GCR males showed an elevated number of transitions, possibly indicating increased frantic, anxiety-like behavior, although analysis of time spent in each compartment does not support this conclusion (right panel). C During the social interaction test (SIT), only acutely irradiated males showed reductions in social interactions with a novel mouse (left panel), while chronically irradiated females exhibited avoidance behavior (right panel). D Male mice exposed to chronic GCR Sim showed an increase in trials won compared with unirradiated control mice when tested on the tube dominance task, suggesting an increase in aggressive behavior. For NOR, LDB and SIT, data are the mean ± SEM (N = 13-16 mice/group); P values derived from one-way ANOVA followed by a Bonferroni's multiple comparison test. For the tube dominance test, data are the mean ± SEM (N = 8 per group); P values derived from unpaired t-test. *P < 0.05, **P < 0.01
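The OUL and NOR comparisons above all rest on the discrimination index. As a minimal sketch, the function below implements the conventional definition, (novel − familiar)/(novel + familiar) × 100 — an assumption, since the study's exact formula is given in its methods/SI — with hypothetical exploration times.

```python
def discrimination_index(t_novel, t_familiar):
    """Conventional DI: (novel - familiar) / (novel + familiar) * 100.
    Positive values indicate a preference for the novel object/location;
    values near 0 indicate no preference (impaired discrimination)."""
    return 100.0 * (t_novel - t_familiar) / (t_novel + t_familiar)

# Hypothetical exploration times (s): an intact mouse vs an impaired one.
print(f"intact:   DI = {discrimination_index(18.0, 9.0):.1f}")   # ~33.3
print(f"impaired: DI = {discrimination_index(11.0, 10.5):.1f}")  # ~2.3
```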
Further studies in the same cohort of male mice showed no additional deficits in spatial memory or general locomotion (Fig. S1).

Hippocampal synaptic plasticity is diminished following GCR Sim exposure

Synaptic plasticity mechanisms effectuate activity-dependent dynamic rebalancing within the intricately interconnected hippocampal network of excitatory neurons and diverse GABAergic interneurons. Repeated activation of synaptic inputs from CA3 to CA1 pyramidal neurons through high-frequency Schaffer collateral stimulation has long been known to produce prolonged enhancement of synaptic activity, known as long-term potentiation (LTP), thought to represent a cellular basis of memory [33]. Given the observed deficits in cognitive processes involving hippocampal and cortical networks in mice exposed to either acute or chronic GCR Sim, we followed up our behavioral testing by determining whether hippocampal LTP was perturbed in GCR Sim-exposed mice at 5 months after irradiations. The delivery of theta burst stimulation (TBS) to the Schaffer collateral produced a robust and immediate increase in LTP, measured as the relative change in the slope of evoked field excitatory postsynaptic potentials (fEPSPs) generated by CA1 apical dendrites (Fig. 4A). The fEPSP slope then gradually decayed to a stable level of potentiation in brain slices from all treatment groups of female and male mice. LTP levels in these hippocampal slices were consistent with our prior reports [4,8]. The level of potentiation in fEPSP slope maintained 50-60 min post-TBS was significantly reduced in the hippocampus of the chronically, but not acutely, GCR Sim-exposed female mice (Fig. 4B, left; one-way ANOVA: F(2,33) = 9.26; P = 0.00060; Bonferroni post hoc: P = 0.00030 and P = 0.062, respectively) as compared to controls. Male mice exhibited reduced mean potentiation in both the chronically and acutely GCR Sim-exposed groups (Fig. 4B, right; one-way ANOVA: F(2,33) = 18.85; P < 0.0001; Bonferroni post hoc: P < 0.0001 and P = 0.0039, respectively) relative to controls. These findings support the hypothesis that exposure to simulated GCR adversely impacts the network mechanisms of synaptic plasticity that underlie critical learning and memory processes. Measures of baseline synaptic transmission were unaltered in either chronic or acute GCR Sim-exposed mice relative to control animals. Specifically, the slopes of the input/output curves for evoking fEPSPs in chronic and acute GCR Sim hippocampi were no different from those in control mice (Fig. 4C; female: one-way ANOVA: F(2,27) = 0.051; P = 0.95; male: one-way ANOVA: F(2,27) = 0.010; P = 0.99). Accordingly, there were no significant changes between treatment groups in the presynaptic plasticity of transmitter release, as measured in a paired pulse facilitation assay (Fig. 4D; female: two-way ANOVA: F(2,33) = 1.014; P = 0.37; male: one-way ANOVA: F(2,33) = 0.50; P = 0.61). Together, these results show that GCR Sim exposure impairs LTP in the hippocampus.

GCR Sim exposure elicits limited alterations in CA1 synaptic signaling

As GCR Sim irradiation disrupts both hippocampus-related behaviors and hippocampal network plasticity mechanisms, we next investigated the manner in which accurately modeled space radiation exposures may have perturbed the electrophysiological properties of individual neurons.
Previously, we and others have observed that acute single-ion [7,34,35] or multiple-ion charged particle irradiation [3], as well as chronic neutron irradiation [8], is capable of disrupting hippocampal neuron signaling. Therefore, we sought to determine whether irradiation with acute or chronic GCR Sim paradigms that more accurately recapitulate the space radiation environment likewise altered the electrophysiological properties of hippocampal neurons. We initially assessed whether either chronic or acute GCR Sim exposures produced changes in the intrinsic electrophysiological properties of hippocampal pyramidal neurons within the CA1 superficial layer at 2-4 months post-irradiation of the male mice (Fig. 5). Neither GCR Sim irradiation paradigm altered the resting membrane potential (RMP) of pyramidal neurons (F(2,8.17) = 0.050, P = 0.95, linear mixed-effect modeling (LMM); Fig. 5A). Assessing neuronal responses to a range of brief current injections allowed for measurement of other cell-intrinsic properties (Fig. 5B). We observed no radiation-induced changes in either the input resistance (Fig. 5C; F(2,31) = 0.37, P = 0.70, LMM) or the hyperpolarization sag amplitude responses to −100 pA current injections (Fig. 5D; F(2,8.87) = 0.0020, P = 0.10, LMM) of CA1 pyramidal neurons. Measuring neuronal excitability based on how readily action potential (AP) firing could be evoked, GCR Sim exposure altered neither the rheobase current required to first evoke an AP (Fig. 5E; F(2,31) = 0.050, P = 0.96, LMM) nor the AP firing frequency across a range of current injections (Fig. 5F; F(2,859) = 2.19, P = 0.11, two-way ANOVA). There was also no clear impact of GCR Sim exposure on the characteristics of individual APs, such as the threshold potential for AP initiation (Fig. 5G; F(2,7.24) = 0.66, P = 0.55, LMM).

Fig. 4 Hippocampal long-term synaptic plasticity is perturbed by GCR Sim exposure. Extracellular field recordings of CA1 dorsal hippocampus apical dendrite responses to Schaffer collateral stimulation at 5 months following completion of chronic and acute GCR Sim exposures. A Following a stable 20 min baseline recording, a single train of theta burst stimulation (TBS; arrow) was applied, and recordings were then continued for an additional 60 min. The time course shows that TBS-induced long-term potentiation (LTP) was markedly reduced in slices from chronically irradiated female mice and from both groups of GCR Sim-exposed male mice compared with slices from the respective control animals. Representative traces were collected during baseline (inset; black line) and 60 min post-TBS (red line). Scale bars indicate 0.4 mV/5 ms. B Chronically GCR Sim-exposed female mice showed a marked reduction in LTP at 60 min post-TBS relative to control mice (left). Field excitatory postsynaptic potential (fEPSP) slope was significantly reduced 60 min post-TBS in slices from both chronically and acutely GCR-exposed male mice (right). C The relationships between stimulation current and fEPSP slope were not detectably different between groups. D Transmitter release kinetics, as assessed with paired pulse facilitation (PPF), were also comparable among all animals. Data are mean ± SEM (total N = 6 mice per group; 1 slice/hemisphere per mouse); P values for mean potentiation and fEPSP slope derived from one-way ANOVA followed by a Bonferroni's multiple comparison test. P values for PPF derived from two-way ANOVA. **P < 0.01, ***P < 0.001, ****P < 0.0001
Additional unaltered intrinsic electrophysiological properties and statistical parameters for the above measurements are included in Supplemental Table 1. Overall, we did not observe any changes to the intrinsic properties of CA1 pyramidal neurons following either chronic or acute GCR Sim exposures of male mice. While neither acute nor chronic GCR Sim exposures disrupted the intrinsic properties of CA1 pyramidal neurons, we have previously observed changes in hippocampal synaptic connectivity following single-ion GCR, multiple-ion GCR and chronic neutron irradiation paradigms [3,7,8,17,35]. Furthermore, low doses of single-ion GCR are known to disrupt dendritic spine morphology within the hippocampus [5,6,36]. Thus, we next performed electrophysiological recordings of the spontaneous excitatory and inhibitory postsynaptic activity received by CA1 pyramidal neurons, to examine any changes in response to either chronic or acute GCR Sim exposures in male mice (Fig. 6). Recording the spontaneous excitatory postsynaptic currents (sEPSCs) received by CA1 pyramidal neurons, we found that irradiation altered sEPSC frequency (Fig. 6A, B), whereas sEPSC amplitudes were unaltered (Fig. 6C, D; F(2,9.36) = 0.10, P = 0.91). Likewise, spontaneous inhibitory postsynaptic current (sIPSC) frequencies (Fig. 6E, F; F(2,6.35) = 1.50, P = 0.29) and amplitudes (Fig. 6G, H; F(2,11.17) = 0.49, P = 0.62) remained similar to control levels after acute or chronic GCR Sim irradiation. Other unaltered measurements of synaptic signaling properties and statistical parameters for these endpoints are included in Supplemental Table 2. Overall, we could identify no changes in the electrophysiological properties of CA1 pyramidal neurons in response to chronic GCR Sim exposures in male mice, whereas acute irradiation appears more capable of disrupting hippocampal excitatory inputs.

Fig. 5 (caption fragment): There was no alteration in the input resistance (C), sag during a −100 pA hyperpolarizing current injection (D) or rheobase current required to evoke an action potential (E) between groups. F Action potential (AP) frequency remained equivalent across a range of current injections and G the threshold potential for action potential initiation remained unchanged. Data are Control: 5 animals, 13 cells; Chronic: 5 animals, 12 cells; Acute: 4 animals, 9 cells. A, C-E, G Cumming estimation plots show raw data on the top axis and a bootstrapped sampling distribution on the bottom axis; black dots depict the mean difference between groups and the 95% confidence interval is indicated by the ends of the vertical black bars. F Data are mean ± SEM. P values derived from linear mixed-effect model regression or two-way ANOVA.

GCR Sim exposure resulted in alterations in large synapse complexity

Given the effects of GCR exposure on behavior, hippocampal plasticity and electrophysiology, we examined synapse density and morphology in the stratum radiatum of the CA1 region of the hippocampus, as well as in layer II/III of the medial prefrontal cortex (mPFC), of male mice at the ultrastructural level using quantitative EM (Supp. Figure 3). To determine whether changes in the different types of synapses may contribute to the behavioral and electrophysiological changes observed, we differentiated all synapses into either perforated or non-perforated synapses and assessed postsynaptic density (PSD) length and head diameter (HD) (Fig. 7). Perforated and non-perforated synapses constitute separate synaptic populations from early in development [37].
Non-perforated or simple synapses have a single, continuous PSD. Perforated synapses are morphologically characterized by a discontinuity in the PSD (Fig. 7D, black arrows). They are found on larger spines and are stable synapses implicated in memory-related plasticity [38]. They are also proposed to be a structural correlate of enhanced synaptic efficacy, as spines with perforated synapses have increased synaptic strength due to membrane expansion, the insertion of new receptors into the two PSDs and into the synaptic membrane, and the creation of two independent release sites, each with its own release probability [39-41]. We did not observe any significant group differences in total, perforated or non-perforated synapse density (Fig. 7A-C). Analysis of total synapses and perforated synapses revealed no significant change in HD between irradiated mice and controls (Fig. 7E, F). Our previous studies have classified mouse spines with head diameters < 0.4 μm as smaller, thin spines and those > 0.4 μm as larger, mushroom spines [16,17]. When we applied this parameter to non-perforated synapses, we found no differences in HD between GCR-exposed mice and controls (Fig. 7G, H). Analysis of PSD length across all synapses showed a significant decrease in chronic GCR-exposed mice (Fig. 7I; F(2,12) = 8.41, P = 0.005). Further analyses of perforated synapse PSD length also found significant differences (Fig. 7J; F(2,12) = 11.75, P = 0.001), and pairwise comparisons showed significant differences between control vs. chronic conditions (t(12) = 3.51, P = 0.013). Once the data for non-perforated synapses were separated based on HD < 0.4 μm and > 0.4 μm, we found significant results in < 0.4 μm HD synapses (Fig. 7K; F(2,12) = 3.92, P = 0.049). Interestingly, examination of PSD lengths on non-perforated synapses with > 0.4 μm HD showed significant effects of chronic GCR exposure (Fig. 7L; F(2,12) = 10.27, P = 0.002), with the pairwise comparison showing significant differences in control vs. chronic (t(12) = 2.86, P = 0.043). These data indicate that larger, mushroom-like spines have smaller PSD lengths as a result of chronic GCR Sim exposure.

Fig. 7 (caption fragment): There were no significant changes in total synapse density, perforated synapse density or non-perforated synapse density compared to controls. A Total synapse density, B perforated synapse density and C non-perforated synapse density. D Representative electron micrograph depicting non-perforated synapses (white asterisks), perforated synapses (arrow heads) and measurements of PSD length (white line) and head diameter (red line). Scale bar = 500 nm. E-H HD measurements for all synapses, perforated synapses, non-perforated synapses < 0.4 μm and non-perforated synapses > 0.4 μm, respectively. I PSD length in all synapses showed smaller PSDs in chronic GCR mice. J Perforated synapse PSD lengths were significantly reduced following chronic GCR exposure. K PSD lengths in non-perforated synapses < 0.4 μm in HD. L Non-perforated synapses > 0.4 μm in HD were significantly reduced in chronically exposed GCR mice. N = 5 mice per group. Data are mean ± SEM, one-way ANOVA, *P < 0.05, **P < 0.01.

GCR Sim exposure results in alterations in myelination in the corpus callosum

We next asked whether acute and chronic irradiation altered myelination (Fig. 8A). We quantified the percentage of myelinated axons as well as the g-ratio of axons (the ratio of the inner to the outer diameter of the myelinated axon), providing a morphometric analysis of the axons.
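For readers unfamiliar with the morphometry, a short sketch of the g-ratio computation and caliber binning just described follows; the diameters are invented placeholders, not measured values.

```python
# Hedged sketch of the g-ratio morphometry: the g-ratio is the inner (axon)
# diameter divided by the outer (axon + myelin) diameter, and axons are
# binned by caliber before comparison. Higher g-ratio -> thinner myelin.
import numpy as np

inner_um = np.array([0.25, 0.28, 0.40, 0.55, 0.62])   # axon diameters (µm)
outer_um = np.array([0.34, 0.37, 0.52, 0.68, 0.78])   # fiber diameters (µm)

g_ratio = inner_um / outer_um

# Caliber bins used in the text: small < 0.3 µm, large > 0.5 µm
small = g_ratio[inner_um < 0.3]
large = g_ratio[inner_um > 0.5]
print(f"small-caliber mean g-ratio: {small.mean():.3f}")
print(f"large-caliber mean g-ratio: {large.mean():.3f}")
```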
Acute GCR Sim irradiation resulted in a significant increase in the percentage of myelinated axons compared to controls (Fig. 8B; F(2,12) = 7.67, P = 0.007, one-way ANOVA). There was no difference between chronic exposure and controls. Comparison of total g-ratios also did not show any differences between the groups of mice (Fig. 8C). When we separated axons by size, we found that the smaller caliber axons, < 0.3 μm in diameter, and the larger caliber axons, > 0.5 μm in diameter, showed an increase in g-ratio indicative of thinner myelin sheaths in acutely irradiated mice (Fig. 8D; F(4,~617.14) = 2.80, P = 0.026; < 0.3 μm axons, control vs acute: t(~93.5) = −2.23, P = 0.029; large bin: t(~103.9) = −2.64, P = 0.009, LMM). There was no difference between chronically irradiated mice and controls.

Fig. 8 (caption fragment): ... GCR-exposed mice. Scale bar = 2 μm. B There was an increase in the percent of myelinated axons following acute irradiation, with no differences between chronic irradiation and controls. C There is no significant difference in overall g-ratios in chronic or acute irradiated mice compared to controls. D Acute irradiation results in less myelin in the smallest (< 0.3 μm) and largest (> 0.5 μm) axons. N = 5 mice per group. P values derived from linear mixed-effect model regression. *P < 0.05.

Discussion

As NASA and others move forward with plans for missions to the Moon and then Mars, it is of critical importance that the potential health risks associated with deep space radiation exposures are well understood. A large body of literature has clearly established that single-ion charged particle exposures elicit significant CNS impairments [1,42]. Recent studies of simplified GCR Sim exposures using 5-6 beams, including protons, 28Si, 4He, 16O and 56Fe, have moved in the direction of more realistically modeling the space radiation environment and the subsequent detrimental CNS responses [3,4,9]. The general consensus of these CNS studies, using the best representative simulations of space radiation available, is that neurocognitive complications start at approximately one month post-exposure and do not resolve. Further, these studies suggest that the functional changes in cognition do not track with the microdosimetric properties of the absorbed doses, at least for doses ≤ 50 cGy. While these studies are informative, they still fall short of accurately representing the complex mixture of ions and energies of radiation to which an astronaut would be exposed during long-duration spaceflight, such as a Mars mission. The complex GCR Sim paradigm developed by the NASA Space Radiation Laboratory includes 33 distinct charged particle beams, predominated by protons and 4He particles of various energies interspersed with infrequent beams of representative heavy ions [10]. Past single-ion studies have provided important insights into the biological risks of certain ion exposures; however, the 33-beam acute GCR Sim exposure captures more representative CNS hazards that arise when exposed to space-relevant radiation fields. While an acute 33-beam GCR Sim paradigm has been utilized by Kiffer and colleagues to evaluate behavioral changes in male mice [11], our study includes both sexes and a detailed quantification of synaptic plasticity and structure. Furthermore, our chronic GCR Sim exposure paradigm represents the most realistic ground-based model available to date.
As such, our study advances our understanding of how the CNS responds to a complex mixture of energetic charged particles by investigating effects across a range of cellular, network and behavioral functions. To determine how the GCR Sim exposures affect the CNS, and to link animal behavior to the neuronal networks previously shown to be sensitive to other space radiation models, we employed cognitive testing paradigms that primarily interrogate hippocampus-medial prefrontal cortex network functions. To determine how GCR Sim exposures impact long-term hippocampus-dependent memories and memory updating in female and male mice, we used the OUL testing paradigm in addition to novel object recognition and fear extinction testing. The OUL task is designed to be more sensitive than a novel place recognition assay [25]. Interrogating hippocampal function with the OUL task can simultaneously assess multiple memory traces, thereby providing stronger cross-species correlates with humans than more simplified rodent behavior testing paradigms [42]. OUL starts with novel place recognition but includes an additional object relocation phase that elevates task rigor to determine how overlapping associative memories can be segregated. Therefore, a key strength is that the OUL task discriminates between updates of, or interference with, existing memories, rather than the de novo formation of new associations. As such, the OUL assay incorporates two overlapping events that require dentate gyrus-dependent pattern separation [43] and tests whether new learning will occur despite this prior interference. Our previous studies of male mice exposed to simplified GCR Sim demonstrated normal memory acquisition but significant impairments in memory updating [4]. While the chronically irradiated male mice performed poorly during their update session, they and the acutely exposed male mice were successful during the test day, accessing original memories and performing at control levels. Conversely, the female mice exposed to chronic or acute GCR Sim had intact original memories but failed to update the learned information regarding the most novel object location presented on the final testing day, suggesting that GCR exposure induces impairments in hippocampal memory reconsolidation and cognitive reserves in female mice that do not manifest to the same extent in the irradiated male cohorts. While the neural mechanisms that facilitate reconsolidation-based updating to modify memories have been investigated extensively [25,44], the circuits controlling these behaviors, and how complex irradiation paradigms impact these circuits, are far less understood. This confounds interpretation of the present results obtained in other behavioral paradigms, as changes in the NOR, LDB and SIT testing paradigms manifested differently between acute and chronic exposures and between the sexes, pointing to the nuanced effects of space radiation exposure on CNS functionality. The differential susceptibility of cortical and hippocampal circuitry to radiation-induced change, and their sensitivity to inflammatory and hormonal modulation, may explain this equivocality between the sexes regarding their performance on particular testing paradigms. Past work has shown a resistance of female mice to space radiation-induced cognitive decline [45,46], while other work has not [47].
The current study did not evaluate the levels of sex steroids in either male or female mice before or after the completion of radiation exposures, but sex differences and the role of sex hormones are an area clearly requiring additional study. Interestingly, male mice were found to exhibit increased aggressive-like behavior, which did not translate to changes in associative fear memory, the latter of which was used to evaluate changes in cognitive flexibility using a fear extinction task. Typically, fear memories are robust and persistent [48], involving rapid, strong associations, but due to their aversive nature they can confound simultaneous access to original and updated memories [49]. Thus, it is noteworthy that female and male cohorts exposed to either of the two GCR Sim paradigms exhibited no impairments on the fear extinction memory test. Our past studies have shown significant impairments in this task, which involves intact hippocampal function [8] in concert with the medial prefrontal cortex and amygdala [48]. While the reasons for this remain uncertain, the impacts of GCR exposure on the prelimbic and infralimbic circuitry of the mPFC, which regulate fear expression and suppression, respectively, may be offset, exhibiting relatively equal sensitivity to radiation-induced change. Memory impairments such as we observe following both chronic and acute GCR Sim exposures are often associated with underlying disruptions of synaptic plasticity processes, including LTP [33,50]. Indeed, impaired LTP is observed in the hippocampus following several irradiation paradigms, including single-ion [51,52], 5-ion [4] and chronic neutron [8] exposures, as well as in our current GCR Sim results. While we observe behavioral deficits and impaired LTP persisting months after GCR Sim irradiation, compensatory mechanisms may eventually allow the brain to mollify radiation-induced damage. Indeed, following an acute 100 cGy 56Fe irradiation, mice initially show spatial memory and LTP deficits, yet by 6 months after exposure both traits show enhancements that remain long term [52]. Homeostatic plasticity mechanisms enable neuronal networks to cope with insults by regulating other cellular properties, such as ion channel expression, into alternative states that help stabilize overall activity. However, while plasticity mechanisms allow a network to tolerate disruptions, they may leave the network in a destabilized state that is more vulnerable to cognitive impairment and epilepsy following subsequent stresses [53,54]. In past experiments modeling irradiation with single particle types, including protons [34], 4He [7] or chronic neutrons [8], we have generally observed changes in neuronal intrinsic properties that indicate a reduction in network excitability. However, as with both acute and chronic GCR Sim exposures, we did not detect any persistent changes in hippocampal intrinsic excitability following irradiation with a mixed beam of 5 ions [3]. The exact reasons why mixed radiation fields appear to produce fewer apparent changes in the functional properties of individual neurons are unclear, but they may involve counteracting and/or compensatory mechanisms responding to different ion energies and linear energy transfers. Elucidating these types of microdosimetric and/or compensatory responses was not the goal of the present study, where we focused on critical functional and neurobiological outcomes of GCR Sim exposures.
While GCR Sim exposures did not change the neuronal intrinsic characteristics we assessed, we do observe that acute GCR Sim exposures disrupt synaptic signaling properties. Similar reductions in the net proportion of excitatory synaptic signaling received by neurons following acute particle irradiation include suppressed EPSC frequency following 4He exposures [55] and increased amplitudes of IPSCs after proton [7] and 5-ion irradiation [3]. Such changes are in line with how acute charged particle exposures are known to damage structural elements, such as dendrites and dendritic spines, that are necessary for proper synaptic signaling [5,7,36]. What is less clear is why chronic GCR Sim exposures, which include charged particles such as protons, appear less likely to alter synaptic signaling properties. Although chronic neutron irradiation suppressed EPSC frequencies in CA1 neurons [8], no similar measurements have been performed following chronic charged particle exposures. Additionally, analysis of PSDs indicated decreased length, particularly for mushroom spines, which might contribute to reduced synaptic efficiency. Similarly, while not conclusive or robust, the changes in myelination thickness relative to axon size could indicate compromised axonal integrity [16]. Future investigations into the impacts of chronic particle irradiation on dendrites, dendritic spines and synaptic structures may help resolve these uncertainties. The lack of apparent radiation-induced alterations in the functional neuronal properties of CA1 pyramidal neurons, outside of reduced sEPSC frequency following acute GCR Sim exposures, does not rule out that other neuronal populations are more substantially disrupted. We have previously observed that the functional properties of other cell populations, such as hippocampal cannabinoid type 1 receptor-expressing basket cells [56] and perirhinal cortex regular spiking principal cells [7], are altered by charged particle irradiation. There is also evidence that several other brain regions, including the hypothalamus, striatum and nucleus accumbens, are sensitive to GCR exposures [56]. Even within the hippocampus, radiation exposures are known to alter adult neurogenesis of hippocampal neurons [52,57]. However, these findings are mixed. Sahay and colleagues found that the integration of newborn neurons was important for cognitive processes [58], whereas Whoolery et al. found that single-ion space radiation exposures improved pattern separation in a dentate gyrus-dependent touch screen task in male mice despite impaired neurogenesis [59]. Given the age of the animals and the doses used in our study, the overall impact of neurogenesis is questionable within this context, and it is not readily amenable to whole-cell electrophysiology assays. Overall, our study is the first to examine how high-fidelity simulations of space-relevant radiation exposures reveal risks that astronauts might encounter at multiple levels of central nervous system function. Until well-defined cohorts of animals are exposed to the deep space environment and the spaceflight stressors expected on a Mars mission, the ground-based studies by us and others remain the gold standard for assessing the impact of space-relevant radiation exposure on the CNS. Clearly, future studies will be required to further elucidate sex differences in the space radiation response of the CNS, as will studies of the combined effects of irradiation with other mission-relevant stressors such as sleep disruption.
While both acute and chronic GCR Sim exposures disrupt critical cognitive processes and the underlying neuronal network plasticity, alterations to the functional properties of individual neurons appear to be less likely at more mission-relevant doses and dose rates. Because the mechanisms underlying the persistent effects of space radiation exposure on brain function remain elusive, the hazards to astronaut CNS capabilities are unclear; given the current data, though, those risks may in fact be less than predicted by earlier studies evaluating less refined irradiation paradigms. However, the clear evidence of cognitive deficits arising in both male and female animals, regardless of the time course of GCR Sim exposure, indicates that radiation risks need to be carefully considered when planning future human exploration of the Moon and Mars.
12,003.6
2023-01-01T00:00:00.000
[ "Environmental Science", "Medicine", "Physics" ]
High-frequency dynamics of active region moss as observed by IRIS

The high temporal, spatial and spectral resolution of the Interface Region Imaging Spectrograph (IRIS) has provided new insights into the understanding of different small-scale processes occurring at chromospheric and transition region (TR) heights. We study the dynamics of high-frequency oscillations of active region (AR 2376) moss as recorded by simultaneous imaging and spectral data of IRIS. Wavelet transformation, power maps generated from slit-jaw images in the Si IV 1400Å passband, and sit-and-stare spectroscopic observations of the Si IV 1403Å spectral line reveal the presence of high-frequency oscillations with ∼1-2 minute periods in the bright moss regions. The presence of such low periodicities is further confirmed by intrinsic mode functions (IMFs) obtained with the empirical mode decomposition (EMD) technique. We find evidence of the presence of slow waves and reconnection-like events, and together they cause the high-frequency oscillations in the bright moss regions.

INTRODUCTION

Understanding the processes responsible for the heating of the upper atmosphere is a central problem in solar physics. Though highly debated (see the reviews of Klimchuk, 2006; Reale, 2010; Parnell and De Moortel, 2012), two widely accepted mechanisms for converting magnetic energy into thermal energy are impulsive heating by nano-flares (Parker, 1988) and heating by the dissipation of waves (Arregui, 2015). The heating processes are generally proposed to occur on small spatial and temporal scales, which were difficult to observe with the typical resolution of previous instruments. In the recent past, with the advent of instruments with better temporal resolution, several lines of evidence for high-frequency oscillations with sub-minute periodicities have been reported from the chromosphere (Gupta and Tripathi, 2015; Shetye et al., 2016; Jafarzadeh et al., 2017; Ishikawa et al., 2017) up to the corona (Testa et al., 2013; McLaughlin, 2013, 2014; Pant et al., 2015; Samanta et al., 2016) at sub-arcsec spatial scales. The small-scale quasi-periodic flows resulting from oscillatory magnetic reconnection, as well as the presence of various magnetohydrodynamic (MHD) waves, produce such observed perturbations in imaging and spectroscopic observables. These periodic/quasi-periodic perturbations observed at such fine scales in space and time can thus be regarded as manifestations of recurring dynamic heating processes present at similar spatial (sub-arcsec) and temporal (sub-minute) scales. Various MHD waves could be present simultaneously along with quasi-periodic flows, or their presence could be entirely non-concurrent. The plausible mechanism(s) for their origin might also be directly coupled in some cases or completely independent in others. For instance, Gupta and Tripathi (2015) detected short-period variability (30-90 s) within explosive events observed in the TR by IRIS and related them to repetitive magnetic reconnection events. On the other hand, Jafarzadeh et al. (2017) observed high-frequency oscillations with periods of 30-50 s in Ca II H bright points in the chromosphere using the SUNRISE Filter Imager (SuFI; Gandorfer et al., 2011). They found evidence of both compressible (sausage mode) and incompressible (kink mode) waves in the magnetic bright points. Shetye et al.
(2016) reported transverse oscillations and intensity variations (∼20-60 s) in chromospheric spicular structures using the CRisp Imaging SpectroPolarimeter (CRISP; Scharmer et al., 2008) on the Swedish 1-m Solar Telescope. They argued that high-frequency helical kink motions are responsible for the transverse oscillations and that compressive sausage modes result in the intensity variations. They further found evidence of mode coupling between compressive sausage and non-compressive kink modes and speculated that other spicules and flows possibly act as the external drivers for the mode coupling. Using the total solar eclipse observations of 11 July 2010 (Singh et al., 2011), Samanta et al. (2016) detected significant oscillations with periods of ∼6-20 s in coronal structures. They attributed these high-frequency oscillations to a mixture of different MHD waves and quasi-periodic flows. Using High-resolution Coronal Imager (Hi-C; Kobayashi et al., 2014) data, Testa et al. (2013) observed variability on time-scales of 15-30 s in the moss regions as observed in the upper TR, which they found to be mostly located at the foot-points of coronal loops. They regarded such oscillations as the signatures of heating events associated with reconnection occurring in the overlying hot coronal loops, i.e., impulsive nano-flares. More recently, from Chromospheric Lyα SpectroPolarimeter (CLASP; Kano et al., 2012) observations, Ishikawa et al. (2017) also reported short temporal variations in the solar chromosphere and TR emission of an active region, with periodicities of ∼10-30 s. They attributed these intensity variations to waves or jets from the lower layers instead of nano-flares. McLaughlin (2013, 2014) analysed the same active region moss observations from Hi-C as Testa et al. (2013) and observed the presence of transverse oscillations with periodicities of 50-70 s. Pant et al. (2015) also studied the same region in the Hi-C observations and detected quasi-periodic flows as well as transverse oscillations with short periodicities (30-60 s) in braided structures of the moss. They indicated a coupling between the sources of the transverse oscillations and quasi-periodic flows, i.e., magnetic reconnection, such that they could possibly be driving each other. In the present work, we concentrate on the high-frequency (∼1-2 minute) dynamics of active region (AR 2376) moss as observed by IRIS. IRIS has provided an unprecedented view of the solar chromosphere and transition region with high temporal, spatial and spectral resolution. The joint imaging and spectroscopic observations of IRIS at high cadence provide us with a unique opportunity to perform a detailed analysis of the different characteristics and mechanisms involved in the generation of high-frequency oscillations in TR moss regions.

DETAILS OF THE OBSERVATION

IRIS observations of active region (AR 2376) moss, observed on 2015-07-05 from 05:16:15 UT to 07:16:23 UT, are considered for the present analysis. Figure 1 shows the observation region on the solar disk, as outlined in the image taken in the 171Å passband of AIA (Atmospheric Imaging Assembly; Lemen et al., 2012) and in a slit-jaw image (SJI) in 1400Å at a particular instance observed by IRIS. The bottom panel shows a typical light-curve of SJ 1400Å intensity at a particular location A (marked in the full FOV above) in the moss region.
The nature of the intensity variation clearly reveals the presence of small-amplitude quasi-periodic variations along with comparatively larger-amplitude variations. Centred at 146″, 207″, the imaging data (slit-jaw images or SJIs) have a field of view (FOV) of 119″ × 119″. The SJIs are taken with a cadence of 13 seconds and have a spatial resolution of ≈0.33″. The simultaneous large sit-and-stare spectroscopic data have a cadence of 3.3 seconds, with a slit-width of 0.35″, a pixel size along the solar-Y axis of 0.1664″ and a slit length of 119″. Every observation in this data-set has an exposure time of 2 seconds. The high cadence of these data-sets provides us a unique opportunity to investigate the high-frequency dynamics in this region at a high significance level. We use IRIS SJIs centred at the Si IV 1400Å passband, which samples emission from the transition region (TR). For spectral analysis, we concentrate on the Si IV (1403Å) line, formed at log10 T/K ≈ 4.9, which is one of the prominent TR emission lines observed with IRIS and is free from other line blends. For density diagnostics, we use the O IV (1401Å) TR line along with Si IV (1403Å) (Keenan et al., 2002; Young et al., 2018). The calibrated level 2 data of IRIS are used in this study. Dark current subtraction, flat-field correction and geometrical correction have been taken into account in the level 2 data. We employ wavelet analysis (Torrence and Compo, 1998) and empirical mode decomposition (EMD; Huang et al., 1998) techniques in order to detect and characterize the high-frequency oscillations in the slit-jaw (SJ) intensity (section 3.1) and in the different spectral properties, i.e., total intensity, peak intensity, Doppler velocity and Doppler width (section 3.2).

Imaging Analysis from Si IV 1400Å SJIs

Wavelet analysis is performed at each pixel location of the SJ FOV to obtain the period of SJ intensity variability over the observed moss region. As shown in Figure 1, a typical light curve corresponding to a single pixel location for the entire duration reveals the presence of quasi-periodic small- and large-amplitude intensity fluctuations. Figure 2(a) shows a representative example (selected at random) of the wavelet analysis results corresponding to the pixel location marked as A in the SJ FOV (Figure 1) for a duration of 20 minutes. It should be noted that in most of the results we show the wavelet and EMD analysis corresponding to a 20-minute interval only, so that the temporal variations in intensity can be studied more carefully, particularly as we are interested in the shorter periodicities. The top panel in Figure 2(a) shows the variation of SJ intensity with time. The middle panel shows the background (trend) subtracted intensity, which is further used to obtain the wavelet power spectrum (lower panels). The background (trend) is obtained by taking the 10-point running average of the intensity variation. The bottom left panel displays the wavelet power spectrum (color inverted) with 99% significance levels, and the bottom right panel displays the global wavelet power spectrum (the wavelet power spectrum summed over time) with 99% global significance. The power spectra obtained reveal the presence of short-period variability in the SJ intensity light-curve, with a distinct power peak at a period of 1.6 min. It is important to note that even without subtracting the background trend, we obtain a power peak at the same period in the wavelet spectra, but with a lower significance level.
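To make the analysis pipeline concrete, the sketch below applies both techniques to one synthetic light curve standing in for the SJ 1400Å intensity at a single pixel. It assumes the third-party pycwt and PyEMD packages (pip install pycwt EMD-signal) and is only an approximation of the authors' procedure: the significance testing against a noise background, for example, is omitted, and all numbers are illustrative.

```python
# Minimal sketch: Torrence & Compo (1998) wavelet transform plus EMD,
# applied to a synthetic 20-minute light curve at 13 s cadence.
import numpy as np
import pycwt
from PyEMD import EMD

dt = 13.0 / 60.0                          # SJI cadence in minutes
t = np.arange(0.0, 20.0, dt)              # a 20-minute interval
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * t / 1.6) + 0.3 * rng.standard_normal(t.size)

# Background (trend) subtraction with a 10-point running average, as in the text
trend = np.convolve(signal, np.ones(10) / 10, mode="same")
detrended = signal - trend

# Wavelet power spectrum and its time-average (the global wavelet spectrum)
wave, scales, freqs, coi, _, _ = pycwt.cwt(detrended, dt, wavelet=pycwt.Morlet(6))
global_power = (np.abs(wave) ** 2).mean(axis=1)
print(f"wavelet: dominant period ~ {1 / freqs[global_power.argmax()]:.2f} min")

# EMD into IMFs; the dominant period of each IMF from its FFT peak
imfs = EMD()(signal)
fft_freqs = np.fft.rfftfreq(t.size, d=dt)
for i, imf in enumerate(imfs[:4]):        # first four IMFs, as in the text
    k = np.abs(np.fft.rfft(imf))[1:].argmax() + 1   # skip the zero frequency
    print(f"IMF{i}: dominant period ~ {1 / fft_freqs[k]:.2f} min")
```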
Empirical mode decomposition (EMD) is also employed at a few locations in the SJ FOV. Figure 2(b) shows the different intrinsic mode functions (IMFs) obtained from EMD for the same SJ light-curve as shown in Figure 2(a). Here only the first four IMFs are shown, as the further IMFs contain the larger background trends. The dominant period (P) mentioned in the figure for each IMF is calculated using the fast Fourier transform (FFT). The periods of the first four IMFs for the particular example shown in Figure 2(b) are 0.74 min, 1.25 min, 2.85 min and 3.99 min. The EMD analysis reinforces the detection of short periodicities (1-2 min) in the moss region, as obtained by wavelet analysis. The presence of periodicities < 1 min can also be noted from Figure 2, though these are below the 99% significance level shown in the wavelet power spectra. Such oscillations have very small amplitudes, are present only for shorter durations and could be damping fast. Hence, the oscillations with periods < 1 min may carry a smaller amount of energy and may not be as important as those with periods > 1 min, which may be distributed over larger spatial and temporal extents. To focus on the distribution of power as calculated from the wavelet method, we obtain power maps of the SJ intensity over the full FOV in the 1-2 min and 2-4 min period intervals (Figure 3), considering the entire duration of the observation. The entire duration of the observations is chosen to understand the global dynamics of the active region moss. On comparison of the power maps with the SJ images (Figure 3) and AIA images (Figure 1), it can be observed that significant power of the high-frequency (1-2 min) as well as low-frequency (2-4 min) oscillations is generally present only in the bright regions of the moss. Figure 3 also shows the time-averaged SJI with the power contours of the 1-2 min variability in red and the 2-4 min variability in yellow. The power contours enclose the locations where the significant power exceeds a value of 100 in the respective period range. The finer and smaller spatial extents of the contours at various locations over the field of view suggest that these oscillations possess high power in localized regions within the bright moss. Moreover, the comparison of power between the short (1-2 min) and long (2-4 min) periodicities, as showcased in Figure 3, reveals that the power in the 1-2 min variability is, in general, less than that in the 2-4 min range.

Spectral Analysis from the Si IV 1403Å emission line

To characterize the periodicities present in the spectrograph data, we produce power maps of the spectral parameters obtained by fitting a single Gaussian to the Si IV 1403Å emission spectra, using wavelet analysis. The analysis was performed over the entire duration of the observations. At a few instances, we interpolate the spectral parameters where a Gaussian fit could not be performed due to a poor signal-to-noise ratio. The power maps clearly showcase the significant power along the slit, predominantly present in the period range of 0.83 to 2.36 min and corresponding to the pixel locations of the bright moss regions (wherever the slit crosses them). We now shift our focus to shorter time intervals where data gaps due to poor signal-to-noise are absent. This allows us to investigate the correlation between different spectral parameters using wavelet and EMD analysis. Further, taking intervals of 20 minutes is sufficient because we are primarily interested in shorter periods of 1-2 minutes.
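A minimal sketch of the single-Gaussian fitting step that yields the four spectral parameters is given below, using scipy.optimize.curve_fit on a synthetic Si IV 1403Å profile. The rest wavelength of 1402.77Å is the value quoted later in the text; the profile itself and its intensity units are illustrative.

```python
# Single-Gaussian fit of a (synthetic) Si IV 1403 A line profile, from which
# peak intensity, Doppler velocity, Doppler width and total intensity follow.
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5        # speed of light (km/s)
REST_WL = 1402.77      # Si IV rest wavelength (Angstrom)

def gaussian(wl, peak, center, width, background):
    """Single-Gaussian line model with a constant background."""
    return peak * np.exp(-0.5 * ((wl - center) / width) ** 2) + background

wl = np.linspace(1402.2, 1403.4, 60)
rng = np.random.default_rng(2)
profile = gaussian(wl, 80.0, 1402.82, 0.08, 5.0) + rng.normal(0, 2, wl.size)

popt, pcov = curve_fit(gaussian, wl, profile, p0=[profile.max(), REST_WL, 0.1, 0.0])
peak, center, width, background = popt

print(f"peak intensity  : {peak:.1f} (arbitrary units)")
print(f"Doppler velocity: {(center - REST_WL) / REST_WL * C_KMS:+.1f} km/s")
print(f"Doppler width   : {width / REST_WL * C_KMS:.1f} km/s")
print(f"total intensity : {peak * width * np.sqrt(2 * np.pi):.1f}")  # integrated Gaussian
```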
Figure 5(a) shows the wavelet maps of the total intensity variation for a duration of 20 minutes at a particular location along the slit (marked as B in the SJ FOV in Figure 1). The total intensity signifies the intensity summed over the wavelength range. Figure 6(a) shows the wavelet maps of the Doppler velocity at the same location B and for the same time-interval as shown for the total intensity in Figure 5. Note that location B is very close to location A, so that a comparison can be made with the periodicities found at location A using the SJI. Moreover, the same time-interval is shown in Figures 2, 5 and 6 for better illustration. Figure 7 shows the variation of the peak intensity, Doppler width, total intensity and Doppler velocity of the Si IV 1403Å line at location B, along with the spectral line-profile at a particular instance. The observational uncertainties are shown in the left panel over the observed line-profile. These errors are taken into account while fitting the Gaussian profile (green solid curve). The fitting errors of the respective spectral parameters are shown in the adjacent light curves in orange. It can be clearly observed that the errors in the spectral parameters are much smaller than the amplitude of the oscillations. For instance, the average magnitude of the error over the Doppler velocity light curve shown in Figures 6 and 7 is 0.5 km/s, whereas the amplitude of oscillation of its IMFs (as shown in Figure 6) is more than 1 km/s in most cases. The oscillations in the spectral parameters are thus in general well above the error values and therefore significant. An animation of Figure 7 is available in the online version, showing the evolution of the spectral line profile with time. The background trends for the spectral parameter light curves (in Figures 5 and 6) are obtained by considering the 35-point running average of the light-curves. The dominant power peaks are observed at 1.5 min for the total intensity, 1.7 min for the peak intensity, 1.7 min for the Doppler velocity and 1.5 min for the Doppler width in the respective power spectra. Here again, the presence of periodicities of < 1 min can be seen in the wavelet spectra. It can be clearly observed that such oscillations are present only for very short durations and are thus of little significance over longer durations. Also, such short periodicities could be due to the presence of noise, which is picked up by the wavelet transform at higher frequencies. The EMD technique is applied to the spectral variations in order to segregate the different periodicities present in their light curves. Figures 5(b) and 6(b) respectively show the first four IMFs and their periods (P) for the total intensity and Doppler velocity variations over the 20-minute duration at location B. The first four IMFs (IMF0, IMF1, IMF2 and IMF3) are observed to contain the short-period variabilities (0.2-2 min). The successive IMFs are observed to have periodicities of more than 2 min and hence are not discussed in the present analysis. To perform a statistical study of the correlation and phase-relationship between the Doppler velocity and total intensity, we study 40 different light-curves (cases), each of 20 minutes duration. These cases are selected to be located in the close neighbourhood of the power contours of the 1-2 min periodicities (red contours in the average SJ image in Figure 3). The locations of the selected cases are marked in black along the slit in the SJ image in Figure 3.
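The phase measurement described next can be implemented in several ways; the sketch below shows one plausible approach, estimating the lag of maximum cross-correlation between two IMFs and expressing it as a fraction of the period T. The synthetic IMFs carry a known T/2 offset for illustration, and the authors' exact correlation technique may differ.

```python
# One way to estimate the phase shift between a Doppler-velocity IMF and the
# corresponding total-intensity IMF: find the cross-correlation lag and
# convert it to a fraction of the oscillation period T.
import numpy as np

dt = 3.3 / 60.0                      # spectral cadence in minutes
T = 1.26                             # IMF3 period from the text (minutes)
t = np.arange(0, 20, dt)
intensity = np.sin(2 * np.pi * t / T)
velocity = np.sin(2 * np.pi * t / T + np.pi)   # offset by T/2 for illustration

corr = np.correlate(velocity - velocity.mean(),
                    intensity - intensity.mean(), mode="full")
lag = (np.argmax(corr) - (t.size - 1)) * dt    # lag in minutes
phase = (lag / T + 0.5) % 1.0 - 0.5            # wrap into (-T/2, T/2]
print(f"phase shift ~ {phase:+.2f} T")
```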
A few specific time-intervals are considered at these locations in order to study further the phase-relationship between the Doppler velocity and total intensity. Figure 8 shows the histograms of the periods of oscillation for the different IMFs of the total intensity and Doppler velocity, with the mean periods listed in the figure. As reflected by the values of the mean periods, we will further regard IMF0 as being associated with a periodicity of ∼0.17 min, IMF1 with ∼0.40 min, IMF2 with ∼0.72 min and IMF3 with ∼1.26 min. The power maps in Figure 4 show the absence of significant power at periods below 0.6 min. Henceforth, for the further analysis of the phase-relationship, we consider only the third and fourth IMFs, i.e., IMF2 and IMF3. The phase-relation between the Doppler velocity and total intensity at the short periodicities is studied by correlating their respective IMFs for the 40 cases. Figure 9 shows the histograms of the phase difference between the Doppler velocity and total intensity for IMF2 and IMF3. The sign convention for the phase-shift values is such that positive values signify that the Doppler velocity leads the total intensity. The histogram for IMF2 reveals the presence of preferred phase-shifts at ∼±3T/8 (∼±3π/4), where T = 0.72 min is the time period of oscillation. The histogram for IMF3 shows dominant phase-shifts at ∼−T/2 (∼−π), where T = 1.26 min.

Figure 9. Histograms showing the distribution of the phase-difference between the total intensity and Doppler velocity for IMF2 (T = 0.72 min) and IMF3 (T = 1.26 min) for the 40 selected cases.

The presence of a dominant phase shift of ∼T/2 for periodicities of ∼1.26 min (IMF3) indicates the presence of reconnection events. As shown in Figure 7, an increase in the intensity is accompanied by an increase in the Doppler width and a decrease in the Doppler velocity (blue-shifted flows; De Pontieu et al., 2009; De Pontieu and McIntosh, 2010) at many instances throughout the light-curve. A few such instances are indicated by vertical dotted lines in Figure 7. In the case of phase-shifts of ∼±T/2 or ∼±π, the reconnection process results in near-simultaneous variations in the spectral parameters, with the resultant mass flow projected towards the line-of-sight (blue-shifts or negative Doppler velocity). As the TR emission lines are red-shifted in general, flows towards the line-of-sight (blue-shifted flows) will appear to decrease the Doppler speed simultaneously with the increase in the intensity and width (phase shifts of ∼±T/2 or ∼±π). On the other hand, flows away from the line-of-sight will increase the Doppler speed along with an increase in the line intensity and width (∼zero phase-shift). It can be observed from Figure 7, and is also indicated by Figure 9, that the red-shifted flows (cases with zero phase difference) occur less frequently than the blue-shifted flows (cases with phase shifts of ±T/2). As shown in Figure 7, the instances of large-amplitude fluctuations, which mostly have a phase shift of ∼T/2 between the Doppler velocity and total intensity, can be regarded as clear signatures of quasi-periodic outflows (towards the observer) resulting from the reconnection process. The other instances of small-amplitude fluctuations can be due to the presence of slow magneto-acoustic waves. Very recently, Hansteen et al. (2014) and Brooks et al. (2016) have reported the presence of transition region fine loops with the aid of IRIS observations and numerical simulations.
Such small-scale loops, with loop lengths of ∼1 to 2 Mm, can harbour slow standing waves with periods of ∼1 min in the transition region. It is worth noting at this point that Wang et al. (2003), Taroyan et al. (2007) and Taroyan and Bradshaw (2008) reported the presence of standing slow waves exclusively in hot coronal loops. In addition, Pant et al. (2017) reported the existence of standing slow waves in cool coronal loops (∼0.6 MK). In this work, we find evidence of the existence of slow waves in the Si IV 1403Å emission line, whose formation temperature is ∼60000 K. In an ideal case, a phase-shift of ∼±T/4 is attributed to the presence of standing slow waves in the solar atmosphere (Wang et al., 2003; Taroyan et al., 2007; Taroyan and Bradshaw, 2008; Moreels and Van Doorsselaere, 2013). Further, it should be noted that the phase between the intensity and velocity changes in time due to the heating and cooling of the plasma (Taroyan and Bradshaw, 2008), and due to the presence of imperfect waveguides and drivers in reality, which deviate from the theoretical considerations (Keys et al., 2018). Thus, the phase shift between the intensity and velocity oscillations might differ in different regions and at different times, as showcased in Figure 10. Figure 10 shows representative examples of the IMFs (IMF2 and IMF3) at location B along the slit. The phase-shift between the Doppler velocity and total intensity (φ), obtained using the correlation technique, is also mentioned in the respective panels. The comparison of the respective IMFs of the intensity and Doppler velocity fluctuations clearly shows that the phase-shift between them changes continuously throughout the entire duration. This could be due to the intermittent nature of the flows and waves, which might result in departures from the theoretically expected values of the phase-shifts. Hence, we conjecture that the statistically dominant phase shift of ∼±3T/8 for periodicities of ∼0.72 min (IMF2) is due to the presence of small-scale flows along with slow standing waves in TR fine loops. This supports both wave- and reconnection-like scenarios being responsible for the 1-2 min periodicities in the moss regions, as discussed in detail in section 4.

Figure 10. Representative examples of the IMFs at location B along the slit, showing the respective comparisons between the Doppler velocity and total intensity oscillations, and depicting that the phase-shift between them changes continuously.

Density diagnostics from the Si IV 1403Å and O IV 1401Å emission lines

In order to obtain information about the density variations associated with the presence of waves and/or reconnection flows in the moss regions, we attempt to estimate the density along the slit using the Si IV 1403Å (λ = 1402.77Å) and O IV 1401Å (λ = 1401.16Å) spectral lines from the IRIS spectra (as suggested by Young et al., 2018). They introduced an empirical correction factor to normalize the Si IV/O IV line intensity ratios. As first mentioned by Dupree (1972), the observed intensities of lines from the lithium- and sodium-like iso-electronic sequences are usually stronger than those expected from the emission measures of other sequences formed at the same temperature. Hence, it is important to apply such a correction factor to the silicon line intensities. Table 2 of Young et al. (2018) gives the theoretical ratios of different lines after employing the correction factor (see the QS DEM method as explained in Young et al., 2018; Young, 2018). We use the Si IV (1402.77)/O IV (1401.16) line ratio from Table 2 of Young et al.
(2018) for the estimation of the electron density at a temperature of log T/K = 4.88 (the temperature of maximum ionization of Si IV). As the O IV 1401Å line is very weak in the IRIS spectra, the spectra are averaged over 7 pixels along the slit. In such averaging, for instance, the data values of the first 7 spatial pixels are replaced by their average value, the next 7 pixels are replaced by their respective average value, and so on. Similarly, time-averaging is also performed by considering 4 time steps along the temporal axis. In order to improve the S/N, such averaging is performed only over the O IV 1401Å spectra, as the Si IV 1403Å spectra already contain significantly good signal. Figures 11(a) and (b) show the time-sequence maps of the peak intensity along the slit for the Si IV 1403Å and O IV 1401Å line-profiles. A comparison between the two maps clearly shows that, despite averaging the spectra (as explained above), we are able to obtain good S/N only for a very few isolated O IV 1401Å line-profiles in order to perform a reliable Gaussian fit; hence the peak intensity values for the O IV 1401Å line are shown only for those isolated few pixels. Figure 11(c) shows the Si IV (1403)/O IV (1401) ratio-density curve in solid black, with the estimated density values over-plotted in magenta. The density time-sequence map is also showcased in Figure 11(d). Note that we could estimate the density only at very few instances at some of the locations, as limited by the poor signal in the O IV 1401Å spectra. It can be observed in Figure 11(d) that we cannot find considerable examples of a continuous density signal in time for any significant duration over the entire observation. It is completely unreliable to perform any time-series analysis on such light-curves. It appears that there are definite changes in the density, but relating those changes to the intensity and other line parameters for the identification of the wave mode is beyond the quality of the current observations. Thus, we are still unable to obtain any results related to density oscillations with the present data.

CONCLUSIONS

In the present article, we study the high-frequency dynamics of active region moss using highly spatially and spectrally resolved observations from IRIS, with a fast cadence of 13 seconds for the imaging and 3.3 seconds for the spectral data. The techniques of wavelet and EMD analysis are employed in conjunction to explore the characteristics of the high-frequency oscillations. We have observed the persistent presence of periodicities in the 1-2 min range in the Si IV 1400Å SJ intensity as well as in the different spectral parameters (total intensity, peak intensity, Doppler velocity and Doppler width) derived from the Si IV 1403Å emission line. The power maps deduced from the SJ intensity variations show the concentration of power at short periodicities generally in the bright regions of the moss. This result is in agreement with the study of Pant et al. (2015), where the authors reported high-frequency quasi-periodic oscillations concentrated over localised regions in active region moss. However, no attempts were made there to understand the nature of the variability, due to the lack of spectral data. That study was performed using the 193Å passband of Hi-C, which is sensitive to coronal temperatures. In this work, we find similar signatures in the TR. Additionally, the power maps of the spectral parameters also reveal the predominance of significant power in the 1-2 min period range. We study the phase difference between the Doppler velocities and total intensity.
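Schematically, the density estimation described in this subsection amounts to block-averaging the weak O IV spectra and then inverting the observed Si IV/O IV ratio against the theoretical ratio-density curve. The sketch below illustrates both steps with numpy; the ratio-curve values are hypothetical placeholders and would in practice have to be taken from Table 2 of Young et al. (2018).

```python
# Illustrative density estimate from the Si IV 1403 / O IV 1401 intensity
# ratio: interpolate the observed ratio onto a theoretical ratio-density
# curve. The curve values below are hypothetical, NOT the published ones.
import numpy as np

log_ne = np.array([9.0, 9.5, 10.0, 10.5, 11.0, 11.5])      # log10 n_e (cm^-3)
ratio_curve = np.array([2.0, 3.5, 6.0, 10.0, 17.0, 28.0])  # hypothetical ratios

def estimate_log_density(si_iv_intensity, o_iv_intensity):
    """Invert the observed Si IV / O IV ratio via interpolation."""
    observed = si_iv_intensity / o_iv_intensity
    return np.interp(observed, ratio_curve, log_ne)

def block_average(spectra, block=7):
    """7-pixel spatial block-averaging along the slit, as described above."""
    n = (spectra.shape[0] // block) * block
    blocks = spectra[:n].reshape(-1, block, *spectra.shape[1:])
    return np.repeat(blocks.mean(axis=1), block, axis=0)

rng = np.random.default_rng(3)
averaged = block_average(rng.random((28, 512)))   # e.g., 28 slit pixels, 512 wavelengths
print(f"log10 n_e ~ {estimate_log_density(120.0, 15.0):.2f}")
```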
Our study supports both wave- and reconnection-like scenarios being responsible for the 1-2 min periodicities in the moss regions. Studying the phase relationships, we can conclude that the periodicity of 1.26 min, with dominant phase shifts of ∼−T/2 (∼−π), is predominantly due to outflows resulting from the reconnection process. On the other hand, the periodicity of 0.72 min, with dominant phase shifts of ∼±3T/8 (∼±3π/4), can be regarded as the collective signature of small-scale flows and slow standing modes existing within transition region fine loops of lengths 1 to 2 Mm. Hence, qualitatively, we can conjecture that the high-frequency oscillations of ∼1 min observed in the bright moss regions are possibly due to a combination of slow magneto-acoustic waves and reconnection events. As explained in section 3.3, we cannot obtain any reliable results from the density variations; although we are able to estimate the average density of the moss regions, much better quality data are required to reliably study the density variations. The high-frequency oscillations in the moss regions can be due to compressive waves. The key to distinguishing between the different modes conclusively is to study the density variations, which is not possible with the present data because of the low data-counts in the O IV 1401Å emission line. New instruments, with better sensitivity at FUV wavelengths, especially in the density-sensitive lines, may provide new insights and will enable us to specifically detect the particular wave modes responsible for such oscillations.

AUTHOR CONTRIBUTIONS

VP identified the IRIS data. VP and DB planned the study. NN performed all the analysis and wrote the manuscript. VP, DB and TVD helped in analysing the results. All authors participated in the discussion.
6,629
2019-05-02T00:00:00.000
[ "Physics" ]
Room temperature self-assembled growth of vertically aligned columnar copper oxide nanocomposite thin films on unmatched substrates

In this work, we report the self-assembled growth of vertically aligned columnar Cu2O + Cu4O3 nanocomposite thin films on glass and silicon substrates by reactive sputtering at room temperature. Microstructure analyses show that each phase in the nanocomposite films grows in columns along the whole film thickness, while each column exhibits single-phase characteristics. The local epitaxial growth behavior of Cu2O is thought to be responsible for such an unusual microstructure. An intermediate oxygen flow rate, between those required to synthesize single-phase Cu2O and Cu4O3 films, produces some Cu2O nuclei, and the local epitaxial growth then provides a strong driving force that promotes the sequential growth of the Cu2O nuclei, giving rise to Cu2O columns along the whole thickness. A lower resistivity has been observed in this kind of nanocomposite thin film than in single-phase thin films, which may be due to the interface coupling between the Cu2O and Cu4O3 columns.

Binary copper oxides (Cu2O, Cu4O3 and CuO), as spontaneous p-type semiconductors, have been widely studied [23-28]. More recently, some surprising properties have been observed in biphase copper oxide composite thin films. For instance, a lower resistivity has been observed in biphase sputtered Cu2O + Cu4O3 thin films than in single-phase Cu2O or Cu4O3 [26]. In addition, biphase Cu2O and Cu4O3 thin films can significantly enhance the photovoltaic activity of a binary copper oxide (Cu-O) light absorber [27]. However, the origin of these peculiar properties remains unknown. In this work, we demonstrate the vertically aligned columnar microstructure of biphase Cu2O + Cu4O3 nanocomposite thin films grown by reactive magnetron sputtering at room temperature on unmatched glass or silicon substrates. Finally, the unusual electrical properties of the biphase thin films are discussed.

Results

The diffractograms of copper oxide thin films deposited with different oxygen flow rates are presented in Fig. 1(a). Two main diffraction peaks are always observed, at approximately 36° and 42°, across these oxygen flow rates. The first peak may be due to diffraction from Cu2O (111) planes or Cu4O3 (202) planes, and the peak located close to 42° may be related to Cu2O (200) or Cu4O3 (220), as the d values of Cu2O and Cu4O3 are quite close for certain planes (see the supporting information). To obtain a more precise structural description of the films, micro-Raman spectrometry was used (Fig. 1(b)). The film deposited with 14 sccm oxygen shows a typical Raman spectrum of Cu2O, where the T2g peak is observed close to 520 cm−1. The bands at 93, 147 and 216 cm−1 are related to defects, non-stoichiometry and resonant excitation in Cu2O [29]. A new band close to 531 cm−1 is evidenced when the oxygen flow rate is 15 sccm, which has been assigned to the A1g mode of Cu4O3 [29,30].
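The near-coincidence of these reflections can be checked with Bragg's law. The sketch below computes the d-spacings and Cu Kα 2θ angles from nominal literature lattice parameters (cuprite Cu2O: a ≈ 4.27 Å; paramelaconite Cu4O3: a ≈ 5.84 Å, c ≈ 9.93 Å); treat these constants as approximate rather than the values used in this work.

```python
# d-spacings and Cu K-alpha 2-theta angles for the competing Cu2O / Cu4O3
# reflections, from approximate literature lattice parameters.
import math

LAMBDA = 1.5406  # Cu K-alpha wavelength (Angstrom)

def d_cubic(a, h, k, l):
    return a / math.sqrt(h**2 + k**2 + l**2)

def d_tetragonal(a, c, h, k, l):
    return 1.0 / math.sqrt((h**2 + k**2) / a**2 + l**2 / c**2)

def two_theta(d):
    return 2 * math.degrees(math.asin(LAMBDA / (2 * d)))

reflections = {
    "Cu2O (111)":  d_cubic(4.27, 1, 1, 1),
    "Cu4O3 (202)": d_tetragonal(5.84, 9.93, 2, 0, 2),
    "Cu2O (200)":  d_cubic(4.27, 2, 0, 0),
    "Cu4O3 (220)": d_tetragonal(5.84, 9.93, 2, 2, 0),
}
for name, d in reflections.items():
    print(f"{name}: d = {d:.3f} A, 2-theta = {two_theta(d):.1f} deg")
```

Running this places the Cu2O (111) and Cu4O3 (202) reflections within roughly a degree of each other near 36°, which is why the XRD peaks alone cannot separate the two phases.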
Its intensity increases with increasing oxygen flow rate, while the other bands related to Cu2O decrease progressively. Hence, these Raman spectra clearly evidence that the films deposited with 15-18 sccm of oxygen are biphase composite Cu2O + Cu4O3 thin films, and that the fraction of Cu4O3 can be controlled by adjusting the oxygen flow rate. To study the microstructure of the biphase thin films, transmission electron microscopy (TEM) analyses were first carried out in cross-section. Cross-sectional TEM images of the biphase Cu4O3 + Cu2O thin film deposited with 17 sccm O2 are shown in Fig. 2. An electron diffraction pattern from a large area is presented in Fig. 2(a); it can hardly distinguish the Cu2O and Cu4O3 phases, since their d values are close to each other (see the supporting information). Surprisingly, the dark and bright field images in Fig. 2(b,c) show notable columnar growth for this biphase film, with the columns extending from the film/substrate interface to the top of the film, which is unusual in sputtered composite thin films. Such a microstructure is quite similar to that in single-phase Cu2O or Cu4O3 thin films [29]. However, the column width of about 20-40 nm near the top of this biphase film is much smaller than that of 30-70 nm in single-phase Cu4O3 thin films [29], indicating the existence of competitive growth in this biphase thin film. Unfortunately, it is difficult to identify the Cu2O and Cu4O3 phases from the dark field image by choosing the corresponding diffraction spots, as the d values of the main diffraction spots are too close (see Fig. 2(a)). Furthermore, the microstructure of the initial growth region (close to the substrate) has been studied by high resolution TEM (HRTEM), as shown in Fig. 3. Even in the initial growth region, the biphase film still has a columnar microstructure, with a column width of about 10 nm. Fast Fourier transform (FFT) analyses along the column growth direction have been performed. Figures 3(b-d) show the FFT patterns of the square regions labelled 1, 2 and 3 in Fig. 3(a), respectively. It is clearly seen that d values of about 2.1 Å are always observed along the column growth direction, as shown in Fig. 3(b-d). This d value of 2.1 Å could come from Cu2O (200) or Cu4O3 (220), as the information in these patterns is not sufficient to determine the phase structures. It should be pointed out that the poor FFT patterns in Fig. 3(b-d) are typical of polycrystalline thin films, originating from the characteristics of small column width and fiber texture. The thickness of the TEM foil is estimated to be about 50-70 nm by low-loss electron energy loss spectroscopy (EELS), much larger than the column width near the substrate, which indicates that there are several columns along the thickness direction of the TEM foil. Besides, the fiber texture observed in pure-phase Cu2O and Cu4O3 thin films may also exist in this biphase thin film. Hence, several columns with some rotational degree of freedom around the fiber axis will result in poor diffraction spots. Whether this diffraction spot belongs to Cu2O or Cu4O3, these analyses indicate that the columnar microstructure in the biphase thin film is formed at the beginning of the growth process, and that the columns keep almost the same growth orientation along the whole film thickness. To capture the microstructure of the biphase thin film unambiguously, TEM investigations have also been performed on foils prepared parallel to the film surface, i.e.,
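The FFT-based d-spacing measurement works by locating the strongest non-central peak in the Fourier transform of a lattice-fringe patch; the reciprocal of its spatial frequency is d. A minimal sketch on a synthetic fringe image follows, with an assumed pixel calibration standing in for the microscope's actual one.

```python
# Measuring a lattice d-spacing from the 2D FFT of a fringe image, as done
# for the HRTEM regions in Fig. 3. The fringe pattern and pixel size below
# are synthetic stand-ins for the experimental data.
import numpy as np

px = 0.2                                       # assumed pixel size (Angstrom/pixel)
n = 256
y, x = np.mgrid[0:n, 0:n]
image = np.cos(2 * np.pi * (x * px) / 2.1)     # fringes with d = 2.1 A spacing

fft = np.fft.fftshift(np.fft.fft2(image - image.mean()))
power = np.abs(fft) ** 2
power[n // 2, n // 2] = 0                      # suppress the residual DC term

iy, ix = np.unravel_index(power.argmax(), power.shape)
freq = np.hypot(ix - n // 2, iy - n // 2) / (n * px)   # cycles per Angstrom
print(f"measured d ~ {1 / freq:.2f} A")
```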
To capture the microstructure of the biphase thin film unambiguously, TEM investigations have also been performed on foils prepared parallel to the film surface, i.e., from the top view of the specimen. Electron diffraction patterns have been recorded from many grains, and typical patterns are shown in Fig. 4. Figure 4(a) is the bright field image and Fig. 4(b) is the corresponding dark field image, in which the estimated grain size of about 20-40 nm is consistent with the column width in the cross-sectional micrographs. In Fig. 4(a,b), grains referred to as #1 and #2 have been marked. The micro-diffraction pattern of grain #1 (see Fig. 4(c)) exhibits the single-crystal diffraction characteristic of Cu4O3, clearly demonstrating that this grain is single-phase Cu4O3. The diffraction pattern of grain #2 is displayed in Fig. 4(d), which shows the characteristic of Cu2O, as the main diffraction spots can only be indexed by the cubic crystal structure, rather than the tetragonal one. As shown in Fig. 4(d), a faint vestige of a diffraction ring is also observed, which could be due to the small grain size. Convergent beam electron diffraction (CBED) was then performed using another microscope (Philips CM200). The CBED pattern clearly reveals the single-phase nature of a Cu2O grain, as the pattern shows notable single-crystal character (see the supporting information). The CBED pattern of Cu4O3 likewise confirms the pure phase of every grain (see the supporting information). Furthermore, the single-phase character of different grains has also been studied by HRTEM. As shown in Fig. 5(a), two grains labeled #5 and #6 were chosen for FFT analyses. The FFT pattern of #5 (see Fig. 5(c)) demonstrates this grain to be single-phase cubic Cu2O, as the tetragonal structure does not exhibit the six-fold symmetry. Figure 5(d) is the FFT pattern of grain #6, which is well indexed as tetragonal Cu4O3, indicating its single-phase character. Thus, the HRTEM analyses also verify that both the Cu2O and Cu4O3 grains are pure phase. The above TEM micrographs from cross-section and top view indicate an unusual microstructure in the biphase Cu4O3 + Cu2O thin films, in which the two phases grow independently in columnar shape. It is worth noting that this kind of microstructure has been clearly evidenced in biphase Cu4O3 + Cu2O thin films at the different oxygen flow rates of 16, 17 and 18 sccm. Such a microstructure is significantly different from the traditional picture in which one phase is embedded into a second one that acts as a matrix. The schematic microstructure of this biphase thin film is depicted in Fig. 6; for simplicity, we show an ordered arrangement of phases. As shown in Fig. 6, the two phases grow separately and independently with a columnar microstructure along the whole film thickness. This kind of unusual growth can be understood from the viewpoint of the Cu2O local epitaxial growth (LEG) behavior previously reported [31]. In the reactively sputtered growth of Cu2O thin films, a Cu2O seed layer provides a strong driving force that promotes the subsequent growth with the same growth orientation, independently of the deposition conditions [31].
Therefore, in this biphase thin film, the growth process can be assumed to proceed as follows: (1) owing to an oxygen flow rate intermediate between those required to grow single-phase Cu2O and Cu4O3, some Cu2O nuclei are formed; (2) the strong driving force resulting from the local epitaxial growth induces the selective formation of Cu2O on the nuclei with the same structure; (3) the local decrease of the oxygen concentration induces a segregation of oxygen adatoms towards columns with higher oxygen concentration, which crystallize in the Cu4O3 structure. Consequently, the Cu4O3 and Cu2O phases grow independently as columnar structures. As previously reported, the oxygen flow rate allows tuning of the phase structure of copper oxide films [29]. Increasing the oxygen flow rate successively yields Cu2O, Cu4O3 and CuO deposits. Moreover, between these single phases, biphase Cu2O + Cu4O3 and Cu4O3 + CuO films can also be synthesized. The structure and microstructure of the Cu4O3 + CuO films have also been studied by XRD, Raman and TEM. Films deposited with 21 sccm O2 are X-ray amorphous (Fig. 7(a)), but Raman analyses clearly evidence the existence of the Cu4O3 A1g mode close to 531 cm−1 and the CuO Ag mode at about 288 cm−1 (Fig. 7(b)) [29,30]. Compared to the Cu2O + Cu4O3 biphase films, the Cu4O3 + CuO ones show a notably different microstructure. In the cross-sectional TEM images, columnar growth in the biphase Cu4O3 + CuO thin film is not evident (see Fig. 8). Moreover, the top-view electron diffraction patterns can hardly identify single-phase features of the grains. Hence, the vertically aligned columnar growth mechanism is not encountered in the biphase Cu4O3 + CuO film. This result can also be explained by taking the LEG effect into account. Indeed, the texture of CuO films is mainly governed by the oxygen partial pressure [29]. Thus, a local change of the oxygen concentration induces a change of the CuO preferred orientation, which comes with the nucleation of a new grain without structural relationship to the previous one. Consequently, there is no LEG behavior in this oxide. In the case of the Cu4O3 phase, the [101] orientation obtained at 0.5 Pa does not allow the LEG effect. Considering the occurrence of the LEG effect in Cu2O thin films, the vertically aligned columnar growth mechanism in biphase Cu2O + Cu4O3 films can be well described, whereas this growth mechanism is not encountered in the biphase Cu4O3 + CuO ones (no LEG effect in these two phases within the present growth conditions). Within this discussion, it is believed that the vertically aligned columnar growth observed in biphase Cu2O + Cu4O3 thin films can also be extended to other materials meeting the requirements summarized below:
• The system has to contain at least two stable or metastable phases,
• Each phase has to be deposited in crystalline form within the deposition conditions,
• The growth rate of each phase has to be similar. Within the Cu-O system, the growth rate of Cu2O is close to that of Cu4O3, while that of CuO is relatively low (poisoning effect of the target) [29,32],
• At least one phase should grow independently with a local epitaxial growth mechanism,
• The chemical compositions of the phases must be close, in order to allow the segregation of one adsorbed element on the growing surface.
The room temperature resistivity of the copper oxide thin films as a function of oxygen flow rate is depicted in Fig. 9,
which clearly reveals that the biphase thin film has a lower resistivity than the single-phase films. This result is in agreement with that reported by Meyer et al. [26]. Since these thin films are deposited at room temperature and the mobility is extremely low, it is difficult to determine the carrier concentration by Hall effect measurements. For the single-phase Cu2O or Cu4O3 thin films, the room temperature resistivity decreases with increasing oxygen flow rate (see Fig. 9), which can be qualitatively understood from the defect mechanism. Taking Cu2O as an example, the copper vacancy (V′Cu) is the predominant defect producing hole carriers, and its formation energy decreases under oxygen-rich conditions (higher oxygen flow rate) [33-35]. The lower resistivity of single-phase Cu2O thin films at higher oxygen flow rate can then be interpreted through the larger carrier concentration resulting from the reduced copper vacancy formation energy. In the case of the biphase Cu2O + Cu4O3 thin film, the oxygen flow rate is higher than that required to synthesize single-phase Cu2O, so the Cu2O columns may have a higher carrier concentration. In contrast, the Cu4O3 columns may have a lower carrier concentration owing to their oxygen sub-stoichiometry. Consequently, columns with high and low carrier concentrations are arranged randomly, and their interface coupling may play a role in establishing the lower resistivity. Further investigations are required to clarify this unusual phenomenon.
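A quick plausibility check on this interpretation: if the biphase film were just a classical two-phase mixture, its resistivity would have to lie between the series and parallel (Wiener) bounds, and the parallel bound can never undercut the more conductive phase. The sketch below uses assumed, illustrative resistivity values, not the measured ones in Fig. 9.

```python
# Illustrative (assumed) single-phase resistivities in ohm*cm; substitute the
# measured values from Fig. 9 to reproduce the actual comparison.
RHO_CU2O = 1.0e3
RHO_CU4O3 = 5.0e2

def wiener_bounds(rho1, rho2, f1):
    """Classical series (upper) and parallel (lower) bounds for a two-phase
    mixture; f1 is the volume fraction of phase 1."""
    f2 = 1.0 - f1
    series = f1 * rho1 + f2 * rho2
    parallel = 1.0 / (f1 / rho1 + f2 / rho2)
    return parallel, series

for f in (0.25, 0.50, 0.75):
    lo, hi = wiener_bounds(RHO_CU2O, RHO_CU4O3, f)
    print(f"f(Cu2O) = {f:.2f}: {lo:.0f} <= rho_eff <= {hi:.0f} ohm*cm")
# The parallel bound never drops below min(rho1, rho2): classical mixing alone
# cannot explain a biphase resistivity lower than both single-phase films.
```

A measured biphase resistivity below both single-phase values therefore cannot come from simple volume mixing, which is why an interface contribution is invoked.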
Conclusions
An unusual microstructure has been observed in biphase Cu2O + Cu4O3 nanocomposite thin films grown on glass and silicon substrates by reactive sputtering at room temperature, in which the two phases grow separately and independently with a vertically aligned columnar microstructure along the whole film thickness. Such a microstructure may relate to the local epitaxial growth of Cu2O. An oxygen flow rate intermediate between those required to grow pure-phase Cu2O and Cu4O3 thin films produces some Cu2O nuclei, and the strong driving force resulting from the local epitaxial growth then induces the selective formation of Cu2O on the nuclei with the same structure, giving rise to this unusual vertically aligned columnar microstructure on unmatched substrates. Such a peculiar microstructure could also be extended to other materials meeting certain requirements. The vertically aligned columnar Cu2O + Cu4O3 nanocomposite thin film exhibits a much lower resistivity than the single-phase thin films, which may be due to strong interface coupling between the Cu2O and Cu4O3 columns.

Methods
Film growth. Copper oxide thin films were deposited on glass substrates (microscopy slides) and (100) silicon single-crystal substrates by reactive pulsed-DC magnetron sputtering in Ar-O2 reactive mixtures. The amorphous SiO2 layer on the silicon substrate was not removed, so the silicon and glass substrates presented the same amorphous surface; the substrates therefore had no effect on the growth orientation and phase structure of the thin films. No intentional heating was applied to the substrates, and the deposition temperature was close to room temperature. The argon flow rate was fixed at 25 sccm, while the oxygen flow rate was varied in the range of 12-21 sccm with a step of 1 sccm. The accuracy of the gas flow controllers (Air Liquide) is ±0.1 sccm. A pulsed-DC supply (Advanced Energy Pinnacle+) was used to sputter the copper target (50 mm in diameter and 3 mm thick, with a purity of 99.99%). The current applied to the target was fixed at 0.3 A; the frequency and the off-time were 50 kHz and 4 µs, respectively. The distance between the substrate and the target was fixed at 60 mm.
Characterizations. X-ray diffraction (XRD, Bruker D8 Advance with Cu Kα1 radiation (λ = 0.15406 nm) in the Bragg-Brentano configuration) and micro-Raman spectrometry (Horiba LabRAM HR using a 532 nm laser) were employed together to identify the phase structures. Transmission electron microscopy (TEM) investigations were performed with a JEOL ARM 200 cold-FEG microscope (point resolution 0.19 nm) fitted with a GIF Quantum ER. For this purpose, TEM cross-section and top-view specimens of the composite thin films deposited on silicon substrates were prepared in a focused ion beam (FIB)-scanning electron microscope (SEM) dual-beam system (FEI Helios 600) using the in situ lift-out technique. Final thinning was done with low-voltage milling (5 kV) to reduce possible preparation artifacts. The convergent beam electron diffraction (CBED) analyses were done with another TEM (Philips CM200). In addition, top-view microstructures were also studied on TEM specimens prepared by diamond-tip cleaving. Electrical resistivity measurements were performed at room temperature using the four-point probe method.
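For a thin film measured with a collinear four-point probe, the resistivity is conventionally obtained from the measured V/I via the standard thin-film correction; assuming the usual geometry (film thickness t much smaller than the probe spacing, sample laterally large), this reads

$$\rho = \frac{\pi}{\ln 2}\, t\, \frac{V}{I} \approx 4.532\, t\, \frac{V}{I}.$$

The sheet resistance is $R_s = \rho/t$; whether this exact correction factor was applied in this work is an assumption.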
Unsupervised Method to Localize Masses in Mammograms

Breast cancer is one of the most common and prevalent types of cancer among women. The chances of effective treatment increase with early diagnosis, and mammography is considered one of the most effective and proven techniques for the early diagnosis of breast cancer. However, the tissues around masses look very similar in a mammogram, making masses nearly indistinguishable from the surrounding parenchyma and automatic detection a very challenging task. In this paper, we present an efficient and automated approach to segment masses in mammograms. The proposed method uses hierarchical clustering to isolate the salient area, and then features are extracted to reject false detections. We applied our method to two popular publicly available datasets (mini-MIAS and DDSM): a total of 56 images from the mini-MIAS database and 76 images from DDSM were randomly selected. Results are reported in terms of ROC (Receiver Operating Characteristic) curves and compared with other techniques. Experimental results demonstrate the efficiency and advantages of the proposed system in automatic mass identification in mammograms.

INTRODUCTION
Breast cancer is the most common cause of cancer-related deaths among women worldwide. With more than 450,000 deaths each year, breast cancer accounts for about 14% of all female cancer deaths [11]. Recent statistics say that 1 out of 10 women is affected by breast cancer in her lifetime. According to GLOBOCAN 2012, 1.7 million women were diagnosed with breast cancer, and there were 6.3 million women alive who had been diagnosed with breast cancer in the previous five years [3]. Although the breast cancer rate is increasing in many parts of the world, the mortality rate is much higher in less developed countries because of the insufficient facilities available for diagnosis and treatment. There is therefore an urgent need for reliable and affordable approaches for the early diagnosis and treatment of breast cancer in less developed countries; these can have a significant impact on cancer treatment, faster recovery and reduced mortality. Mammography is considered the most effective technique, as it can detect 85-90% of all breast cancers [3]. A mass is an uncontrolled growth of tumor tissue; masses are classified into malignant and benign by their size, shape and other features. As described earlier, early diagnosis is key to effective treatment, so the job of the radiologist, who interprets mammograms for early diagnosis, becomes very important. A mammogram does not carry much information on the film, so cancer diagnosis in this scenario becomes subjective, and a radiologist's opinion depends on their experience. [22] states that the inter-observer variation rate in radiologists' diagnoses is 65-75%.
A radiologist can miss a significant proportion of abnormalities, and in addition a large number of masses turn out to be benign after biopsy [22]. [12] states that computer-aided diagnosis (CAD) systems are helpful to radiologists in diagnosis, and [26] claims that detection accuracy improves when expert knowledge is combined with a CAD scheme. We propose an algorithm to address the problem described above for breast cancer diagnosis. The proposed scheme is novel in the following ways:
• The scope of the detection algorithm is wide. It can detect different types of cancers in the malignant and benign categories, and the algorithm was also tested on many ill-defined masses.
• A method is proposed to identify masses irrespective of their size and shape.
• We propose an efficient and unsupervised approach to detect masses in mammogram images. It segments the breast region and finds candidate regions of interest (ROIs).
• The generalization of the algorithm is tested by cross-validation experiments across two different datasets.
The organization of the paper is as follows. Section I presents the introduction and significance of the work. Section II discusses previous and related work. Section III briefly describes the proposed method. Section IV analyses the results, and finally Section V concludes the article.

RELATED WORK
In order to develop computer-aided breast cancer detection tools, researchers have used several approaches. [10] proposes a Particle Swarm Optimized Wavelet Neural Network (PSOWNN)-based classification approach for the detection of masses in digital mammograms. Their method is based on extracting Laws texture energy measures from the mammograms and classifying the suspicious regions with the PSOWNN. Their method has no noise removal step, and they do not propose any intelligent method for ROI detection. In [25] and [24], the authors used Latent Dirichlet Allocation (LDA) to mine the feature set of mammogram images. They presented a modified Morphological Component Analysis method to identify the mass region and then extracted morphological features; finally, LDA is used to classify the masses. Simple morphological approaches are sensitive to noise, and they also did not present any preprocessing for the collection of ROIs. In [21], the authors proposed a modified fuzzy c-means clustering to cluster the masses, extracted morphological, textural and spatial features, and classified the features using an SVM (Support Vector Machine). Their method lacks noise removal and intelligent ROI segmentation. [16] presented a set of tools to aid segmentation and detection in mammograms that contain a mass. After a top-hat morphological operator, de-noising is applied; the image gray level is enhanced by a wavelet transform and a Wiener filter. Finally, a segmentation method is employed using multiple thresholding, the wavelet transform and a genetic algorithm. They used a manual process to reduce the false positives generated by the genetic algorithm, and they did not perform automatic classification of the ROIs. [1] proposed a method for mass detection based on a saliency map. After the creation of the saliency map, a threshold is used to obtain the ROI; a number of features are then extracted and classified by an SVM. Automated detection of malignant masses in screening mammography has been discussed in [19], which developed a technique based on the presence of concentric layers surrounding a focal area with suspicious morphological characteristics and low relative incidence in the breast region.
The segmentation process in both of the algorithms described above focuses on the bright or salient parts of the image, which is easily misled by blood vessels, resulting in the whole breast parenchyma being selected as an ROI. The work in [13] is based on applying a one-dimensional recursive median filter at different angles to each pixel. Detection becomes difficult when the structure of a mass and normal glandular tissue look similar; such a mass can only be detected if there is asymmetry between the left and right breasts. The method proposed in [14] is based on the analysis of iso-intensity contour groups to segment suspicious masses. False positives are then removed using features based on flow orientation in adaptive ribbons of pixels across the margins of masses. The procedure was tested on 56 images from the mini-MIAS database and achieved a sensitivity of 81% at 2.2 false positives per image. Furthermore, based on gray-level co-occurrence matrices (GLCM) and using a logistic regression method on five texture features, masses were classified as benign or malignant; an accuracy of 0.79 was achieved in this classification, with 19 benign and 13 malignant lesions. The authors used hard thresholds to get the contours of objects in the image. Contours are very sensitive to noise, resulting in an increase in false positives and poor segmentation, and the algorithm will fail to detect a mass if its boundary is ill-defined or the mammogram is very dense. [5] proposed a method for the diagnosis of breast lesions: the wavelet transform is used to obtain a multi-resolution representation of the original image, and at each resolution a set of features is extracted which serves as input to a binary tree classifier. The algorithm achieved 91.9% true positive detection accuracy, but the ROIs were manually cropped in the proposed system. Their system is based on wavelet and curvelet coefficients, which are very numerous; selecting the best coefficients is an optimization problem that is also very sensitive to noise. [27] proposed a method that combines several artificial intelligence techniques with the discrete wavelet transform (DWT). ROIs are determined through dimensional analysis using a multi-resolution Markov random field algorithm; segmentation is then performed, followed by the application of a tree-type classification strategy. The algorithm was tested on the mini-MIAS database and has a sensitivity of 97.3% with 3.9 false positives per image. The method works well with well-defined masses, but ill-defined masses are difficult for it to classify.

METHODOLOGY
Female breast parenchyma is a complex biological structure composed of glandular, fatty and lymphatic tissues (lymphovascular structures). Mammography imprints the texture information of breast tissue in the image. Though the constituent components may be complicated, mass regions are characterized by high intensity and high texture. Figure 1 shows the process of a typical analysis system. We propose an efficient and unsupervised approach to identify the suspicious regions in mammogram images. The proposed algorithm isolates the spatially interconnected structures in the image, which are concentrated around salient intensities. As a result, it is possible to extract high-level information for further analysis, to characterize the physical properties of mass regions and to prepare a short-list of suspicious ROIs. Figure 2 shows our proposed algorithm.
Further details of the algorithm are given in the following subsections.

Image Standardization
Data from different sources should be converted to one format. The proposed algorithm was tested on two datasets: the Digital Database for Screening Mammography (DDSM) and mini-MIAS.

ROI Detection Phase
One of the main tasks is to obtain mass-candidate regions. The following subsections describe how these regions are obtained.

Smoothing
It is assumed that malignant masses typically distort the surrounding tissues, so the segmentation process can over-segment the image and fail to capture such masses as a single entity. To overcome this problem, prior smoothing of the image is necessary. In the present work, a Gaussian pyramid is used to uniformly highlight the salient regions. Subsampling over many levels over-smooths the image, turning image regions into blobs; however, some researchers [17] have performed mass detection at reduced resolutions of 800 µm. Mass regions are hyper-dense, and we need the full mass area to extract meaningful features from the ROI. Abrupt changes in the intensity of objects in the image affect the segmentation process; such peaks are smoothed by the preprocessing described above.

Hierarchical Clustering with GLCM (Gray-Level Co-occurrence Matrix) Data
We applied hierarchical clustering with GLCM data to segment the salient regions of the image. Before segmentation, the image contrast was enhanced by CLAHE (Contrast Limited Adaptive Histogram Equalization). We then calculate the gray-level co-occurrence matrix from the image. The GLCM is created with distance one and 4 directions [0 1; -1 1; -1 0; -1 -1] (0°, 45°, 90°, 135°); other angles were not computed due to the redundancy of the data. The GLCM data from all directions are summed and normalized. Figure 4 illustrates the co-occurrence matrix. Intensities in a mass exhibit a glowing effect (intensities propagate from the center of the mass). Hierarchical clustering can cluster the image according to these propagated intensities while maintaining a family structure of concentric objects. At each hierarchical level, a measure of dissimilarity is defined to differentiate clusters, and objects are merged into one if their dissimilarity is less than or equal to the acceptable dissimilarity measure. Many researchers have proposed methods for multilevel thresholding by discriminant analysis ([15], [18] and [2]). They threshold the image by cluster analysis irrespective of the physical location of the clusters. This idea works well if the image is multi-modal and is divided into two clusters (background and foreground); however, it does not give fine results on low-contrast X-ray images, which are mostly unimodal. In this case, multi-thresholding does not give compact objects for the ROI. We incorporated discriminant analysis [2] with GLCM data to obtain compact objects. The proposed method clusters the image intensities in a hierarchy according to their co-occurrence and similarity measure; the number of thresholds is found by cutting the dendrogram at the desired level. Initially, each gray level is assigned to a different cluster, i.e., g gray levels in the image generate g initial clusters, and each cluster has its own threshold Ti. The family hierarchy of the clustering process can be viewed as a dendrogram, and the estimated thresholds for segmenting the image are obtained by cutting a branch of the dendrogram. The clustering algorithm is defined in Algorithm 1.
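A minimal sketch of the GLCM construction described above (distance one, the four stated offsets, summed and normalized). The 256-level quantization and the function names are illustrative assumptions; CLAHE enhancement is assumed to have been applied beforehand.

```python
import numpy as np

# Offsets (row, col) for 0, 45, 90 and 135 degrees at distance 1,
# matching [0 1; -1 1; -1 0; -1 -1] in the text.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]

def glcm(image, levels=256):
    """Gray-level co-occurrence matrix summed over 4 directions, normalized.
    `image` is an integer array with values in [0, levels)."""
    cm = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = image.shape
    for dr, dc in OFFSETS:
        # Overlapping views of the image giving each (pixel, neighbour) pair.
        r0, r1 = max(0, -dr), min(rows, rows - dr)
        c0, c1 = max(0, -dc), min(cols, cols - dc)
        src = image[r0:r1, c0:c1]
        dst = image[r0 + dr:r1 + dr, c0 + dc:c1 + dc]
        np.add.at(cm, (src.ravel(), dst.ravel()), 1.0)
    return cm / cm.sum()

# Example on a small synthetic patch:
patch = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.intp)
CM = glcm(patch)
print(CM.shape, CM.sum())  # (256, 256) 1.0
```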
The distance measure between two clusters in the proposed algorithm is defined as the ratio between the observed dispersion and the expected dispersion, where q is the total number of clusters, Pq is the probability density function of the image histogram, CPi,j is the normalized co-occurrence frequency of the cluster pair being merged, X̄ is the mean (expectation) of a cluster, and σ² is the variance of the pair of clusters being merged. Here l denotes a gray level in the image (values in [0, 255]), and CMs,t is the co-occurrence probability of gray levels s and t. The variance of the merged distribution is computed with respect to CX̄, the average mean of the cluster pair, calculated as the weighted average of the means of the two clusters being merged. We impose the restriction that only adjacent clusters are allowed to merge. The similarity measure is adapted from [15]; the pair with the minimum distance value is the best candidate to merge. The saliency of a region is measured by the nesting depth of the hierarchical clustering, which identifies nested objects. One statistical parameter, LevelParameter, is introduced to represent the levels in the hierarchical clustering; a LevelParameter value of 5 is used in this study. Figure 5a shows the objects found in a mammogram by the segmentation process.
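The numbered equations referenced in this section did not survive extraction; the standard forms implied by the surrounding definitions are reconstructed below. Treat these as a consistent reading of the prose rather than the paper's exact equations, and the merge distance in particular as schematic, since the exact weighting of the co-occurrence term cannot be recovered.

$$P(l)=\frac{h(l)}{\sum_{l'=0}^{255} h(l')},\qquad
\bar{X}_q=\frac{\sum_{l\in q} l\,P(l)}{\sum_{l\in q} P(l)},\qquad
\sigma^{2}=\frac{\sum_{l\in i\cup j}\big(l-\overline{CX}\big)^{2}\,P(l)}{\sum_{l\in i\cup j} P(l)},$$

$$\overline{CX}=\frac{m_i\bar{X}_i+m_j\bar{X}_j}{m_i+m_j},\qquad
m_q=\sum_{l\in q}P(l),\qquad
CP_{i,j}=\sum_{s\in i}\sum_{t\in j} CM_{s,t},$$

with the merge distance, schematically,

$$D(i,j)\;\propto\;\frac{\sigma^{2}\ \text{(observed dispersion of the merged pair)}}{CP_{i,j}\ \text{(expected affinity from co-occurrence)}}.$$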
Grouping and Elimination
The segmentation process described in the previous section results in a large number of segmented objects. We devised an algorithm to reduce the number of objects and extract only the relevant ones for analysis. The first step is grouping and elimination. As previously described, masses exhibit a glowing effect; therefore we first find the dense core portions and then move to the next threshold level to find objects which encircle the previously detected object. The idea of prestige from link analysis is used, and the nodal relations of the hierarchical clustering are also considered. Every candidate region is given a prestige score of 1; when a region is encircled by its immediate lower-density parent, it forwards its prestige score to the parent. The merge score accumulates the Euclidean distances between the higher-density and lower-density objects, and a lower-density object must cover at least 80% of the higher-density object. Algorithm 2 describes the computation of the merge score. This process is repeated for all segmented regions at every selected hierarchical level. Hierarchical clustering also gives a parent-child relationship between clusters, and we use this relationship to avoid unacceptable merging of objects. Objects with a prestige score of at least 3 at each level are up-sampled to the full-resolution image. The result of the merging process is shown in Figure 5, where 5a shows the detected ROIs and 5b shows the merged objects.

Features for False Positive (FP) Analysis
The following set of features is extracted to classify objects into true masses and breast tissue (false positives). These are well-established statistical features, and they were also vetted by a radiologist after analyzing the prominent patterns of masses on mammograms.
Region Contrast: Generally, a mass is imprinted on a mammogram as a dense object compared to its surroundings, having at least a uniform density. We use this property to discriminate between true masses and breast tissue. Region contrast is computed as the difference between the mean intensities of the foreground and background of an ROI, where the foreground is the selected mass or object and the background is the area surrounding it. Regions which yield negative values of region contrast are rejected from further processing.
Mean Gradient: The gradient monitors the directional change in intensity, and the gradient magnitude describes how quickly the image is changing. We calculate the mean gradient of the boundary pixels, which reinforces the compactness of the region (described later).
Entropy: The concept of entropy comes from information theory, where it characterizes the probabilistic behavior of an information source. This statistical measure of randomness is used to characterize the texture of the image.
Standard Deviation: A standard statistical measure of the spread of the data; it represents how close together the intensity values are in the given region of the image.
Compactness: Compactness measures how efficiently a contour encloses an area; it is defined as C = 4πA/P², where A is the area of the object enclosed by the perimeter P (with this normalization, a circle gives C = 1). Benign masses usually have a higher value of compactness, because a small perimeter encloses a larger area. We have used this feature in benign vs. malignant classification as well.

Classification Model
An SVM (Support Vector Machine) was used to classify the masses; we selected the SVM because it gives good results for binary classification. The basic idea behind the SVM is to separate the input data in an optimal way. As our data are not linearly separable, we used a Gaussian RBF (radial basis function) kernel. Sigma and C are the two important parameters of the RBF kernel; optimal values were grid-searched between 10^-3 and 10^3. The harmonic mean (HM) is calculated to compare the (C, sigma) pairs and is defined as HM = 2 · sens · spec / (sens + spec), where sens is the sensitivity and spec the specificity of the system. We adopted a 10-fold cross-validation technique to train, test and validate the data.
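A sketch of the described model selection: an RBF-kernel SVM with 10-fold cross-validation and a grid search over C and sigma scored by the harmonic mean above. The use of scikit-learn is an assumption, as is the mapping of the paper's sigma to scikit-learn's gamma via gamma = 1/(2·sigma²).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

def harmonic_mean_score(y_true, y_pred):
    """HM = 2*sens*spec/(sens+spec), as defined in the text."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return 2 * sens * spec / (sens + spec) if sens + spec else 0.0

def grid_search(X, y):
    grid = np.logspace(-3, 3, 7)            # 10^-3 ... 10^3, as in the text
    best = (-1.0, None)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for C in grid:
        for sigma in grid:
            gamma = 1.0 / (2.0 * sigma**2)  # sigma -> scikit-learn gamma
            scores = []
            for tr, te in cv.split(X, y):
                clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(X[tr], y[tr])
                scores.append(harmonic_mean_score(y[te], clf.predict(X[te])))
            hm = float(np.mean(scores))
            if hm > best[0]:
                best = (hm, (C, sigma))
    return best  # (best harmonic mean, (C, sigma))
```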
Image Database
This study was carried out on images from two databases. We selected 56 images from the mini-MIAS database [20], including 13 normal, 13 malignant and 30 benign cases. The dataset includes all types of masses from both classes (benign and malignant). Table 1 gives an overview of the number of cases used in the experiments from the mini-MIAS dataset. We also selected 76 cases from the DDSM database [8, 9]; Table 2 summarizes the DDSM cases.

Detection of ROIs
Our proposed preprocessing steps detected almost all masses in the dataset. Through careful examination of the ROIs, we found that our algorithm missed two cases in the mini-MIAS database, one malignant and one benign (mdb179 and mdb191; dense-glandular and fatty-glandular, respectively). The contrast in these two images was very high and widely distributed, making it difficult to detect isolated regions. All other masses were successfully detected, giving a detection accuracy of 95.3%. The detection accuracy on the DDSM dataset was 97.3%, where we missed 2 cases. Detected ROIs were carefully compared with the given ground-truth data.

Normal and Mass Differentiation
Our algorithm detected all the malignant masses on the mini-MIAS dataset except one (mdb186). However, success on the benign masses was less prominent: 30 cases were tested, but the algorithm failed to detect 6 masses. Three of these missed masses were fatty (mdb069, mdb080 and mdb195), two were dense-glandular (mdb193 and mdb290) and one was fatty-glandular (mdb190). The total accuracy of the system was 83.43%. Figure 7 shows an example ROI which is classified as a mass. We further investigated the missed cases and made the following observations. In the first missed case (mdb069), the margin and boundary have a wide transition zone; the lesion could be detected only by comparison with the opposite breast, and in clinical practice we describe it as an architectural distortion. In the case of mdb080, the tumor lesion is a subtle, ill-margined, non-mass-like parenchymal asymmetric pattern. In the case of mdb195, the malignant lesion is almost isodense with the normal fatty breast parenchyma, so detection is not feasible. In mdb186, we found that the mass has very poor contrast with respect to its surroundings and lacks a dense core region. Among the benign cases the algorithm was unable to classify, we observed that in the three fatty and one fatty-glandular case (mdb069, mdb080, mdb190 and mdb195) the masses were not clear: they have no central core region, and their contrast with respect to their surroundings is also poor. We are confident that adding a good contrast enhancement technique would improve the algorithm's performance on the cases described above as well. The remaining two dense-glandular cases (mdb193 and mdb290) do not follow the assumption made in this paper (they do not show the glowing effect), so the feature values in these cases were not good enough to classify them; successfully detecting masses in such cases may require additional methods or more features. In the present work, we did not reject any region because of its size, which generates a large number of false positives. Although our classification phase reduces the number of FPs, we aim to reduce them further with an improved algorithm in future work. We also believe that automatic breast density assessment before applying our method would improve the performance [11]. We validated the results by plotting the receiver operating characteristic (ROC) curve, which illustrates the performance of a binary classifier as its discrimination threshold is varied. Figure 6 shows the ROC curve of the classification between normal and mass data, obtained by varying the threshold on the probabilities given by the classifier (SVM); AUC refers to the area under the curve. Table 3 shows the classification results in terms of specificity and sensitivity. In the medical domain, sensitivity alone is not sufficient; an algorithm should also yield good specificity. As previously described, we used the harmonic mean to select the best pair of specificity and sensitivity. The algorithm missed 2 cases from the malignant category and 6 from the benign category of the DDSM dataset. The maximum sensitivity and specificity pair we achieved is 91.32% and 85.05%, respectively; the average sensitivity and specificity are 76.19% and 87.05%, respectively. We also tested the generality of our algorithm by training it on one dataset and testing on the other: the algorithm was trained on mini-MIAS and tested on DDSM, and vice versa. The results in Table 3 confirm our claim that the proposed algorithm is not limited to a few types of masses or abnormalities; it covers a wide spectrum of masses. The distribution of the dataset is uneven, which degrades the performance of the learning algorithm.
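The ROC curve described above can be reproduced by sweeping the threshold over the classifier's class probabilities; a minimal scikit-learn sketch follows (obtaining scores via probability=True is an assumption about how the probabilities were produced).

```python
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

def roc_from_svm(X_train, y_train, X_test, y_test, C=1.0, gamma="scale"):
    """Fit an RBF SVM and return (fpr, tpr, AUC) for the test set."""
    clf = SVC(kernel="rbf", C=C, gamma=gamma, probability=True)
    clf.fit(X_train, y_train)
    # Probability of the positive (mass) class for each test ROI.
    scores = clf.predict_proba(X_test)[:, 1]
    fpr, tpr, _ = roc_curve(y_test, scores)
    return fpr, tpr, auc(fpr, tpr)
```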
Investigation of the missed cases confirms the reasons described above. [19] stated the results of their mass detection phase: they achieved 84.4% detection accuracy. Their algorithm is based on image enhancement, after which a Gaussian Markov random field (MRF) is used for mass segmentation; they did not classify the ROIs into mass and non-mass regions. [10] reported a detection accuracy of 94.44% with a particle swarm optimization (PSO)-based detection technique. Our algorithm outperformed these previously reported detection accuracies.

Comparison with existing algorithms
The work presented in [7], [23], [5] and [4] can be considered the baseline for recent work in this domain. [4] implemented a fully automated system: they extracted local binary pattern (LBP) features, performed classification with an SVM, and also proposed a feature selection technique. [4] reported performance in terms of sensitivity, with 75.86% reported for overall CAD performance on the MIAS database. [6] reported results on 305 pre-selected ROIs and achieved a sensitivity of 76.53%; they extracted features from gray-level co-occurrence matrices (GLCM) and then classified them into mass and non-mass regions. [5] proposed curvelet transformation, feature selection and classification by SVM. They manually cropped the ROIs and then applied their algorithm; their reported accuracy is higher than 90%, but their pipeline is not fully automated, as it lacks a mass detection phase. All of these methods were tested on separate datasets; cross-validation between the datasets was never performed.

CONCLUSION
This paper proposes a new mass detection method for mammogram images. The proposed method is fully automated: it finds candidate regions by segmenting the salient regions in the mammogram and then extracts features to differentiate between breast tissue and masses. Promising results are obtained in mass identification and in normal vs. mass tissue classification, and the classification results confirm that the segmentation process extracts enough information to find masses and localize them in the mammogram. Experiments were performed on the mini-MIAS and DDSM databases to show the usefulness and generalization of the proposed algorithm. Correlating the full image set (CC and MLO views), which could also help to identify architecturally distorted mammograms, is considered future work.
Pathogenic Mitochondria DNA Mutations: Current Detection Tools and Interventions

Mitochondria are best known for their role in energy production, and they are the only mammalian organelles that contain their own genomes. The mitochondrial genome mutation rate is reported to be 10-17 times higher than that of nuclear genomes, as a result of oxidative damage caused by reactive oxygen species during oxidative phosphorylation. Pathogenic mitochondrial DNA mutations result in mitochondrial DNA disorders, which are among the most common inherited human diseases. Interventions for mitochondrial DNA disorders involve either the transfer of viable isolated mitochondria to recipient cells or genetic modification of the mitochondrial genome to improve therapeutic outcome. This review outlines the common mitochondrial DNA disorders and the key advances of the past decade necessary to improve current knowledge on mitochondrial disease intervention. Although it is now 31 years since the first description of patients with pathogenic mitochondrial DNA mutations was reported, the treatment of mitochondrial disease is often inadequate and mostly palliative. Advancements in diagnostic technology have improved the molecular diagnosis of previously unresolved cases, and they provide new insight into the pathogenesis and genetic changes in mitochondrial DNA diseases.

Mitochondria as the Energy Source in Cells
Historically, mitochondria evolved from a bacterial ancestor of the α-proteobacteria and became endosymbionts living inside eukaryotes over one billion years ago [1]. Under normal physiological conditions, mitochondria produce most of the cell's adenosine triphosphate (ATP) through the oxidative phosphorylation system (OXPHOS). OXPHOS is composed of five protein complexes (complexes I-V), and mitochondrial DNA (mtDNA) encodes only 13 structural subunits of complexes I, III, IV and V, whilst complex II is completely nuclear encoded. During OXPHOS, mitochondria produce reactive oxygen species (ROS) known as mitochondrial ROS (mtROS), which are formed as a consequence of proton leak during respiration at the inner mitochondrial membrane. The mtROS formed increase the risk of mtDNA perturbation and impairment of ATP synthesis, and they contribute to overall mitochondrial dysfunction [2]. Mitochondria are dynamic, constantly fusing and dividing [3]. A major component of the cellular control of mitochondrial integrity is a specialized form of autophagy known as mitophagy.

The first population-based study of a single pathogenic mtDNA mutation was reported in Finland, with a prevalence of MELAS as high as 16.3 in 100,000 [21]. In England, two studies on mitochondrial disease were conducted in the early 2000s. The first reported the prevalence of patients with mitochondrial disease, or at risk of developing disease, in a northern England population as 12.78 in 100,000 [22], and the second reported the prevalence of LHON mutations within a northeast England population as 11.82 in 100,000 [23]. The prevalence of LHON was also reported in the Dutch and Finnish populations as 2.6 in 100,000 and 2.06 in 100,000, respectively [24,25]. A prevalence study in an Asian population reported a prevalence of MELAS of 0.2 in 100,000 [26].
Mitochondrial Encephalomyopathy, Lactic Acidosis, and Stroke-Like Episodes (MELAS)
MELAS is one of the most frequent maternally inherited mitochondrial disorders; it impairs mitochondrial translation and protein synthesis, which leaves mitochondria unable to meet the energy demand of various organs and eventually causes multi-organ dysfunction [27]. MELAS is diagnosed and characterized by strokes with hemiparesis and hemianopsia. It usually affects individuals under 40 years of age and has also been observed in patients during childhood. Studies have shown that more than 80% of patients with MELAS carry the m.3243A>G mutation in the mitochondrially encoded tRNA leucine 1 (MT-TL1) gene [28,29]. Other mutations identified in MELAS include m.3271T>C [30] and m.1642G>A in the mitochondrially encoded tRNA valine (MT-TV) gene [31], m.9957T>C in the protein-encoding mitochondrially encoded cytochrome C oxidase III (MT-CO3) gene [32], and several mitochondrially encoded reduced nicotinamide adenine dinucleotide (NADH) ubiquinone oxidoreductase chain 5 (MT-ND5) mutations (m.1277A>G, m.13045A>C, m.13513G>A, and m.13514A>G) [33-36]. These mutations lead to destabilization of the tRNA, which results in a reduction of OXPHOS proteins and insufficiency of complexes I, III and IV [28,37].

Leber Hereditary Optic Neuropathy (LHON)
LHON patients typically present with painless loss of central vision in one eye, followed by loss of vision in the second eye within weeks or months [44]. It predominantly affects males (80%), with disease onset between 15 and 30 years [45]. The primary cause of this disease is a mutation of the mtDNA, with a single amino-acid substitution in one of the mtDNA-encoded subunits of NADH ubiquinone oxidoreductase, the first complex of the electron transport chain. The majority of LHON cases are caused by single-nucleotide point mutations of mtDNA located in the NADH dehydrogenase subunit 1 (ND1) (G3460A), ND4 (G11778A), or ND6 (T14484C) genes, which result in a dysregulated complex I of OXPHOS [46].

Leigh Syndrome
Leigh syndrome, a highly heterogeneous disorder and the most common pediatric mitochondrial disease, is characterized by progressive neurodegeneration and is caused by mutations in almost 80 different genes [18]. It is a rare inherited subacute necrotizing encephalomyelopathy that affects the central nervous system, and the onset of symptoms is typically seen between the ages of three and 12 months, often following a viral infection. MtDNA-associated Leigh syndrome is often seen in the neonatal phase.
Several mutations of nuclear genes result in "classic" Leigh syndrome or Leigh-like syndrome [47]. These affect assembly factors or subunits of the mitochondrial respiratory chain; mtDNA replication, transcription, and translation; proteins involved in other mitochondrial processes, such as pyruvate metabolism, coenzyme Q10 biosynthesis, and the oxidation of fatty acids; and non-mitochondrial processes that affect mitochondrial function, such as thiamine metabolism.

Other mtDNA Diseases
MtDNA deletion syndromes are caused by a single large-scale deletion in the mtDNA genome; they include diseases such as Pearson syndrome, Kearns-Sayre syndrome (KSS), and chronic progressive external ophthalmoplegia (CPEO) [48]. Pearson syndrome is a very rare syndrome characterized by bone marrow failure, severe transfusion-dependent sideroblastic anemia, and variable exocrine pancreatic insufficiency [49]. Death may occur in early infancy, or survival after recovery from bone marrow dysfunction is possible, with a transition to the clinical manifestations of KSS. KSS is a rare neuromuscular disorder known as a mitochondrial encephalomyopathy, with a prevalence of 1-3 cases in 100,000 [50], presenting before the age of 20 years. One of its features is PEO, a disorder that affects children before the age of 10 with limited eye movements, bilateral ptosis, and orbicularis weakness. PEO is generally associated with single mtDNA deletions and is considered the least lethal of the three syndromes [51]. MtDNA dysfunction in diabetes is known as maternally inherited diabetes and deafness (MIDD), first reported in 1992. A single point mutation in the mtDNA affects the activities of complexes I and IV in the respiratory chain, which results in cellular energy deficiency in metabolically active organs such as the pancreas and cochlea [52]. Other mtDNA diseases with low prevalence include neurogenic muscle weakness, ataxia, and retinitis pigmentosa (NARP).

Detection of Mutations in mtDNA
Methods for detecting mutations in mtDNA do not differ much from those used to determine the primary sequence of any DNA. The first generation of DNA sequencing involved fragmentation and detection of radiolabeled DNA, and these protocols quickly improved over the years with parallel sequencing techniques [53]. In the past decade, the genetic approach widely used to confirm mtDNA disorders has been next-generation sequencing (NGS) [54]. NGS utilizes single-stranded DNA fragments, which results in a high background error frequency. A more sensitive approach, duplex sequencing (DS), allows >10,000-fold greater accuracy than conventional NGS [55]. DS sequences both strands, and it scores a mutation only when it is present as complementary substitutions in both strands of a single DNA molecule. The first report on DS was a study investigating the mutational variation of the whole mitochondrial genome between non-stem cells and stem cells in human breast tissues. The study reported that the mitochondrial genome of stem cells has a lower mutation burden than non-stem cells, and that the majority of mutational variations occurred randomly [56]. Recently, rolling circle amplification (RCA) was used to amplify mtDNA in a heritability analysis study of 35 individuals [57].
RCA, an amplification method with low sequence dependence, was used to amplify the circular mitochondrial genome in a single reaction without the use of primers or temperature regulation, and this sequencing strategy was described as mitochondrial DNA analysis by rolling circle amplification and sequencing (MitoRS). This method was concluded to be a robust, accurate, and sensitive analysis, suitable for large samples [57].
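To make the duplex-sequencing principle stated above concrete, the toy sketch below accepts a variant only when it appears as complementary substitutions on the two strands of the same original molecule (reads grouped by a shared molecular tag). It is a conceptual illustration, not the published DS pipeline; all names are hypothetical.

```python
from collections import defaultdict

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def duplex_calls(reads):
    """reads: iterable of (molecule_tag, strand, position, base_call).

    strand is '+' or '-'; '-' calls are reported in '+' coordinates but as
    the base physically read, so a true mutation appears as complementary
    substitutions on the two strands of one molecule."""
    by_molecule = defaultdict(lambda: {"+": {}, "-": {}})
    for tag, strand, pos, base in reads:
        by_molecule[tag][strand][pos] = base

    confirmed = []
    for tag, strands in by_molecule.items():
        for pos, base in strands["+"].items():
            mate = strands["-"].get(pos)
            if mate is not None and mate == COMPLEMENT[base]:
                confirmed.append((tag, pos, base))  # seen on both strands
    return confirmed

# A PCR/sequencing error present on one strand only is rejected:
reads = [("m1", "+", 100, "A"), ("m1", "-", 100, "T"),   # true variant
         ("m2", "+", 250, "G")]                          # single-strand error
print(duplex_calls(reads))  # [('m1', 100, 'A')]
```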
MtDNA Intervention
This review outlines the current advances in in vitro mitochondrial manipulation, which allow researchers to understand the pathogenic implications and therapeutic potential of mtDNA mutations. The strategies involve either diluting out the detrimental mtDNA by transferring healthy mitochondria or targeting the specific mtDNA sequences that cause mitochondrial disease (Figure 2). Mitochondria transfer technologies focus on introducing exogenous mitochondria into a recipient cell; in this technique, the mtDNA itself is not manipulated. MtDNA replacement technology is a more specific and targeted method that can generate non-native mtDNA sequences or repair the sequences of existing mtDNA to shift the mtDNA heteroplasmy ratio.

Artificial Mitochondria Transfer
The mitochondrion is an organelle of endosymbiotic origin with partial nuclear independence, allowing it to be exchanged between cells [58]. The transfer of exogenous mitochondria into an existing endogenous mitochondrial network can alter the function and bioenergetic profile of the recipient cells. Successful intercellular mitochondrial transfer relies on communication between cells, which includes membrane nanotubes and other cytoplasmic bridges, exosomes, and mitochondrial fusion-fission mechanisms [59]. Exogenous mtDNA molecules are either incorporated into mammalian cells directly or loaded into empty mitochondria that are then introduced into mammalian cells via endocytosis (Patent No: EP3067416A1). Viable isolated mitochondria can be internalized by simple co-incubation, which involves macropinocytosis; however, the internalization lasts only a short time [60]. Another quick and simple method for transferring viable mitochondria into target cells, suitable for all cell types, is centrifugation. A non-ionic surfactant, PF-68, is used to enhance cell permeability, which increases the number of mitochondria penetrating the cell membrane [61]. MitoCeption is a method for quantitatively transferring mitochondria from human mesenchymal stromal/stem cells (MSCs) to cancer cells. Various types of stromal and cancer cell communication have been documented, including cytokine-dependent signaling, metabolite exchange, and direct cell-cell contacts [62]. MitoCeption transfers mitochondria isolated from MSCs to cancer cells by simple coculture, after which the cancer cells contain both endogenous and exogenous mitochondria [63]. It was reported that this transfer resulted in an increase in the mtDNA concentration, OXPHOS activity, and ATP production of the cancer cells [58].

Genetic Transfer to the Mitochondria
The mitochondrial inner membrane, which selectively transports molecules into the mitochondrial matrix, is impermeable to hydrophilic molecules. Exogenous genes with functional protein expression are transported into mitochondria to compensate for mitochondrial dysfunction caused by mtDNA mutations. Labeled DNA oligonucleotides can be transferred into the mitochondrial matrix using peptide nucleic acids (PNAs) [64]. A PNA has a structure similar to DNA or RNA, but the sugar-phosphate backbone is replaced by a peptide backbone. Labeled DNA oligonucleotides are introduced into the mitochondrial matrix using PNAs conjugated to mitochondrial-targeting peptides as a vehicle, transported through the translocase of the outer membrane (TOM)/translocase of the inner membrane (TIM) import apparatus. Biolistic transformation utilizes a helium shockwave to deliver DNA on microscopic metal particles, and the DNA is incorporated into mtDNA via active homologous recombination [65]. To date, single-cell eukaryotes such as Saccharomyces cerevisiae [65] and Chlamydomonas reinhardtii [66] are the only species in which biolistic transformation has been used to deliver DNA into the organelle.

MtDNA Gene Editing
Gene editing in mitochondria is based on the concept of exploiting the inefficient double-strand break repair system in mitochondria: endonucleases are introduced to degrade the mutant mtDNA, and the pool is repopulated with wild-type mtDNA.
Pathogenic mtDNA mutations are generally heteroplasmic, with pathological features appearing when the ratio of mutated mtDNA exceeds a certain threshold. Recently, Gammage et al. improved on their previous work on zinc finger nucleases (ZFNs) to target and cleave predetermined loci in mtDNA [67]. They generated a mitochondrially targeted ZFN carrying two cleavage domains linked to the same protein, which selectively eliminates pathogenic mtDNA, as well as a region of mtDNA that is most frequently associated with disease and contains several transfer RNAs and structural genes of the OXPHOS apparatus [68]. Mitochondrially targeted transcription activator-like effector nucleases (mitoTALENs) are used to cleave specific sequences in mtDNA with the goal of eliminating mitochondria carrying pathogenic point mutations. This approach was recently explored for two clinically important mtDNA point mutations associated with mitochondrial disease, myoclonus epilepsy with ragged red fibers (MERRF) and MELAS/Leigh syndrome [69]. The mutation load was successfully reduced in vitro, as shown by the improvement of biochemical oxidative phosphorylation defects. The authors described the location of the mutations and the size of the mitoTALEN as the major challenges in translating the mitoTALEN approach into a clinical setting [69]. A mitoTALEN specific for the mtDNA region harboring the m.5024C>T mutation was cloned into an adeno-associated virus and phage (AAVP) vector to test its ability to regulate mtDNA heteroplasmy in vivo [70]. A mouse with a heteroplasmic mitochondrial tRNA-Ala gene mutation was the first mouse model generated with a heteroplasmic pathogenic mtDNA mutation. This mouse is associated with tRNA-Ala instability and a mild cardiac phenotype at old age [70]. It resembles human heteroplasmic mtDNA mutations in tRNA-Ala, characterized by cytochrome C oxidase (COX)-deficient fibers associated with myopathy and impairment of OXPHOS [71]. A significant decrease in the mutant/wild-type ratio in skeletal and cardiac muscle compared to non-targeted tissue was observed after systemic delivery of the AAV9-mitoTALEN [70].
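The heteroplasmy shift produced by a mutation-specific nuclease can be illustrated with a toy recurrence: each round, a fraction of mutant genomes is cleaved, and the surviving pool repopulates to the original copy number. Parameter values below are assumed for illustration and are not data from [70].

```python
def heteroplasmy_shift(m0, cleave_frac, rounds):
    """m0: initial mutant fraction; cleave_frac: fraction of mutant mtDNA
    cleaved per round. Wild-type copies are untouched, and copy number is
    restored by unbiased replication of the surviving pool, which preserves
    the post-cleavage ratio."""
    m = m0
    for _ in range(rounds):
        mutant = m * (1.0 - cleave_frac)   # mutant copies surviving cleavage
        total = mutant + (1.0 - m)         # wild-type fraction is unchanged
        m = mutant / total                 # renormalize to a fraction
    return m

# Starting above a hypothetical 60% pathogenic threshold:
for r in (0, 3, 6, 10):
    print(r, round(heteroplasmy_shift(0.80, 0.25, r), 3))
```

Even a modest per-round cleavage fraction drives the mutant load below the threshold after several rounds, which is the qualitative behavior reported for the mitoTALEN experiments above.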
The panel of US experts recommended that mitochondrial transfer only be used to generate male babies, to prevent transmission of the donor mitochondria to future generations [77]. The first successful clinical case of reduced transfer of maternal mutated mtDNA using MST, resulting in the birth of a healthy boy, was reported in 2017 [78]. The mother was an asymptomatic carrier of the 8993T>G mtDNA mutation in the MT-ATP6 gene, which is associated with Leigh syndrome [78]. The debate about the ethics of germline modification is inevitable. Manipulation techniques that involve egg donation remain controversial and pose a potential risk to the donor. Since the transmission of mtDNA disorders is complex and hard to predict, a safe and effective technique, with appropriate information and support, is pertinent for affected patients. Significant precautionary measures should be taken with techniques that alter genes; however, the goal of treatment, which is to alleviate the suffering of patients and their families, should also be considered.

Conclusions and Future Perspectives
MtDNA disorders are among the most common inherited human diseases, and their prevalence has been shown to be influenced by demographic and genetic factors [79]. The heterogeneity of the mitochondrial genome poses unmet challenges to researchers. The in vitro modeling of mtDNA diseases is not straightforward, as it is further complicated by the heteroplasmic or homoplasmic state of mitochondrial mutations. Moreover, the significant contribution of nuclear-encoded genes in modulating the effect of a mutation complicates the models even further. Despite these limitations, in vitro models are important in establishing the causes of mtDNA diseases. Initially, the specific mtDNA sequences involved in the disease are identified and analyzed. The detrimental mtDNA sequences are then either altered via gene editing or transferred into a different nuclear background, which allows investigators to explore the effect of the specific mtDNA sequences on mitochondrial function and cell metabolism. Recent genomics advancements have established the molecular diagnosis of suspected mtDNA diseases. Several mtDNA diseases are due either to dysfunction of mitochondrial replication or translation resulting from mtDNA mutation(s), or to mutations in nuclear-encoded genes that regulate mitochondrial pathways involved in ATP generation, including subunits of OXPHOS and their associated assembly factors, enzymes involved in ubiquinone biosynthesis, and the pyruvate dehydrogenase complex. Despite the major advances highlighted in this review, currently available treatment options for mtDNA diseases are limited and focus on disease management. However, assisted reproductive techniques that allow almost complete replacement of the cytoplasm of the egg/embryo, eliminating the undesired mutated mitochondria, are offered to families with mtDNA mutations transmitted down the maternal lineage. The development of techniques to genetically manipulate detrimental mtDNA sequences, either in vitro or in vivo, may provide potential treatments for mtDNA diseases.
Quantum spin liquid in the semiclassical regime

Quantum spin liquids (QSLs) have been at the forefront of correlated electron research ever since their proposal in 1973, and the realization that they belong to the broader class of intrinsic topological orders. According to received wisdom, QSLs can arise in frustrated magnets with low spin S, where strong quantum fluctuations act to destabilize conventional, magnetically ordered states. Here, we present a Z2 QSL ground state that appears already in the semiclassical, large-S limit. This state has both topological and symmetry-related ground-state degeneracy, and two types of gaps: a "magnetic flux" gap that scales linearly with S and an "electric charge" gap that drops exponentially in S. The magnet is the spin-S version of the spin-1/2 Kitaev honeycomb model, which has been the subject of intense studies in correlated electron systems with strong spin-orbit coupling, and in optical-lattice realizations with ultracold atoms.

Supplementary Notes
Supplementary Note 1. Semiclassical expansion around the states of the star dimer pattern

Lattice superstructure & Hamiltonian
Here we provide the details of the semiclassical expansion around the states of the star dimer pattern of Fig. 2 of the main text. The analysis is based on the six-sublattice decomposition shown in Fig. 8 of the main text, but we repeat the details here for completeness. The six-sublattice decomposition is shown again in Supplementary Figure 1, with a superlattice defined by the primitive translation vectors $\mathbf{T}_1$ and $\mathbf{T}_2$. Any given site $i$ of the lattice can be labeled as $i=(\mathbf{R},\nu)$, where $\mathbf{R}$ is a primitive vector of the superlattice and $\nu=1\text{-}6$ is the sublattice index. In this parametrization, the positions of the empty hexagons $h_\alpha$ are labeled by $\mathbf{R}$. The classical state is parametrized in terms of the $\eta$-variables, as shown in Fig. 2 of the main text. We will also use the local coordinate frames given in Eq. (3) of the main text, and define for each empty hexagon $h_\mathbf{R}$
\[ \gamma_\mathbf{R} \equiv \kappa B_\mathbf{R}. \tag{1} \]
With these conventions and definitions, the Hamiltonian can be written out explicitly in terms of the sublattice-resolved spin operators.

Semiclassical expansion
In our semiclassical expansion we keep up to four-boson terms. It therefore suffices to keep the following terms of the standard [1] Holstein-Primakoff expansion for each site $i=(\mathbf{R},\nu)$, where $c_i$, $c_i^\dagger$ are bosonic operators and $n_i=c_i^\dagger c_i$:
\[ S_i^w = S - n_i,\qquad S_i^+ \equiv S_i^u + iS_i^v \simeq \sqrt{2S}\,\Big(1-\frac{n_i}{4S}\Big)\,c_i,\qquad S_i^- = \big(S_i^+\big)^\dagger. \]
We then collect: terms of the type $S_i^u S_j^u$, where $i$ and $j$ belong to the same empty hexagon; terms of the type $S_i^v S_j^v$, where $i$ and $j$ belong to the same empty hexagon; and terms of the type $S^w_{\mathbf{R},\nu} S^w_{\mathbf{R}+\mathbf{T}_{\nu\mu},\mu}$, which couple different empty hexagons. For simplicity, we label $(\mathbf{R},\nu)\to i$ and $(\mathbf{R}+\mathbf{T}_{\nu\mu},\mu)\to j$; the quartic term is decoupled in a mean-field fashion. We now repeat the main arguments described in the Methods section of the main paper that simplify this expression. The state around which we expand is invariant under the local BSS flux operators defined on the empty hexagons, which take the form
\[ (-1)^{\lambda_\mathbf{R} S}\,\exp\Big\{-i\pi\big[\kappa\,(\eta_1 n_{\mathbf{R},1} + \eta_3 n_{\mathbf{R},3} + \eta_5 n_{\mathbf{R},5}) + \eta_2 n_{\mathbf{R},2} + \eta_4 n_{\mathbf{R},4} + \eta_6 n_{\mathbf{R},6}\big]\Big\}, \]
where $n_i = c_i^\dagger c_i$ is the boson number operator and $\lambda_\mathbf{R} = \kappa(\eta_{\mathbf{R},1}+\eta_{\mathbf{R},3}+\eta_{\mathbf{R},5}) + (\eta_{\mathbf{R},2}+\eta_{\mathbf{R},4}+\eta_{\mathbf{R},6})$, see main text. The invariance of the Hamiltonian and of the state around which we expand under this operation translates into the invariance of the parity of the number $\kappa(\eta_1 n_{\mathbf{R},1}+\eta_3 n_{\mathbf{R},3}+\eta_5 n_{\mathbf{R},5}) + \eta_2 n_{\mathbf{R},2}+\eta_4 n_{\mathbf{R},4}+\eta_6 n_{\mathbf{R},6}$.
But since $\kappa$ and $\eta$ can only take the values $+1$ and $-1$, it follows that the parity of this number is the same as the parity of the total number $N_\mathbf{R}$ of bosons in any given empty hexagon. This means that terms that change the parity of $N_\mathbf{R}$ are not allowed in the expansion. This excludes terms of the type $c_i c_j$ or $c_i c_j^\dagger$, where $i$ and $j$ belong to different empty hexagons (see definition above). Equivalently, the mean-field parameters $m_{ij}$ and $\delta_{ij}$ vanish by symmetry, and this is true to all orders in the Holstein-Primakoff expansion. We therefore find that the empty hexagons decouple from each other, and the $S_i^w S_j^w$ terms give, for each empty hexagon $\mathbf{R}$ alone, a contribution in which the constant $p_j = p_{\mathbf{R}+\mathbf{T}_{\nu\mu},\mu}$ refers to a neighboring hexagon and has to be found self-consistently in the general case. It is useful to add here one more consequence of the BSS flux conservation. In the classical reference state, where all $n_i$ vanish, the BSS fluxes are equal to $(-1)^{\lambda_\mathbf{R} S}$ (see main text and [2]). Spin-wave fluctuations dress the reference state but cannot change the BSS fluxes, because these are integer numbers. Equation (13) then implies that the dressed ground state contains only terms with an even number of bosons $N_\mathbf{R}$.

Mean-field parameters: General relations
Let us denote the six eigenvectors of the matrix $\mathbf{g}\cdot\mathbf{M}$ that correspond to non-negative eigenvalues by $X_\nu$, $\nu=1\text{-}6$. From these eigenvectors we obtain explicit expressions for the mean-field parameters. Note that the resulting expressions do not depend on the arbitrary phases of the eigenvectors $X_\nu$, which come out arbitrary when we diagonalize the matrix $\mathbf{g}\cdot\mathbf{M}$ numerically.

Mean-field parameters: Symmetry constraints
We have already mentioned that all mean-field parameters defined above are real quantities. Here we list the symmetry operations (of the Hamiltonian and of the classical state around which we expand) which strongly reduce the number of independent mean-field parameters.
• Symmetry Σ1: a π-rotation in real space around the center of the hexagon, followed by π/2-rotations around the local w-axes in spin space.
• Symmetry Σ2: a reflection through the bonds (3,6) in real space, followed by π/2-rotations around the local w-axes in spin space.
• Symmetry Σ3: a reflection through the middle of the bonds (1,2) and (4,5) in real space, followed by zero or π-rotations around the local w-axes in spin space.
• Symmetry Σ4: a reflection through the bonds (1,4) in real space, followed by π/2-rotations around the local w-axes in spin space.
Combining Σ1-Σ5 gives the corresponding constraints for the mean-field parameters.

The mean-field parameter m
The numerical, self-consistent treatment of the decoupled spin-wave Hamiltonian gives a vanishing mean-field parameter $m$. This result does not arise from symmetry and is true only in the asymptotic large-$S$ limit. For general $S$, $m$ is a very small number. To see this, we consider the self-consistent mean-field Hamiltonian $H_{\rm MF}$ for a single hexagon, which corresponds to the decoupled semiclassical problem we are dealing with; here $h_{\rm loc}$ is the self-consistent field exerted by the neighboring hexagons, and we have set $K=1$ without loss of generality. In what follows we shall use the Néel operator $\mathbf{L}$ and the relations that follow from its definition.
• For $S=1/2$, the numerical, self-consistent solution gives $h_{\rm loc}=0.37888$ and $m=0$.
However, this relation is special to $S=1/2$, because the self-consistent ground state $|g\rangle$ of $H_{\rm MF}$ has the special property $\mathbf{L}|g\rangle=0$. According to the above relations, this implies that $\langle g|S_1^+ S_2^-|g\rangle = 0$, which is equivalent to $m=0$.
• For $S=1$ and higher, the ground state does not obey the property $\mathbf{L}|g\rangle=0$, and $m$ is therefore finite. The numerical solution for $S=1$ gives $h_{\rm loc}=0.83643$ and $m=0.0011412$, which is a very small number.
• In the large-$S$ limit, the parameter $m$ must eventually vanish (consistent with the numerical results from the decoupled, large-$S$ spin-wave Hamiltonian). The reason is that, as we increase $S$, the ground state $|g\rangle$ comes closer and closer to the classical vacuum $|0\rangle$ (with spins fully polarized along their local $w$-axes), which has the property $\mathbf{L}|0\rangle=0$ (because $|0\rangle$ is an eigenstate of each $S_\nu^w$ individually). In fact, this relation remains true when we include the leading effect of semiclassical corrections coming from $\mathcal{V}$. At this leading level, the ground-state wavefunction is given by [4]
\[ |g_1\rangle = |0\rangle + \mathcal{R}\,\mathcal{V}\,|0\rangle, \qquad \mathcal{R} = \frac{1-|0\rangle\langle 0|}{E_0 - H_0}, \]
where $\mathcal{R}$ is the usual resolvent operator. To show that $\mathbf{L}|g_1\rangle=0$ we use the fact that $\mathbf{L}$ commutes with $H_0$ (and therefore with $\mathcal{R}$ as well) and, furthermore, $\mathbf{L}|0\rangle=0$. These properties, together with the explicit form of the relevant part of $\mathcal{V}$,
\[ S_1^+ S_2^+ - S_2^+ S_3^+ + S_3^+ S_4^+ - S_4^+ S_5^+ + S_5^+ S_6^+ + \gamma\, S_6^+ S_1^+ + \mathrm{h.c.}, \]
give the result. At higher orders $n>1$, the ground state $|g_n\rangle$ does not satisfy this property (i.e., $\mathbf{L}|g_n\rangle \neq 0$), and a finite $m$ is therefore expected (as found explicitly for $S=1$ above, by the exact treatment of the equivalent spin Hamiltonian $H_{\rm MF}$). Nevertheless, the important point is that $m$ vanishes asymptotically at large $S$ and is otherwise a very small number ($m=0.0011412$ at $S=1$).

Two-fold degeneracy structure of the spin-wave spectrum
Fig. 5 of the main text shows that the six spin-wave energies organize into three degenerate pairs. The symmetry origin of this degeneracy can be seen by considering the effect of the operation Σ1 discussed above, whose action on the spin-wave Hamiltonian does not depend on the configuration of the $\eta$'s altogether. The Hamiltonian for the terms along the string becomes
\[ \mathcal{H} = K\big(S_1^u S_2^u + S_2^v S_3^v + S_3^u S_4^u + \cdots\big) - |K|\big(S_1^w S_1^w + S_2^w S_2^w + \cdots\big). \tag{51} \]
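As a worked check of the parity argument in Supplementary Note 1 above (a sketch assuming only that the boson numbers $n_{\mathbf{R},\nu}$ are integers and that $\kappa,\eta_\nu=\pm1$, as stated there):
\[ \exp\Big\{-i\pi\big[\kappa(\eta_1 n_{\mathbf{R},1}+\eta_3 n_{\mathbf{R},3}+\eta_5 n_{\mathbf{R},5})+\eta_2 n_{\mathbf{R},2}+\eta_4 n_{\mathbf{R},4}+\eta_6 n_{\mathbf{R},6}\big]\Big\} = (-1)^{\,n_{\mathbf{R},1}+\cdots+n_{\mathbf{R},6}} = (-1)^{N_\mathbf{R}}, \]
since $e^{-i\pi\eta n} = (-1)^{\eta n} = (-1)^n$ for any integer $n$ and $\eta=\pm1$. The boson-dependent factor of the flux operator therefore reduces to $(-1)^{N_\mathbf{R}}$, which is why only the parity of the total boson number per empty hexagon enters the argument.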
The microRNA-23b/27b/24-1 cluster is a disease progression marker and tumor suppressor in prostate cancer. Our recent study of microRNA (miRNA) expression signatures in prostate cancer (PCa) has revealed that all members of the miR-23b/27b/24-1 cluster are significantly downregulated in PCa tissues. The aim of this study was to investigate the effectiveness of these clustered miRNAs as a disease progression marker and to determine their functional significance in PCa. Expression of the miR-23b/27b/24-1 cluster was significantly reduced in PCa tissues. Kaplan-Meier survival curves showed that low expression of miR-27b predicted a short duration of progression to castration-resistant PCa. Gain-of-function studies using mature miR-23b, miR-27b, and miR-24-1 significantly inhibited cell proliferation, migration, and invasion in PCa cells (PC3 and DU145). To identify the molecular targets of these miRNAs, we carried out gene expression and in silico database analyses. GOLM1 was directly regulated by miR-27b in PCa cells. Elucidation of the molecular targets and pathways regulated by these tumor-suppressive microRNAs should shed light on the oncogenic and metastatic processes in PCa.

INTRODUCTION
Prostate cancer (PCa) is the most frequently diagnosed cancer and the second leading cause of cancer-related deaths among men in developed countries [1]. Androgen signaling through the androgen receptor (AR) is an important oncogenic pathway for PCa progression. Most patients are initially responsive to androgen deprivation therapy (ADT), but their cancers eventually become resistant to ADT and progress to castration-resistant prostate cancer (CRPC). Although prostate-specific antigen (PSA) has been used for monitoring CRPC, outcomes in PCa patients are diverse, even within the same risk group, because of the heterogeneity of PCa cells [2][3][4]. Thus, identification of effective biomarkers for detection of CRPC is needed. With currently available therapies, CRPC is difficult to treat, and most clinical trials for advanced PCa have shown limited benefits, with disease progression and metastasis to bone or other sites [5,6]. Therefore, understanding the molecular mechanisms of androgen-independent signaling and metastatic signaling pathways underlying PCa using current genomic approaches would help to improve therapies for and prevention of the disease. A growing body of evidence indicates that microRNAs (miRNAs) contribute to the initiation, development, and metastasis of various types of cancers [7]. Many human cancers show aberrant expression of miRNAs that can function as either tumor suppressors or oncogenes. Therefore, identification of aberrantly expressed miRNAs is the first step toward elucidating miRNA-mediated oncogenic pathways in human cancers. On this basis, our group previously established miRNA expression signatures and identified novel oncogenic pathways regulated by tumor-suppressive microRNAs in several types of cancers, including PCa [8][9][10][11]. In a recent study from our laboratory, we found that the miR-23b/27b/24-1 cluster was significantly downregulated in PCa [11]. Some miRNAs are located in close proximity in the human genome; these are termed clustered miRNAs. In the human genome, 247 human miRNAs have been found to be clustered at 64 sites at inter-miRNA distances of less than 5000 bp [12,13].
We previously reported that miR-1-1/133a-2 and miR-1-2/133a-1 form clusters at different chromosomal loci in the human genome (20q13.33 and 18q11.2, respectively) and that these clusters function as tumor suppressors, targeting several oncogenic genes in human cancers, including PCa [13][14][15]. More recently, we showed that the miR-143/145 cluster, located at the 5q32 locus, acts as a tumor-suppressive miRNA cluster in renal cell carcinoma and PCa [16,17]. In this study, we hypothesized that the miR-23b/27b/24-1 cluster functions as a tumor suppressor by targeting several oncogenic genes involved in specific cancer-related pathways in PCa. Elucidation of the molecular targets regulated by the tumor-suppressive miR-23b/27b/24-1 cluster will provide new insights into the potential molecular mechanisms of PCa oncogenesis and metastasis and will facilitate the development of novel diagnostic and therapeutic strategies for the treatment of PCa. We evaluated the expression levels of the clustered miRNAs (miR-23b, miR-27b, and miR-24-1) in noncancerous tissues (n = 41) and PCa tissues (n = 49). In patients from whom normal prostate tissues were collected, the median PSA level was 7.3 ng/mL (range: 4.3-35.5 ng/mL). In contrast, in patients from whom PCa tissues were collected, PSA levels were quite high, with a median of 244 ng/mL (range: 3.45-3750 ng/mL). Thirty-nine PCa patients had progressive disease classified as N1 or M1 according to TNM classification (Table 1).

Correlations between miR-23b/27b/24-1 expression and clinicopathological features in PCa specimens
Among the 49 PCa patients, 47 underwent ADT with luteinizing hormone-releasing hormone (LHRH) agonists and anti-androgens (Supplemental Table 1). A total of 16 ADT-treated patients progressed to CRPC over a median follow-up of 15.6 months. For patients with high versus low miR-23b/27b/24-1 expression, the risk of progression to CRPC was evaluated using the Kaplan-Meier method and the log-rank test for significant separation of survival curves. Low expression of miR-27b was found to be associated with a shorter progression-free interval (P = 0.0346; Figure 2B). However, neither miR-23b nor miR-24-1 predicted the time to CRPC in these PCa patients (P = 0.321 and P = 0.231, respectively; Figures 2A and 2C). The multivariate Cox proportional hazards model was used to assess independent predictors of time to CRPC.

Effects of restoring miR-23b/27b/24-1 expression on cell proliferation, migration, and invasion in PC3 and DU145 PCa cells
To investigate the functional effects of the miR-23b/27b/24-1 cluster, we performed gain-of-function studies using miRNA transfection in two PCa cell lines (PC3 and DU145). As observed using XTT assays, cell proliferation was significantly inhibited in miR-27b and miR-24-1 transfectants as compared with mock- or miR-control-transfected PC3 cells. However, the miR-23b transfectant did not exhibit reduced cell proliferation in PC3 cells. In contrast, inhibition of cell proliferation was only observed in the miR-24-1 transfectant in DU145 cells (Figure 3A). In cell migration assays, transfection with each of the 3 miRNAs significantly inhibited cancer cell migration in all cell lines (Figure 3B). In cell invasion assays, transfection with miR-27b and miR-24-1 inhibited cell invasion in PC3 and DU145 cells. However, transfection with miR-23b did not inhibit cell invasion in either PC3 or DU145 cells (Figure 3C). In this study, we also investigated the synergistic effects of the miR-23b/27b/24-1 cluster in PCa cells.
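For readers who wish to reproduce this kind of survival comparison, a minimal sketch using the open-source lifelines package is shown below; the patient table, the median split, and all numbers are hypothetical placeholders rather than the study's data.

```python
# Sketch of a Kaplan-Meier comparison with a log-rank test (lifelines).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: follow-up (months), CRPC event flag,
# and normalized miR-27b expression.
df = pd.DataFrame({
    "months": [5, 12, 15, 20, 30, 36, 40, 48],
    "crpc":   [1,  1,  1,  0,  1,  0,  0,  0],
    "mir27b": [0.2, 0.3, 0.4, 0.9, 0.5, 1.2, 1.5, 1.1],
})

# Dichotomize expression at the median, a common choice for such analyses.
high = df["mir27b"] >= df["mir27b"].median()

kmf = KaplanMeierFitter()
for label, grp in [("high miR-27b", df[high]), ("low miR-27b", df[~high])]:
    kmf.fit(grp["months"], grp["crpc"], label=label)
    print(label, "median CRPC-free time:", kmf.median_survival_time_)

# Log-rank test for significant separation of the two survival curves.
res = logrank_test(df[high]["months"], df[~high]["months"],
                   df[high]["crpc"], df[~high]["crpc"])
print("log-rank P =", res.p_value)
```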
We performed XTT and migration assays using transfection of all possible combinations of these clustered miRNAs. However, we did not find synergistic effects in these assays (Supplemental Figure 1). Next, putative target genes of miR-23b, miR-27b, and miR-24-1 were listed by TargetScan database analysis, and we investigated the expression statuses of these putative targets in PCa clinical specimens by examining gene expression profiles in the GEO database (accession number GSE29079) to identify genes upregulated in PCa specimens. Among the 4206 putative target genes of miR-23b, 147 genes were significantly upregulated in PCa specimens compared to noncancerous prostate tissues. In a similar analysis, among the 4075 and 4321 putative targets of miR-27b and miR-24-1, 157 and 139 genes were upregulated in PCa tissues, respectively. Furthermore, we performed genome-wide gene expression analysis using PC3 cells and selected genes that were downregulated following transfection with miR-23b/27b/24-1 as compared with the miR-control. The gene expression data were deposited in GEO under accession number GSE47657. When we integrated all of these analysis results, a total of 34, 52, and 56 genes were identified as putative candidate genes regulated by miR-23b, miR-27b, and miR-24-1, respectively (Supplemental Tables 2-4). Our strategy for selection of miR-23b/27b/24-1 cluster-targeted genes is shown in Figure 4.

GOLM1 was a direct target of miR-27b in PCa cells
We performed real-time RT-qPCR and western blotting in PC3 and DU145 cells to investigate whether restoration of miR-27b altered the expression of the GOLM1 gene and GOLM1 protein. The mRNA and protein expression levels of GOLM1/GOLM1 were significantly repressed in miR-27b transfectants as compared with mock- or miR-control-transfected cells (Figures 5A and 5B). Therefore, we next performed luciferase reporter assays in PC3 cells to determine whether GOLM1 mRNA has target sites for miR-27b. The TargetScan database predicted that 2 putative miR-27b binding sites exist in the 3'UTR of GOLM1 (positions 79-86 and 364-370). We used vectors encoding a partial wild-type sequence of the 3'UTR of GOLM1 mRNA, including the predicted miR-27b target site, or a vector lacking the miR-27b target site. We found that the luminescence intensity was significantly reduced by cotransfection with miR-27b and the vector carrying the wild-type 3'UTR of GOLM1. On the other hand, the luminescence intensity was not decreased when the seed sequence of the target site was deleted from the vectors (Figure 6).

GOLM1 expression in PCa specimens
Among 90 PCa and non-PCa samples, we selected RNA samples that could be used for reverse transcription analysis of GOLM1 mRNA expression. Finally, 11 PCa samples and 10 non-PCa samples were subjected to GOLM1 mRNA expression analysis in this study. RT-qPCR showed that miR-27b expression was significantly lower in PCa samples compared with non-PCa samples (P = 0.0012, Figure 7A).
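The three-way filtering strategy reduces to a set intersection; a minimal sketch, with hypothetical gene sets standing in for the TargetScan, GSE29079, and GSE47657 lists:

```python
# Sketch of the target-selection strategy (Figure 4) as a set intersection.
# Gene sets are hypothetical placeholders, not the study's actual lists.
targetscan_predicted = {"GOLM1", "SRC", "AKT1", "GENE_X"}   # in silico targets of miR-27b
upregulated_in_pca   = {"GOLM1", "SRC", "GENE_Y"}           # up in tumors vs. normal (GSE29079)
down_after_mir27b    = {"GOLM1", "AKT1"}                    # down in miR-27b transfectants (GSE47657)

putative_targets = targetscan_predicted & upregulated_in_pca & down_after_mir27b
print(sorted(putative_targets))  # e.g. ['GOLM1']
```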
Moreover, the expression of GOLM1 was significantly higher in PCa tissues compared with normal tissues (P = 0.0006, Figure 7A). Spearman's rank test showed that lower GOLM1 expression correlated with higher miR-27b expression (r = -0.695, P = 0.0019; Figure 7B).

DISCUSSION
Aberrantly expressed miRNAs disrupt the tightly regulated RNA networks in cancer cells, triggering cancer development and metastasis. Therefore, identification of the differentially expressed miRNAs in cancer cells is the first step to elucidating novel miRNA-mediated pathways in cancer. Expression signatures of various types of cancer tissues are important sources for the study of miRNAs in cancer. Based on the observed miRNA signatures, our group has identified tumor-suppressive miRNAs in PCa [11,15,17]. In this study, we focused on the miR-23b/27b/24-1 cluster because all of these miRNAs were reported as downregulated in our PCa signatures [11]. This is the first report aiming to investigate the functional significance of all members of the miR-23b/27b/24-1 cluster in PCa. We confirmed the downregulation of the miR-23b/27b/24-1 cluster in PCa tissues with sets of independent clinical specimens. Elucidation of several miRNA signatures in PCa has demonstrated that some members of this miRNA cluster are expressed at low levels in cancer tissues [18][19][20]. Our analysis suggested that the miRNAs within the miR-23b/27b/24-1 cluster are regulated by the same transcriptional control mechanism in the human genome. A recent report showed that the transcription factor AP-1 directly binds to the promoter region of miR-23b and reduces its expression in MDA-MB-231 cells [21]. Interestingly, the expression of AP-1 (c-Jun and c-Fos) is associated with aggressive and androgen-independent prostate cancer [22,23]. While the mechanism silencing the expression of the miR-23b/27b/24-1 cluster in PCa cells is still unknown, analysis of the detailed molecular mechanisms will be necessary in future studies. Current PCa screening methods are based on the measurement of serum PSA, and a definite diagnosis is established by ultrasound-guided prostate needle biopsies [24,25]. PSA is the most common marker for detection of PCa and for following the course of CRPC or metastatic PCa. However, the course of PCa progression and the clinical outcomes of PCa patients can differ even among patients with the same PSA value, Gleason score, and pathological stage. Thus, it is crucial to identify more sensitive biomarkers for improvement of PCa prognosis. Several groups have described independent predictive markers of biochemical recurrence based on the expression levels of miRNAs, such as miR-21, miR-145, miR-200a, and miR-30d [26][27][28][29], using radical prostatectomy specimens. If the expression status of miRNAs derived from needle biopsies at the initial visit can predict the possibility of progression to CRPC or metastasis, physicians can make more accurate treatment decisions for PCa patients. To investigate this, we analyzed the expression levels of miRNAs within the miR-23b/27b/24-1 cluster and their associations with the clinicopathological features of PCa patients. Surprisingly, the expression status of miR-27b indicated that this miRNA was a good prognostic marker for time to progression to CRPC.
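A minimal sketch of such a rank-correlation test with SciPy, using hypothetical expression values:

```python
# Sketch of a Spearman rank-correlation test between GOLM1 and miR-27b
# expression. Values are hypothetical stand-ins for per-sample measurements.
from scipy.stats import spearmanr

golm1  = [8.1, 7.4, 6.9, 5.2, 4.8, 3.9, 3.1, 2.5]   # relative GOLM1 mRNA expression
mir27b = [0.2, 0.3, 0.5, 0.6, 0.8, 1.0, 1.2, 1.4]   # relative miR-27b expression

rho, p = spearmanr(golm1, mir27b)
print(f"Spearman r = {rho:.3f}, P = {p:.4f}")  # a negative r reproduces the inverse relation
```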
A large-scale cohort study will be necessary to determine whether miR-27b is an effective marker for the CRPC-free interval. Our present data demonstrated that restoration of the miR-23b/27b/24-1 cluster significantly inhibited cancer cell proliferation, migration, and invasion in both androgen-dependent and -independent PCa cells, suggesting that the miR-23b/27b/24-1 cluster functions as a tumor suppressor in PCa. Several studies have reported that these miRNAs have tumor-suppressive roles in PCa cells, similar to the results of our present study. For example, miR-23b directly controls the proto-oncogenes Src kinase and Akt, and overexpression of miR-23b inhibits proliferation, migration, and invasion and induces cell cycle arrest and apoptosis [30]. Another report has shown that miR-23b and miR-27b are downregulated in metastatic and CRPC tumors and that ectopic expression of these miRNAs suppresses cell invasion and migration in CRPC cell lines [31]. In contrast, the expression of miR-23b and miR-27b is highly upregulated in human breast cancer, and knockdown of miR-23b and miR-27b substantially represses breast cancer growth [32]. Interestingly, the expression status of the miR-23b/27b/24-1 cluster is not consistent among different types of cancers. Elucidation of the mechanisms controlling the expression of clustered miRNAs in each cancer is an important theme in this developing field. To date, few reports have described the functional significance of miR-27b and miR-24-1 in PCa cells. Identification of the targets regulated by the tumor-suppressive miR-23b/27b/24-1 cluster is important for clarifying our understanding of PCa oncogenesis and metastasis. With this in mind, we identified target genes of the miR-23b/27b/24-1 cluster using a combination of in silico analysis and gene expression analysis with miR-23b, miR-27b, and miR-24-1 transfectants. Using this strategy, we previously succeeded in showing that the tumor-suppressive miR-1/133a and miR-143/145 clusters regulate oncogenic genes in various cancers [15][16][17]. In the present study, we identified putative target genes regulated by the miRNAs of the miR-23b/27b/24-1 cluster. Identification of these target genes will contribute to the elucidation of novel cancer networks in PCa. Finally, we focused on the GOLM1 gene because this gene was identified as a putative target of tumor-suppressive miR-27b and because miR-27b is a predictor of time to CRPC. Furthermore, our recent study demonstrated that the tumor-suppressive miR-143/145 cluster commonly targets GOLM1 and that silencing of GOLM1 significantly inhibits the migration and invasion of PCa cells [17]. Our present data clearly showed that GOLM1 is directly regulated by tumor-suppressive miR-27b in PCa cells. The GOLM1/GP73/GOLPH2 protein is encoded by the GOLM1 gene, located on human chromosome 9q21.33. GOLM1 has been shown to be overexpressed in human PCa tissue [33,34], lung adenocarcinoma [35], and hepatocellular carcinoma [36]. However, while a number of studies have demonstrated that GOLM1 is expressed in cancer cells, the exact molecular mechanisms mediating GOLM1 function remain unclear. Additionally, we investigated whether GOLM1 contributes to the oncogenesis and metastasis of PCa. Therefore, elucidation of the regulatory networks of GOLM1-mediated signaling pathways will provide important information for the development of new therapeutic strategies against cancer cell metastasis.
Further research is needed to reveal the oncogenic functions of GOLM1 that are regulated by the tumor-suppressive miR-143/145 cluster or by miR-27b in PCa. In conclusion, downregulation of the miR-23b/27b/24-1 cluster was a frequent event in PCa, and these clustered miRNAs functioned as tumor suppressors. Furthermore, miR-27b expression was a good disease progression marker in PCa. Elucidation of the molecular targets and pathways regulated by the tumor-suppressive miR-23b/27b/24-1 cluster should shed light on the oncogenic and metastatic processes in PCa and lead to the development of more effective strategies for future therapeutic interventions in patients with PCa.

Patients and clinical prostate specimens
Clinical prostate specimens were obtained from patients admitted to Teikyo University Chiba Medical Centre Hospital from 2008 to 2013. All patients had elevated PSA levels and had undergone transrectal prostate needle biopsy. Prostatic cancerous tissues (PCa, n = 49) and noncancerous tissues (non-PCa, n = 41) were used in this study. The patients' backgrounds and clinicopathological characteristics are summarized in Supplemental Table 1. Written consent for tissue donation for research purposes was obtained from each patient before tissue collection. The investigation was conducted in accordance with ethical standards, the Declaration of Helsinki, and national and international guidelines. The protocol was approved by the Institutional Review Board of Teikyo University. For pathological confirmation of tissue composition, a pair of needle biopsy specimens was collected from the same region in each patient, and one was subjected to pathological examination; no cancerous tissue was found in the non-PCa specimens. CRPC was defined according to European Association of Urology guidelines [37].

Cell culture
Human prostate cancer cells (PC3 and DU145 cells) were obtained from the American Type Culture Collection (Manassas, VA, USA) and maintained in RPMI-1640 medium supplemented with 10% fetal bovine serum in a humidified atmosphere of 5% CO2 and 95% air at 37°C.

RNA isolation
Total RNA was isolated using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. The quality of RNA was confirmed using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) as described previously [15,38,39].

Quantitative real-time reverse transcription polymerase chain reaction (RT-qPCR)
The procedure for PCR quantification was carried out as previously described [15,38,39]. The expression levels of miR-23b (Assay ID: 000400), miR-27b (Assay ID: 000409), and miR-24-1 (Assay ID: 000402) were analyzed by TaqMan quantitative real-time PCR (TaqMan MicroRNA Assay; Applied Biosystems) and normalized to RNU48 (Assay ID: 001006). TaqMan probes and primers for GOLM1 (P/N: Hs00213061_m1), GAPDH (P/N: Hs02758991_g1), and GUSB (P/N: Hs00939627_m1) as internal controls were obtained from Applied Biosystems (Assay-On-Demand Gene Expression Products). The ΔΔCt method was applied for calculation of the relative quantities of target genes. All reactions were performed in triplicate and included negative control reactions that lacked cDNA.

Cell proliferation, migration, and invasion assays
To investigate the functional significance of the miR-23b/27b/24-1 cluster, we performed cell proliferation, migration, and invasion assays using PC3 and DU145 cells.
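A worked example of the 2^(-ΔΔCt) calculation referred to above, with hypothetical Ct values (here GOLM1 normalized to GUSB, following the Methods; the numbers themselves are placeholders):

```python
# Minimal worked example of relative quantification by the ΔΔCt method.
ct_target_treated, ct_ref_treated = 24.1, 20.0   # e.g. GOLM1 / GUSB, miR-27b transfectant
ct_target_control, ct_ref_control = 22.3, 20.1   # mock transfectant

delta_ct_treated = ct_target_treated - ct_ref_treated    # normalize to reference gene
delta_ct_control = ct_target_control - ct_ref_control
delta_delta_ct   = delta_ct_treated - delta_ct_control   # compare treated vs. control

fold_change = 2 ** (-delta_delta_ct)
print(f"relative expression = {fold_change:.2f}")  # < 1 indicates repression
```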
The experimental procedures were performed as described in our previous studies [15,38,39].

Genome-wide gene expression and in silico analyses for the identification of genes regulated by the miR-23b/27b/24-1 cluster
To gain further insights into the specific genes affected by the miR-23b/27b/24-1 cluster, we performed a combination of in silico and genome-wide gene expression analyses. First, genes regulated by the miR-23b/27b/24-1 cluster were listed using the TargetScan database as described previously [15,38,39]. Next, to identify upregulated genes in PCa, we analyzed a publicly available gene expression data set in the Gene Expression Omnibus (GEO, accession number: GSE29079). Finally, we performed genome-wide gene expression analysis using PC3 cells transfected with miR-23b, miR-27b, or miR-24-1. A SurePrint G3 Human GE 60K Microarray (Agilent Technologies) was used for expression profiling of each miRNA transfectant in comparison with negative control miRNA transfectants. Finally, downregulated mRNAs that contained target sites for each miRNA (miR-23b/27b/24-1) were listed as putative target genes for these miRNAs.

Western blotting
Cells were harvested 72 h after transfection, and lysates were prepared. Twenty micrograms of protein lysate from each sample was separated on Mini-PROTEAN TGX gels (Bio-Rad, Hercules, CA, USA) and transferred to polyvinylidene difluoride membranes. Immunoblotting was performed with rabbit anti-GOLM1 antibodies (1:250, HPA010638, Atlas Antibodies, Stockholm, Sweden), and anti-GAPDH antibodies (1:1000, ab8245, Abcam) were used as an internal loading control. Membranes were washed and incubated with anti-rabbit IgG horseradish peroxidase (HRP)-linked antibodies (7074, Cell Signaling Technology, Danvers, MA, USA). Complexes were visualized with Clarity Western ECL Substrate (Bio-Rad, Hercules, CA, USA). The experimental procedures were performed as described in our previous studies [15,38,39].
Ascertaining injury risk issues through big data analysis: text-mining based analysis of national emergency response data

Objectives: Injury prevention can be achieved through various interventions, but it faces challenges due to its comprehensive nature and susceptibility to external environmental factors, which make it difficult to detect risk signals. Moreover, the reliance on standardized systems leads to the construction and statistical analysis of numerous injury surveillance datasets, resulting in significant temporal delays before the data are utilized in policy formulation. This study was conducted to quickly identify substantive injury risk problems by employing text mining analysis on national emergency response data, which have so far been underutilized.
Methods: With emerging issue and topic analyses, commonly used in science and technology, we detected problematic situations and signs by deriving injury keywords and analyzing time-series changes.
Results: In total, 65 injury keywords were identified, categorized into hazardous, noteworthy, and diffusion accidents. Semantic network analysis of the hazardous accident terms refined the injury risk issues.
Conclusion: An increased risk of winter epidemic fractures due to extreme weather, of self-harm due to depression (especially drug overdose and self-mutilation), and of falls in older adults was observed. Thus, establishing effective injury prevention strategies through inter-ministerial and inter-agency cooperation is necessary.

Introduction
Injury is defined as "a harmful outcome in terms of physical and mental health that occurs as a result of an intentional or unintentional accident" and constitutes a leading cause of disability and death worldwide (1). Injuries are preventable and hold significant importance for public health. In 2001, the World Health Organization recommended the establishment of health-centered national injury surveillance systems to enable a scientific approach to injury prevention. Different countries have established surveillance systems, which are categorized based on injury severity and on the use of passive or active surveillance. In the United States, the National Hospital Discharge Survey at the inpatient level, alongside the National Electronic Injury Surveillance System-All Injury Program and the National Hospital Ambulatory Medical Care Survey at the emergency department level, constitutes such a system. Australia uses the hospital-based National Hospital Morbidity Database as a data source for operating injury surveillance systems with separate categorizations for injury cases. Canada operates the National Trauma Registry at the inpatient level, in addition to the National Ambulatory Care Reporting System and the Canadian Hospitals Injury Reporting and Prevention Program (CHIRPP) at the emergency department level. Since 2005, South Korea has established a national integrated injury surveillance system and has gradually introduced a medical institution-based injury surveillance system. In 2006, using the emergency department medical system, emergency department-based in-depth injury surveillance was introduced to collect injury data. To overcome limitations arising from the production and management of injury surveillance data by various relevant ministries based on the place, object, and activity of the injury, comprehensive national injury statistics have been published since 2010. This effort aims to integrate and standardize injury-related data that are generated in various forms, to ensure comparability among
the data and to identify the scale and characteristics of injuries in South Korea (2). The national injury statistics, along with the detailed injury surveillance data that they comprise, are utilized to identify injury issues in each area and to establish and implement preventive policies. One of the most important elements of an injury prevention policy is the "mechanism," which refers to how the injury occurred and how the person was hurt; this can be seen as a crucial area within injury policy issues. In policy science, a policy problem is an unrealized value or opportunity for improvement, and information about the nature, scope, and severity of the problem is obtained by applying a problem-structuring process. Problem structuring comprises the steps of problem search, problem delineation, problem specification, and problem sensing, and is initiated by detecting early signs of widespread worry and stress (3). Furthermore, a public health approach to injury prevention starts with problem identification and surveillance procedures to collect and analyze data, thereby structuring the severity of the problem and its targets (4). Injury has a broader, more comprehensive scope than an abnormality or disease (5). Additionally, injury risk factors and vulnerable groups are constantly changing due to external environmental factors, such as climate change, demographic and social structural changes, and technological advancements. The components, causes, and consequences of injury problems are relatively broad and complex compared to other areas, which makes it challenging to identify actual injury problems and establish preventive policies. There are two main limitations in the extant injury surveillance and utilization systems with regard to the timely identification of real problems across a wide range of injury domains. First, there is a limitation in the data. As most injury surveillance data are entered according to a standardized registration system, in order to reduce errors in the input process and allow efficient quality control, detailed information on accident-site conditions and injury causes may be missing. Moreover, there is a significant time delay in utilizing the data due to the processing and management of very large amounts of data. The national emergency response data comprise recordings made by paramedics on the details of their activities in the emergency medical service (EMS) activity logbook, in accordance with Article 18 (Maintenance of Records of Emergency Medical Service Activities) of the Enforcement Rules of the Act on 119 Rescue and Emergency Medical Services. In particular, the EMS activity logbook describes the paramedic's assessment of a severe trauma patient according to the prescribed format and additionally records the associated circumstances and witness statements as necessary. As these data are collected in real time, the raw data can be checked quickly through the system and used as one of the sources of national injury statistics. Some of the data limitations can therefore be overcome by utilizing national first-responder data.
Second, research and policies often focus on microscopic injury issues in specific areas, which has led to a lack of research identifying macroscopic injury issues across all areas of injury. Existing studies related to injury prevention policies either identify influencing factors and risk factors through empirical studies using diverse injury surveillance data, or analyze the occurrence trends and characteristics of standardized injury accident types and risk factors. For instance, mortality data from the United States Centers for Disease Control and Prevention were used to calculate fall mortality rates by sex, age, race, ethnicity, and residency status (6), and mortality surveillance data from the Disease Surveillance Points (DSP) system in selected areas of Guangdong Province, People's Republic of China, were used to identify priorities for government intervention based on the cause-of-death code (7). In Canada, the migration of the emergency department-based injury and poisoning surveillance system from a centralized data entry process (CHIRPP) to an online distributed process (eCHIRPP) revealed that unintentional injuries were the leading cause of death of Canadians aged 1-44 years (8). Moreover, studies using hospital inpatient discharge record data from the Wisconsin Bureau of Health Information found that alcohol-related problems and mechanical and motor problems significantly increased the risk of a diagnosed fall among inpatients aged 65 years and older (9). In San Francisco, trauma registry data, medical records, and outpatient mental health care data from the Billing Information System of the Department of Public Health showed that 20% of patients who were hospitalized for unintentional injuries were diagnosed with a mental illness (10). Although some studies have explored effective linkages between ambulance records and hospital records (ER, discharge) (11-13), none have addressed macroscopic injury issues through an evaluation of ambulance records. In Australia, a mental health and self-harm module was developed using paramedic electronic patient care data derived from the National Ambulance Surveillance System (14).
With the rapid development of information and communication technology and digital services, data-driven decision-making has become a priority in the policy-making process. In recent years, the importance of text mining to support policy has been increasingly recognized by various organizations (15). In the field of science and technology, which has a relatively large amount of refined data, new objective studies are being promoted to identify emerging issues by utilizing text mining from a policy-making perspective. Topics from patents have been extracted to identify promising and unexploited technological areas for wireless power transmission through topic clustering, time-series analysis, and the application of technology innovation cycles (16). Models to detect emerging trends have also been proposed in this literature. We propose a method for discovering a large number of injury problem representations and deriving the substantive problems among them. This study was conducted with the aim of establishing a novel method to explore injury risk issues and identify injury problems by applying the concept and analysis methodology of emerging issues, which is actively used in the field of science and technology. We tested the proposed method using national emergency response data, in which the situation at the accident site and the injury background are recorded in detail. Among the four attributes of emerging issues identified in the literature, topic coherence and scientific impact are useful criteria for exploring topics of interest in science and technology; however, these criteria are ill-suited to the injury domain. Thus, in this study, injury risk keywords were selected based on radical newness (novelty) and rapid growth (scalability), and the various connections between keywords were identified through semantic network analysis. Depending on the method and purpose of the analysis, tokenization of the corpus can be based on whitespace, morphemes, or nouns; this study focused on morphemes. Although English sentences are typically straightforward arrays of words, Korean (Hangul) words often consist of more than one morpheme, and many words are used inertly and without meaning. For the sake of analysis efficiency, only nouns, which carry the most information in a sentence and can help identify the injury mechanism, were extracted from the nine Korean parts of speech (nouns, pronouns, numerals, particles, verbs, adjectives, adverbs, determiners, and interjections). However, during the translation of the analysis results into English, a term may sometimes be expressed as a phrase.
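A minimal sketch of the noun-extraction step; the paper does not name its morphological analyzer, so the open-source KoNLPy Okt tagger and the sample sentence below are assumptions for illustration:

```python
# Sketch of Korean noun extraction from a paramedic assessment note,
# using the KoNLPy toolkit (tagger choice is an assumption).
from konlpy.tag import Okt

okt = Okt()
assessment = "빙판길에서 넘어져 손목 골절이 의심됨"  # hypothetical paramedic note
nouns = okt.nouns(assessment)  # keep only nouns, the most informative morphemes
print(nouns)  # e.g. ['빙판길', '손목', '골절']
```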
Data downsizing
The data were transformed into a term-document matrix data frame format. Here, a document is the paramedic's assessment of a single injury accident. The average number of injury accidents is 25,073 per year, and the total number of noun terms extracted from the corpus averages 7,757 per year. To streamline the data and enhance analysis efficiency, only terms with a document frequency (DF) of 10 or higher, indicating the number of accidents in which a particular term appears, were selected. According to Bird and Loftus' safety management approach, for every major injury there are roughly 10 minor injuries and 630 non-injury incidents, each of which offers an opportunity for prevention (26). This implies that a term appearing in 10 or more injury accidents per year signals a recurring precursor to major injury accidents rather than an isolated event. Reducing the data to terms with a DF of 10 or higher results in an average of 1,862 terms per year; after eliminating duplicate terms, a total of 2,961 terms were included in the final term list.

Term importance analysis
Term frequency (TF) is the number of occurrences of a term (t) in an individual document (d) divided by the total number of terms in that document. It emphasizes terms that appear more frequently within a document as being more crucial for describing the document's content. However, TF alone may not always perform optimally, leading to the introduction of inverse document frequency (IDF) to address this limitation. IDF decreases as a term is mentioned more often across the entire corpus and increases as it appears less frequently. Essentially, IDF quantifies the specificity of a term as an inverse function of the number of documents in which it appears (27). TF-IDF is the product of TF and IDF (Equation 1), incorporating a weighting mechanism that is directly proportional to term frequency (Equation 2) and inversely proportional to document frequency (Equation 3). TF-IDF, a widely used heuristic in information retrieval, increases the weight of terms occurring frequently in a document and decreases the weight of terms occurring frequently across documents (16):
\[ \mathrm{TFIDF}(t,d) = \mathrm{TF}(t,d)\times \mathrm{IDF}(t) \tag{1} \]
\[ \mathrm{TF}(t,d) = \frac{\text{number of times term } t \text{ appears in document } d}{\text{total number of terms in document } d} \tag{2} \]
\[ \mathrm{IDF}(t) = \log\frac{\text{total number of documents}}{\text{number of documents containing term } t} \tag{3} \]
As different documents may have distinct TF values for the same term, the maximum TF value per term is used to calculate the TF-IDF weight. The top 300 terms are selected as candidate injury keywords based on their TF-IDF weights.
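A from-scratch sketch of Equations (1)-(3) on toy documents (in the study, documents are tokenized Korean nouns, as described above):

```python
# Sketch of the TF-IDF weighting in Equations (1)-(3), with the paper's
# convention of taking the maximum TF per term. Documents are toy examples.
import math
from collections import Counter

docs = [
    ["fall", "stairs", "fracture"],
    ["fall", "icy", "road"],
    ["self-harm", "depression", "medication"],
]

N = len(docs)
df = Counter(t for d in docs for t in set(d))  # document frequency per term

def tf(term, doc):
    return doc.count(term) / len(doc)          # Equation (2)

def idf(term):
    return math.log(N / df[term])              # Equation (3)

# Maximum TF of each term over all documents, then rank by TF-IDF (Equation 1).
weights = {t: max(tf(t, d) for d in docs) * idf(t) for t in df}
for term, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {w:.3f}")
```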
Categorization
In the stage of detecting problem situations and identifying signs of change, the methodology used to explore emerging technologies in the field of science and technology is applied. Existing research on emerging technologies has been criticized for not being suitable for identifying new topics; to address this, precise definitions and criterion indicators for emerging technologies have been proposed (23). Utilizing these concepts, novelty, fast growth, and cross-sector impact have been defined in Korea as criterion indicators for emerging issues, and empirical studies have been conducted to swiftly identify comprehensive emerging-issue candidates from literature databases (Web of Science) or online news sources (28, 29). In these studies, for keyword categorization, the average frequency, acceleration, and relative volatility of each technological keyword were measured and, based on the distribution of each indicator, keywords were classified into four groups: emerging technology, variable technology, diffusing technology, and undiffused technology. "Emerging technology" refers to rapidly diffusing technologies, "variable technology" to technologies that diffuse quickly but have high volatility, "diffusing technology" to technologies that have already diffused and matured, and "undiffused technology" to technologies that have not yet diffused. The analysis results are being utilized to understand the landscape of Korea's technological and industrial ecosystem (29).
Building on this framework, this research classifies terms based on statistical characteristics according to novelty and fast-growth criteria and defines their characteristics. Novelty is assessed using average frequency and relative volatility, considering both the occurrence frequency of a term and its temporal variability. Relative volatility represents the relative value of the standard deviation of the frequency of each term over the study period. Calculations are performed on the annual average frequency to gauge how widely the term's frequency of occurrence is dispersed from year to year, indicating rapid changes relative to the average. To account for the increase in the number of terms mentioned over time, the ratio of the standard deviation of a term to the average of the standard deviations of all terms is calculated, as depicted in Table 1. For scalability, the average acceleration is computed to assess the likelihood of continued growth in the future. The concept of acceleration assigns an incremental acceleration value to the frequency of occurrence of a term each year; this value is accumulated until the year of the last appearance to obtain the final acceleration value (30). For each of the 300 candidate injury keywords, the average frequency, relative volatility, and average acceleration are computed and ranked, assigning a score ranging from 1 to 5. Based on the score distribution of the three indicators, each term is classified as a "hazardous accident," "noteworthy accident," or "diffusion accident" to define its characteristics.
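A simplified sketch of the three indicators; the exact accumulation scheme for acceleration in (30) may differ from the plain second difference used here, and the yearly counts are hypothetical:

```python
# Sketch of the categorization indicators: average frequency, relative
# volatility, and average acceleration, computed per term from annual counts.
import numpy as np

yearly_counts = {
    "fall":       [80, 95, 110, 140, 170, 210, 260, 320, 390, 470],  # hypothetical
    "snowy road": [30,  5,  40,   8,  40,   6,  35,  10,  45,   9],
}

stds = {t: np.std(c) for t, c in yearly_counts.items()}
mean_std = np.mean(list(stds.values()))  # baseline for relative volatility

for term, counts in yearly_counts.items():
    avg_freq = np.mean(counts)                    # average annual frequency
    rel_volatility = stds[term] / mean_std        # volatility relative to all terms
    avg_accel = np.mean(np.diff(counts, n=2))     # mean change in year-over-year growth
    print(f"{term}: freq={avg_freq:.1f}, vol={rel_volatility:.2f}, accel={avg_accel:.2f}")
```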
Semantic analysis
This step involves observing the meaning and impact of the injury keywords. To delve into the issues around the "hazardous accident" terms, which have high policy importance, injury accident data containing "hazardous accident" terms were selected for another round of text mining. A semantic network map was generated by extracting terms related to the "hazardous accident" terms, and communities were explored to pinpoint actual injury risk issues based on the connections between terms and communities.

Semantic network analysis
A semantic network comprises nodes and links, where nodes represent terms with distinct properties and links denote connections between terms. To establish links between terms, an analysis is conducted to identify terms frequently used together, or appearing simultaneously, in the same document. The phi (φ) coefficient serves as a measure of the binary correlation between two terms across documents (31). The correlation coefficient is interpreted as follows: 0.05-0.10 indicates a weak correlation, 0.10-0.15 a moderate correlation, 0.15-0.25 a strong correlation, and 0.25 or higher a very strong correlation (32, 33). In this study, the correlation coefficient between terms was calculated from the 2x2 co-occurrence table of Table 2 using Equation (4) and was used as a network link to construct the semantic network map:
\[ \varphi = \frac{n_{11}\,n_{00} - n_{10}\,n_{01}}{\sqrt{n_{1\cdot}\,n_{0\cdot}\,n_{\cdot1}\,n_{\cdot0}}} \tag{4} \]
where n11 is the number of documents containing both terms, n10 and n01 the numbers containing only one of the two, n00 the number containing neither, and n1., n0., n.1, n.0 the marginal totals.

Community exploration
Community exploration is a process for identifying groups of nodes that interact within a network based on structural characteristics, and it represents a crucial step in understanding network structures across diverse fields (34). Within a community, numerous internal links foster cohesion, whereas fewer links between communities lead to separation. Modularity serves as a metric for evaluating community partitions, calculated relative to a random baseline rather than as an absolute value. This implies that the optimal community separation corresponds to the point at which modularity is maximized (35). Although various algorithms exist for community analysis, the Louvain algorithm (36), a modularity optimization method, is employed here. The Louvain algorithm starts with each node forming its own cluster and progressively merges pairs of clusters until modularity is maximal (35). Its advantage lies in not needing to recompute all nodes at each step, which reduces computation time through a simplified expression for the change in modularity.

Results of term importance analysis
TF-IDF weights were computed to assess the significance of terms in paramedic evaluations of injury accidents each year, and the top 300 terms were chosen as "candidate injury keywords," which were then sorted by average annual TF-IDF weight. Table 3 presents a summary of the results for 20 of the leading 300 TF-IDF terms on an annualized basis.
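A minimal sketch of the network construction and community step: phi coefficients (Equation 4) become weighted links, links below 0.05 are dropped, and Louvain modularity optimization finds communities; the term names and counts below are hypothetical:

```python
# Sketch of phi-weighted link construction and Louvain community detection.
import math
import networkx as nx
from networkx.algorithms.community import louvain_communities

def phi(n11, n10, n01, n00):
    """Phi coefficient from a 2x2 term co-occurrence table (Equation 4)."""
    n1_, n0_, n_1, n_0 = n11 + n10, n01 + n00, n11 + n01, n10 + n00
    return (n11 * n00 - n10 * n01) / math.sqrt(n1_ * n0_ * n_1 * n_0)

# (term_a, term_b, 2x2 counts over all documents) -- hypothetical values
pairs = [
    ("depression", "self-harm", (120, 300, 200, 30000)),
    ("wheelchair", "motorized", (60, 40, 50, 30000)),
    ("meal", "soccer", (2, 500, 700, 30000)),
]

G = nx.Graph()
for a, b, table in pairs:
    w = phi(*table)
    if w >= 0.05:  # keep only links above the weak-correlation threshold
        G.add_edge(a, b, weight=w)

communities = louvain_communities(G, weight="weight", seed=1)
print(communities)
```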
The terms "snowy road," "escalator," "rice paddy," "pratfall," "thorn," "bathhouse," "object," "ice road," "insect," and "centipede" were ranked as the most important. Examining the TF-IDF weighting results by year in Table 3, most terms, including those mentioned, have diminished in importance in recent years compared to their importance in the past. This trend is likely attributable to a decrease in the number of injury accidents in the study area over the past decade and to an increase in the volume of assessment reports and the terms used. As the total number of injury accidents enters the total number of documents in Equation (3), a reduction in injury accidents directly leads to a decrease in IDF. Moreover, the revision of the standardized EMS guidelines for 119 paramedics has provided a more specific method of writing evaluation opinions than in the past. The quality control of 119 emergency services, based on recorded information such as EMS activity logs, has increased the volume of paramedic evaluation opinions and enhanced their informativeness. As indicated in Table 1, the number of terms recorded in the assessment findings increased over time, which is estimated to have reduced the TF values, which are inversely proportional to the total number of words in the document according to Equation (2).

Categorization and keyword selection
The average frequency, relative volatility, and average acceleration were computed for the 300 terms chosen as candidate injury keywords and converted to an ordinal scale of 1-5. As displayed in Table 4, terms with a high average frequency score (4 or 5) are more widely dispersed than terms with a low average frequency score (1 or 2); in particular, terms within the top 20% of average frequency (score 5) have a standard deviation of 448.55. Conversely, for average acceleration, the standard deviations of the top 20% (score 5) and bottom 20% (score 1) of terms are relatively large, indicating that certain terms show very fast acceleration or deceleration trends. As terms with average acceleration scores in the bottom 40% (scores 1 and 2) have negative average acceleration, the rate of appearance of these terms is decreasing. Similar to the average frequency score, the relative volatility score shows that the top-ranked terms have a relatively large standard deviation. Based on the ranking scale scores for average frequency, relative volatility, and average acceleration, the injury keywords were categorized into three groups (hazardous, noteworthy, and diffusion accidents), as presented in Table 5. To enhance the efficiency of the analysis, terms related to the time of day (afternoon, morning, etc.), persons (daughter-in-law, acquaintance, etc.), and injury sites (wrist, legs, etc.) were excluded when selecting keywords. These terms are not useful for narrowing down the scope of a wide range of injury types. However, some terms, especially those related to the time of day, may be crucial factors in understanding injury mechanisms; in such cases, these terms will be captured as key terms with high connectivity in the subsequent semantic network analyses. This led to the classification of 25 "hazardous accidents," 12 "noteworthy accidents," and 28 "diffusion accidents," as demonstrated in Table 6. The characteristics of each category of keywords can be summarized as follows.
The "hazardous accident" group is defined as terms with an average frequency, average acceleration, and relative volatility score of 4 or higher, indicating very high average frequency, acceleration, and relative volatility (Figure 2).This implies that there have been numerous accidents involving the term in the last 10 years, the rate of increase is high, the volatility is high, and it is likely to increase sharply at any moment.Therefore, it can be interpreted as an accident term with the highest risk in the future.Examples include "fall, " "toilet, " "Soju, " and "athletic." "Noteworthy accident" terms are those with an average frequency of 2 or less and an average acceleration and relative volatility of 3 or more.Accidents that are relatively infrequent but have higher than moderate acceleration and relative variability (3 or more points) include "dizziness, " "icy road, " "snowy road, " and "intravenous Semantic network analysis related to hazardous accident terms.fluids." These terms are characterized by an uncertain future trend of increasing or decreasing accidents, which may increase or decrease rapidly as the social environment changes."Diffusion accident" terms are terms with an average frequency of 4 or more but an average acceleration of 2 or less.As these are terms that have a high frequency of accidents and reduced acceleration, it is likely that the risk is already recognized, and various preventive measures are in place.Examples include "task, " "stairs, " "bicycle, " "burn/scald, " and "assault." Semantic analysis of hazardous accident keywords By selecting 32,918 injury accidents over the past 10 years that were related to 25 "hazardous accident" terms with high urgency for policy introduction, data pre-processing and downsizing processes were performed (more than 10 injury accidents).The phi coefficient between the terms was calculated and applied to the links in the network.Based on this, the ego network centered on the keywords of hazardous accidents was extracted.An ego network is a subnetwork composed of selected nodes and their neighbors, called egos (35), and is often used when the number of nodes makes it difficult for researchers to capture meaningful information.In this study, 25 hazardous accident keywords were applied as egos to extract a network consisting of nodes that form direct links with the keywords and links between them.Furthermore, a community analysis was conducted to examine how the network was organized and how it functioned. The total number of nodes in the network was 3,892, and the number of links was 97,275: thus, there were 3,867 terms with at least 10 co-occurrences with the 25 hazardous accident terms in the last 10 years.Although the scope of the network was reduced by extracting the ego network, this was still rather large (in terms of the number of nodes and links).Therefore, the correlation matrix had to be resized to reduce the number of links to make the network more readable.In this study, the threshold of the correlation coefficient, which was the basis of the correlation matrix, was set to 0.05 (32)-a threshold value that indicated weak correlation, to filter out links.The adjusted network consisted of 320 nodes and 373 links. 
There were a total of 39 pairs of terms with a phi coefficient of 0.15 or higher, as shown in Table 7. In particular, the correlation coefficients of "exercise" and "sensation", and of "wheelchair" and "motorized", were over 0.5, indicating very close correlations. "Cold", "athletic", and "depression" had the highest numbers of closely related terms (8, 6, and 5, respectively), whereas "ladder", "self-harm", "soccer", and "dislocation" were each closely related to 3 terms.

According to the community analysis, there were 13 communities, as shown in Table 8 and Figure 3. The hazardous accident keywords "depression" and "self-harm" have degrees of 52 and 38, respectively, making them the most connected terms in the entire network. The subnetworks of "depression" and "self-harm" formed one community (C2) with "department of psychiatry", "panic", "impulse", "attempt", "counseling", etc., and showed a highly dense network based on the high number of connections. For "self-harm", both self-harm means (fruit knife, kitchen knife, razor, scissors) and self-harm sites (abdomen, wrist, left side) were mainly connected, whereas for "depression", drug-related terms such as "medicine", "insomnia", "medication", and "stabilizer" were mainly connected.

(Figure caption: Scatterplot of statistical characteristics and accident keywords.)

Based on the types of terms and connections that make up the community, it could be inferred that depression was causing impulsive self-harm attempts in the form of drug overdoses or cuts with sharp objects. The subnetworks organized around the keywords "wheelchair" (degree 16), "bed" (degree 16), "toilet" (degree 11), "dementia" (degree 25), "fall" (degree 5), and "pratfall" (degree 7) were organized into one community (C1) through the terms "nursing home", "recuperation", "protective agent", "bedridden", "hip joint", and "behavior". "Wheelchair" was connected to terms related to traffic accidents (car, passenger, driver) and places of accidents (rice paddy, ditch, nursing home), and "dementia" formed linkages with terms related to surrounding people (son, daughter-in-law, grandmother, old man) and underlying diseases (high blood pressure, Parkinson's disease, diabetes). C1 was a community dedicated to injury accidents among older adults with low physical and mental health levels. As for "fall", the average frequency of occurrence over 10 years was extremely high (Figure 2), while there were relatively few terms forming direct links. This means that while falls occur frequently, they occur in a variety of situations with no specific hazardous locations or factors. This community (C1) was connected to a smaller community (C11), centered around "ladder" (degree 10), through "waist". C11 comprised terms related to the location of the fall (orchard, trees, construction site, roof).

The subnetwork centered on the keyword "meal" (degree 19) was organized into one community (C6) with a subnetwork centered on the keyword "chair" (degree 7) and the node "dining room". "Meal" is connected with food terms (grain of rice, fish, foreign object, thorns) and patient condition terms (airway, obstruction, cyanosis), and the community can be defined as accidents that occur when food becomes stuck in the airway during a meal.
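The communities discussed here were obtained with the Louvain algorithm introduced earlier. A minimal sketch with networkx (version 2.8 or later assumed for louvain_communities), using a few hypothetical phi-weighted links in place of the real 320-node network:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Hypothetical (term, term, phi) links, e.g. output of build_links().
links = [
    ("depression", "self-harm", 0.21), ("depression", "medication", 0.18),
    ("self-harm", "razor", 0.17), ("self-harm", "wrist", 0.16),
    ("wheelchair", "motorized", 0.52), ("wheelchair", "nursing home", 0.12),
    ("fall", "nursing home", 0.09), ("fall", "hip joint", 0.11),
]

G = nx.Graph()
G.add_weighted_edges_from(links)

# Louvain modularity optimization: each node starts in its own community
# and communities are merged while modularity keeps increasing.
communities = louvain_communities(G, weight="weight", seed=42)
for i, community in enumerate(communities, 1):
    print(f"C{i}: {sorted(community)}")

# Degree (number of direct links) identifies the most connected keywords,
# as with "depression" (degree 52) in the full network.
print(max(G.degree, key=lambda nd: nd[1]))
```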
The community (C5) organized around the keyword "cold" (degree 28) was affected by COVID-19 and consisted of terms related to prevention (glove, vaccine, goggles, COVID-19, wearing, inoculation), terms identifying the route of infection (overseas trip, visiting, overseas, contact, travel, domestic, path), and terms for cold symptoms (body aches, phlegm, runny nose, fever). Through the mediation of "symptoms", it was connected to a community (C9) centered on "dizziness" (degree 15), but the connectivity between the communities for injury causes and conditions was relatively weak. Considering the terms "mowing", "hornet", "vomit", and "rash" linked to "dizziness" in C9, this community can be defined as various symptoms caused by wasp stings during outdoor activities such as mowing.

The subnetwork centered on the keyword "athletic" (degree 20) was connected to the subnetwork centered on "dislocation" through "fixed" and "deformation" to form one community (C4). "Athletic" was especially highly correlated with "sense" and "function" because paramedics routinely check motor function and sensation in the event of an injury accident and include them in their assessment. "Athletic" carries the meaning of physical movement, but it can also refer to activities to improve health, so careful interpretation is required; in the latter case, it forms links with types of sport (fitness, swimming, stretching).

The community (C8) centered on the keyword "school" (degree 18) and the community (C7) centered on the keyword "soccer" (degree 22) were connected through "playground". "School" is connected mainly with injury-location terms (dormitory, classroom, gym, auditorium, infirmary, main gate), and "soccer" is connected mainly with injury-inducing action terms (air, jump, landing, competition, tackle, heading).

C12 and C13 are communities consisting of only two nodes and one link, where "home" is connected to "smoke" and "cement" is connected to "yard". For both communities, the lack of components made it difficult to identify a clear meaning.

Discussion

In this study, injury keywords were selected, classified, and defined according to categories, and the problem of injury risk was specified through semantic network analysis. Among the key findings, three injury issues likely to lead to increased risk in the future are summarized below, along with their implications.
First, when analyzed based on annualized term importance (TF-IDF) (Table 3), "snowy road" and "icy road" were found to be very important. In particular, "icy road" showed a sharp increase in the number of related injury accidents in recent years, from 32 in 2020 to 120 in 2021 and 300 in 2022. It was categorized as a "hazardous accident" term with high average frequency, relative volatility, and average acceleration based on this time-series trend (Figure 2). Through the linkage of "icy road" with "ankle" and "heavy snowfall" within the seventh community (C7), it can be inferred that there is a significant increase in ankle injuries due to slip-and-fall accidents on icy roads as a result of abnormal weather conditions such as heavy snowfall and cold snaps. It is predicted that extreme anomalies in average surface temperatures will increase by 32% across East Asia due to global warming, with an increase in cold snaps similar to the one experienced in East Asia in January 2016 (37). Severe cold snaps are known to cause not only direct health effects such as hypothermia and frostbite but also indirect health effects such as limb fractures due to slip-and-fall accidents on icy roads and traumatic brain injuries. According to previous studies, winter seasonal fractures, significantly exceeding ordinary seasonal fluctuations, occur on days with low temperatures and precipitation such as rain or snow (38-41). Hence, it is important to emphasize the potential increase in the risk of slip-and-fall accidents on icy roads due to future extreme weather events. However, as media attention emphasizes traffic accidents, the risk of falls is often overlooked. There is a need to demonstrate the relationship between weather variables, severe weather warnings, and fracture prevalence for healthcare planning and to manage fluctuations in healthcare demand (39). Winter fractures have the characteristics of a "major accident"; however, they differ from injuries caused by major accidents in that they can be predicted and prevented. Immediate cleaning of pavements in city centers and other areas with heavy pedestrian traffic, and providing the public with practical advice on how to walk more safely on slippery surfaces, are essential (38).

Second, "depression" and "self-harm" formed a highly dense network based on a large number of related terms (Figure 3). Associated terms were primarily related to drugs, sharp self-injury tools, and self-injury sites, and were also highly associated with the community centered around "Soju", a popular type of alcohol in South Korea. The results of multiple prior studies support this analysis of the injury risk issue, adding to its credibility. Depression and suicide are closely related, as depression is the psychiatric diagnosis most commonly associated with suicide (42, 43). Any situation that negatively affects an individual is known to have the potential to trigger depressive symptoms and eventually lead to suicidal behavior (44). As with completed suicide, people who self-harm are likely to suffer from depression, and subsequent suicide rates are high, especially for those with persistent depression (45, 46). One study found a tendency for suicide and self-harm rates to increase when starting or discontinuing antidepressants; therefore, it is important to exercise caution not only when initiating treatment for depression but also when discontinuing it (47). It is also known that alcohol
can acutely increase the risk of self-harm through several mechanisms (48). In this study, the frequency of "depression" increased sharply, from 96 in 2014 to 203 in 2018 and 369 in 2022, and the frequency of "self-harm" increased proportionally, from 136 in 2014 to 198 in 2018 and 265 in 2022. Depression and self-harm not only form a large semantic network through multiple terms but also show a linear relationship in quantitative terms. The change in the time-series trend confirms that the problem may have a more negative social impact in the future. Based on the results of previous studies analyzing the relationship between depression and variables such as stress, self-harm, suicide, and alcohol consumption, early management and psychological and social treatment for self-harm and suicide due to depression appear to require further reinforcement. In particular, measures will be needed to restrict access to the methods of self-harm identified in this study: drug overdose and self-cutting. Cutting the skin with a sharp object such as a razor, glass, or knife is the most common form of self-harm (48). Although cutting-based forms of self-harm have been described since ancient times, drug overdoses have emerged following the significant growth of relatively safe pharmaceutical products (49). In the United Kingdom, the number of paracetamol overdoses has decreased significantly since legislation was amended in 1998 to require packaging units of painkillers to be below the lethal dose (50). Therefore, effective measures are needed to prevent physical access to suicidal means in South Korea. According to South Korea's Fifth National Suicide Prevention Master Plan, released in February 2023, to reduce suicide risk factors, antiepileptic drugs, sedatives, sleeping pills, antiparkinsonian drugs, and sodium nitrite, known as suicide drugs, will be included in online suicide risk notices over the next 5 years, and monitoring will be strengthened. In light of these findings, effectiveness is expected to increase if a specific implementation plan is prepared that combines limits on physical accessibility with cognitive access prevention, such as media measures, SNS monitoring, and blocking of harmful sites.
Finally, the community organization of keywords such as "wheelchair", "bed", "toilet", "dementia", and "fall" confirms that older adults with low levels of physical and mental health are at increased risk of injury accidents. The keyword describing the injury activity in this community is "fall", with other terms structured around environmental and biological risk factors, which are among the main defined risk factors for falls (biological/behavioral/environmental/socioeconomic) (51). As mentioned above, "fall" has been categorized as a high-risk term due to its high frequency of occurrence, rapid rate of increase, and high volatility over the last decade (Figure 2). A fall is a prominent external cause of unintentional injury and is usually defined as an inadvertent movement to the ground, floor, or lower level (51). Globally, adults aged 65 years and older experience falls more often than younger individuals, often resulting in serious injuries and increased healthcare costs. Gait and balance disorders in older adults are among the most common causes of falls, with a negative impact on quality of life and survival (52). In this regard, falls are considered a major public health issue, and more people will be at risk of falls with the growth of the aging population. An analysis of large cohort data found that higher age, polypharmacy, malnutrition, smoking, and alcohol use significantly increased the risk of falls, and that individuals with heart disease, hypertension, a history of falls, depression, and pain were at higher risk of falls than those without these comorbidities (53). The semantic network of this study also confirmed this trend, as terms that can indicate comorbidities, such as "dementia", "Parkinson's disease", "medical history", "diabetes", and "surgery", were grouped into the same community as "fall". Using two or more medications increases the risk of falls and injuries among older adults (54); in particular, antihypertensive medications, one of the factors in the Downton Fall Risk Index (DFRI), have been shown to increase the risk of serious fall-related injuries (55, 56). The issue of falls among older adults has been recognized as an important topic from a public health perspective for the past several decades, and this study also confirmed comprehensive risks related to falls in older adults through text mining. With the rapid aging of Korean society, unintentional falls among older adults are predicted to have significant medical and economic consequences in the future. Given the geographic and ethnic limitations of prior work, studies focusing on older adults in South Korea are needed to identify the demographic characteristics, comorbidities, and lifestyle factors that influence the risk of falls, which should facilitate the development of effective fall-prevention strategies.
The ultimate goal of an injury prevention policy is to identify and reduce potential injury risks or issues through the analysis of current and historical conditions. These potential injury risk issues can significantly impact the magnitude and social consequences of future injuries. In this study, TF-IDF weights, commonly used in text mining research, were employed to select candidate injury keywords. Statistical indicators capturing "novelty" and "scalability", among the various features defining emerging issues, were then utilized to derive the final keywords, which were subsequently categorized based on time-series features. Additionally, semantic network analysis was conducted on keywords with high policy importance to explore injury risk issues. The significance of this study lies in proposing a method to promptly identify injury risk issues expected to escalate in the future by leveraging information from national EMS activities, which until now has been underutilized as injury surveillance data.

Nonetheless, this study has certain limitations. When analyzing the semantic networks, a threshold for the correlation coefficient was applied to reduce the matrix for efficient analysis. However, this method could not guarantee the importance of the retained links. More principled approaches exist for extracting the network backbone, preserving the essential structure and overall properties of the network by identifying links responsible for a disproportionately large percentage of the connection strength of each node (35). Future work should apply these principled link-filtering approaches and compare them with the results of this study.

Furthermore, as the analysis was confined to injury surveillance data from a specific region in South Korea, these data may not be representative of universal injury risk issues. Different cultural values, health and social services, and geographic conditions in various countries or regions are expected to result in diverse forms and causes of injury. Therefore, further research should be conducted in regions and countries with diverse injury environments to design injury-prevention policies that reflect both universal and localized characteristics.

TABLE 1 Results of the number of EMS activities and terms extracted by year.
TABLE 2 Frequencies of the two words within each line (29).
TABLE 3 List of top 20 candidate injury keywords based on annualized TF-IDF.
TABLE 4 Summary statistics by ranking score band.
TABLE 5 Score table of average frequency, average acceleration, and relative volatility by term category.
TABLE 6 Statistical characteristics and representative keywords by term category.
TABLE 7 Correlation coefficients and co-occurrence frequencies for pairs of highly correlated terms.
TABLE 8 Community analysis results from the hazardous accident keyword network.
Multi-Focus Image Fusion Based on Decision Map and Sparse Representation

As the focal length of an optical lens in a conventional camera is limited, it is usually arduous to obtain an image in which every object is in focus. This problem can be solved by multi-focus image fusion. In this paper, we propose an entirely new multi-focus image fusion method based on decision map and sparse representation (DMSR). First, we obtained a decision map by analyzing low-scale images with sparse representation, measuring the effective clarity level, and using spatial frequency methods to process uncertain areas. Subsequently, the transitional area around the focus boundary was determined by the decision map, and we implemented the transitional area fusion based on sparse representation. The experimental results show that the proposed method is superior to the other five fusion methods, both in terms of visual effect and quantitative evaluation.

Introduction

Multi-focus image fusion is a method of combining multiple images with different focal points into a composite image in which all objects are completely focused. The composite image is more suitable for visual perception, making it easier for humans to further complete image processing tasks. Multi-focus image fusion technology has been widely used in digital photography, computer vision, military reconnaissance, and other fields [1]. With the maturity and improvement of image fusion technology, miscellaneous image fusion methods have emerged in the past few years. As many new fusion algorithms have been proposed recently, we divide the current fusion methods into four categories: multiscale transform (MST) methods, spatial domain methods, sparse representation (SR) methods, and neural network methods. Among the existing transform domain image fusion methods, MST is widely used [2]. A variety of multiscale transforms have been proposed and applied to image fusion. These include the Laplacian pyramid (LP), discrete wavelet transform (DWT) [3,4], dual-tree complex wavelet transform (DTCWT) [5], and discrete cosine harmonic wavelet transform (DCHWT) [6]. The multiscale geometric analysis tools developed in recent years have higher directional sensitivity than wavelets, such as the shearlet transform [7], curvelet transform (CVT) [8], and nonsubsampled contourlet transform (NSCT) [9]. All of these transform domain fusion methods follow a similar "decomposition-fusion-reconstruction" framework. First, the source images are decomposed into a multiscale transform domain to obtain transform coefficients, and the transform coefficients are then fused based on a certain fusion rule. Finally, the fusion coefficients are inversely transformed to reconstruct the fused image. Neural network methods jointly generate activity level measurements and fusion rules and overcome some difficulties faced by certain existing fusion methods. Based on the analysis and research of existing multi-focus image fusion methods, we propose a new multi-focus image fusion method based on decision map and sparse representation (DMSR), which can not only satisfy the requirements of visual effect and fusion performance but also make the algorithm robust and adaptive. In our framework, the advantages of fusion methods based on the decision map and on sparse representation are combined.
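As an illustration of the "decomposition-fusion-reconstruction" framework described above, a minimal DWT-based fusion sketch using PyWavelets with a simple choose-max-absolute-coefficient rule (a generic MST baseline, not the paper's DMSR method):

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two grayscale images in the wavelet domain.

    Rule: average the approximation coefficients; for each detail
    coefficient keep the one with the larger absolute value.
    """
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]  # approximation band
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(da, db)
        ))
    return pywt.waverec2(fused, wavelet)  # inverse transform reconstructs

# Toy example: two random "source images" of the same size.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))
print(dwt_fuse(a, b).shape)  # (64, 64)
```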
Considering that the human visual system does not require much detail when identifying the focused and defocused areas of the source images, we generated a sparsity graph using low-scale versions of the source images. In existing multi-focus image fusion methods based on the decision map, each pixel is strictly defined as focused or defocused, which inevitably leads to erroneous judgments in the decision map. In particular, the pixels of the uncertain region are difficult to determine simply as focused or defocused. In order to avoid this defect, we analyzed the sparseness of the corresponding points in the sparsity graph and divided each pixel into three categories (focused, defocused, and uncertain) to generate the initial decision map. Then, the spatial frequency method was used to further divide each point in the uncertain region of the initial decision map into focused or defocused points, determining the final decision map. After obtaining the fused image based on the final decision map, the transitional area of the source images was detected according to the final decision map, and this area was processed by the multi-focus image fusion algorithm based on sparse representation to obtain the transitional area fusion result. Finally, the fused image based on the final decision map and the transitional area fused image were averaged to obtain the final fused image. In order to verify the effectiveness of the proposed method, we performed a large number of experiments using two datasets and three objective quality metrics. The experimental results show that our method is superior to the other five methods, both in terms of visual effect and quantitative evaluation.

The remainder of this paper is organized as follows. Section 2 describes the specifics of our proposed method. Section 3 demonstrates the experimental results, a comparison with state-of-the-art methods, and objective evaluations. Finally, Section 4 concludes the paper.

Proposed Fusion Scheme

The newly proposed multi-focus image fusion framework is shown in Figure 1. The fusion method consists of two main steps: generating a decision map and performing fusion. In the first step, multi-focus feature analysis of the low-scale images of the two source images is performed to obtain the corresponding clarity score maps. Then, they are normalized to get the initial decision map, and the spatial frequency method is used to obtain the final decision map. Section 2.1 details the creation of the score maps, and the specific processes for obtaining the initial and final decision maps are described in Section 2.2. In the second step, the fused image based on the final decision map and the transitional area fused image are obtained, respectively, and the two images are averaged to obtain the final fused image. The fusion process of the transitional area is based on sparse representation, which is elaborated in Section 2.3.
Clarity Score Map

Firstly, wavelet decomposition is performed on the two multi-focus source images with a wavelet basis, yielding four sub-band images: horizontal low-frequency and vertical low-frequency (LL), horizontal low-frequency and vertical high-frequency (LH), horizontal high-frequency and vertical low-frequency (HL), and horizontal high-frequency and vertical high-frequency (HH). Among them, the LL sub-band images still maintain the overview and spatial characteristics of the source images and are suitable for the analysis and extraction of the source image focusing features, so they are selected as the low-scale images of the algorithm, as shown in Figure 2c,d. Next, sparse representation of the low-scale images is carried out, and the corresponding sparsity graphs are generated. Finally, the two corresponding clarity score maps are obtained by the image block-based clarity measurement method. The main steps of creating the clarity score maps are as follows:

• The low-scale versions of the source images, I_A^LL, I_B^LL ∈ R^(H×W), are divided into √n × √n image patches using a sliding-window technique from top left to bottom right, with a sliding step of one. All patches are reshaped into n-dimensional column vectors v_i.
• Given the global dictionary Φ ∈ R^(n×K) (n << K), each column vector can be sparsely represented as v_i ≈ Φ α_i, where α_i is the sparse coefficient vector. The activity level of each patch is measured by the sum of the absolute values of its sparse coefficients, denoted M_i^A and M_i^B for the two sources.
• If M_i^A ≥ M_i^B, each score value within the corresponding √n × √n patch centered at (x_i + √n, y_i + √n) in the clarity score map S_A is incremented by one, and vice versa, as shown in Figure 3. In addition, the total number of comparisons between each corresponding pair of patches is recorded in a weight map W.
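A minimal sketch of the patch-wise sparse-coding activity comparison, using scikit-learn's SparseCoder with OMP. The dictionary here is random for illustration (the paper trains Φ globally on natural images), and the "sum value" is taken as the l1-norm of the sparse coefficients, an assumption consistent with the step description above:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.decomposition import SparseCoder

def clarity_score_maps(img_a, img_b, dictionary, patch=8):
    """Vote-based clarity score maps from sparse-coefficient activity."""
    score_a = np.zeros(img_a.shape)
    score_b = np.zeros(img_b.shape)
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=5)
    pa = sliding_window_view(img_a, (patch, patch)).reshape(-1, patch * patch)
    pb = sliding_window_view(img_b, (patch, patch)).reshape(-1, patch * patch)
    act_a = np.abs(coder.transform(pa)).sum(axis=1)  # l1 activity per patch
    act_b = np.abs(coder.transform(pb)).sum(axis=1)
    n_cols = img_a.shape[1] - patch + 1  # patches per row (step one)
    for idx, (ma, mb) in enumerate(zip(act_a, act_b)):
        x, y = divmod(idx, n_cols)
        if ma >= mb:
            score_a[x:x + patch, y:y + patch] += 1  # vote for source A
        else:
            score_b[x:x + patch, y:y + patch] += 1  # vote for source B
    return score_a, score_b

rng = np.random.default_rng(1)
D = rng.standard_normal((256, 64))            # K = 256 atoms, n = 64 (8x8)
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms
a, b = rng.random((32, 32)), rng.random((32, 32))
sa, sb = clarity_score_maps(a, b, D)
print(sa.max(), sb.max())
```

The per-pixel vote totals (score maps) can then be normalized by the weight map W before thresholding.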
Decision Map

The above clarity score maps are binarized by a given threshold K_1 and denoted S′_A and S′_B, as shown in Figure 4a,b (focused pixels are marked yellow, defocused pixels blue). It can be observed that there may be some misjudged areas caused by misclassification within the focused or defocused areas. Morphological techniques are used to filter out these misclassifications to obtain the standard normalized clarity score maps, shown in Figure 4c,d and denoted S″_A and S″_B. The location of the uncertain area can then be determined where the focused areas of Figure 4c,d overlap.

Finally, the initial decision map D̃ is obtained by combining S″_A and S″_B: pixels judged focused only in S″_A are labeled as focused in source A, pixels judged focused only in S″_B are labeled as focused in source B, and the remaining conflicting pixels form the uncertain area, shown as white pixels in Figure 4e. In order to make the size of the decision map consistent with the source images, an upsampling operation is also applied to the initial decision map.

The next target is to generate the final decision map. As mentioned above, there is still an uncertain area in the initial decision map D̃, so further analysis and processing of this area is needed. We use the spatial frequency method to divide the pixels of the uncertain area in D̃ into two categories, focused and defocused, to obtain a final decision map containing only focused and defocused areas. The spatial frequency at a point (x, y) can be written as

SF(x, y) = sqrt( (1/|Ω|) Σ_{(u,v)∈Ω} [ (I(u, v) − I(u, v−1))^2 + (I(u, v) − I(u−1, v))^2 ] ),

where I is the input image, Ω is a 7 × 7 window centered on the point (x, y), and the two squared terms are the horizontal and vertical differences of the pixel values, respectively. The larger the spatial frequency value, the higher the clarity of the point. Points in the uncertain area of the initial decision map D̃ can thus be classified according to the following decision rule: if the spatial frequency values of a corresponding uncertain pixel in the two source images are SF_A(x, y) and SF_B(x, y) and SF_A(x, y) > SF_B(x, y), the pixel is determined to be focused in source A, and vice versa. Based on this, the final decision map D can be obtained, as shown in Figure 4f.

Fusion

Based on the final decision map D, the fused image I_F can be simply obtained by

I_F(x, y) = D(x, y) I_A(x, y) + (1 − D(x, y)) I_B(x, y).

However, in this way, the pixels in the transitional area are effectively averaged, which can cause undesirable effects such as the edge-blocking effect and the artificial-edge effect. Suppressing these effects is difficult because pixel classification in the transitional area faces the following difficulties: the difference in the clarity of the pixels is small, the gray-level change is irregular, and traditional classification methods struggle to make an accurate division. For the transitional area, we therefore choose a fusion method based on sparse representation.
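A minimal sketch of the spatial-frequency rule used to resolve the uncertain area, followed by the decision-map fusion above. SF is taken as the root-mean-square of horizontal and vertical first differences over the 7 × 7 window, and the 0.5 label for uncertain pixels is an assumption made for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_frequency(img, win=7):
    """Local spatial frequency: RMS of horizontal/vertical differences."""
    img = img.astype(float)
    dh = np.zeros_like(img)
    dv = np.zeros_like(img)
    dh[:, 1:] = np.diff(img, axis=1)  # horizontal differences
    dv[1:, :] = np.diff(img, axis=0)  # vertical differences
    return np.sqrt(uniform_filter(dh ** 2, win) + uniform_filter(dv ** 2, win))

def resolve_uncertain(decision, img_a, img_b):
    """Assign uncertain pixels (label 0.5, assumed) to the sharper source."""
    sf_a, sf_b = spatial_frequency(img_a), spatial_frequency(img_b)
    out = decision.copy()
    unc = decision == 0.5
    out[unc] = np.where(sf_a[unc] > sf_b[unc], 1.0, 0.0)
    return out

# Toy example: initial map with an uncertain stripe along the boundary.
rng = np.random.default_rng(2)
A, B = rng.random((32, 32)), rng.random((32, 32))
D0 = np.ones((32, 32)); D0[:, 16:] = 0.0; D0[:, 14:18] = 0.5
D = resolve_uncertain(D0, A, B)
fused = D * A + (1 - D) * B  # decision-map fusion, as in the text
print(np.unique(D))          # only 0 and 1 remain
```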
The determination of the transitional area and the specific fusion algorithm are as follows:

• Taking the boundary line of the final decision map D as the center, an appropriate radius (3-5 pixels) is set, and the corresponding rectangular area is delineated as the transitional area R.
• The patches of the transitional area are sparsely coded over the dictionary Φ, with their positions recorded in Λ; below, j is the column index of the sparse coefficient matrix, and τ is the index of the atom in the dictionary Φ.
• The fused vector V_F without the DC components is obtained by selecting, for each column j, the sparse coefficient vector with the larger activity level from the two sources.
• The fused DC component obeys the modified rule described below.
• Each column vector v_F^j in V_F is reshaped into a block of size √n × √n and then overlaid at its recorded position in Λ.
• Finally, the transitional area fused image based on the sparse representation and the fused image I_F based on the final decision map are averaged to generate the final fused image.

As shown in Figure 5, compared with the fused image based on the final decision map alone, the final fused image is significantly clearer at the "brim edge" and "sweater texture".

In the DC-component fusion step, most existing fusion methods calculate the fused DC components using a simple average. However, this easily produces fuzzy effects around some strong edges due to the large change in brightness; the main reason is that the energy of the region with high brightness diffuses into the region with low brightness when out of focus. Therefore, we modify the fusion rule for DC components: when the DC components from the different source images are close to each other, we choose the average operation; otherwise, the minimal DC component is selected.

Experiment and Analyses

This section verifies the effectiveness of the proposed method by experimenting with different types of source images. The fusion results of the proposed method are compared with several existing fusion algorithms, including DCHWT [6], SOMP [19], GF [15], IM [16], and CNN [30].

Source Images

The experiment was performed on two image datasets. The first included eight pairs of popular multi-focus source images, as shown in Figure 6 [31]. The other was composed of 20 pairs of color multi-focus images selected from the Lytro picture gallery, as shown in Figure 7 [32].
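A minimal sketch of the modified DC-component rule (average when close, otherwise take the minimum); the closeness threshold is an assumption, since its value is not stated here:

```python
import numpy as np

def fuse_dc(dc_a, dc_b, tol=10.0):
    """Fuse per-patch DC (mean intensity) components.

    Average when the two DC values are close (within `tol`, an assumed
    threshold); otherwise keep the minimum, so that energy from a bright
    defocused region does not bleed into a darker focused one.
    """
    dc_a, dc_b = np.asarray(dc_a, float), np.asarray(dc_b, float)
    close = np.abs(dc_a - dc_b) <= tol
    return np.where(close, (dc_a + dc_b) / 2.0, np.minimum(dc_a, dc_b))

print(fuse_dc([100.0, 200.0], [104.0, 120.0]))  # [102. 120.]
```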
Parameter Setting

Image patches of size 8 × 8 were used in the computation of sparse coefficients for each pixel location. The block size of the sliding window used for clarity level comparison in the clarity score map was also fixed to 8 × 8. The threshold K_1 for binarizing the clarity score maps was set as K_1 = 0.65. The overcomplete dictionary Φ used in sparse representation had a size of 64 × 256 and was trained globally on a large set of natural images. The residual error of the SOMP algorithm was set as ε = 5. The DCHWT method was implemented based on multiscale transform toolboxes downloaded from MATLAB Central [33], and its level of wavelet decomposition was set to 4. The codes for the GF and IM methods can be found on Xu Dongkang's homepage [34], and the codes for the NSCT-PCNN are available on Qu Xiaobo's homepage [35]. The parameters of these methods were set to their recommended values.

Objective Evaluation Metrics

To evaluate the fusion quality of the different fusion methods, three fusion quality metrics were utilized in our experiments. A larger value of each metric indicates better fusion quality.

1. Normalized mutual information (MI), Q_MI [36]: Q_MI is used to overcome a deficit of MI [37] and is defined as

Q_MI = 2 [ MI(A, F) / (H(A) + H(F)) + MI(B, F) / (H(B) + H(F)) ],

where H(X) is the entropy of image X and MI(X, Y) is the mutual information between images X and Y. Q_MI measures the amount of information in the fused image inherited from the source images.

2. Petrovic's metric, Q^(AB/F) [38]: Q^(AB/F) evaluates the fusion performance by measuring the amount of gradient information transferred from the source images into the fused image. It is calculated by

Q^(AB/F) = Σ_{i,j} [ Q^(AF)(i, j) W^A(i, j) + Q^(BF)(i, j) W^B(i, j) ] / Σ_{i,j} [ W^A(i, j) + W^B(i, j) ],

where Q^(AF)(i, j) = Q^(AF)_g(i, j) · Q^(AF)_o(i, j); Q^(AF)_g(i, j) and Q^(AF)_o(i, j) are the gradient magnitude and orientation preservation values at pixel location (i, j), respectively, and Q^(BF) is computed similarly to Q^(AF).
W^A(i, j) and W^B(i, j) are the weights of Q^(AF)(i, j) and Q^(BF)(i, j), respectively.

3. The quality index visual information fidelity for fusion (VIFF) [39]: this is a multiresolution image fusion metric based on visual information fidelity. To calculate the VIFF, the images are divided into blocks in each sub-band, and the visual information in each block is measured using several models, including the Gaussian scale mixture (GSM) model, the human visual system (HVS) model, and the distortion model. The VIFF of each sub-band is then calculated, and an overall quality measure is determined by weighting.
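A minimal sketch of the Q_MI metric defined in item 1 above, with entropies and mutual information estimated from 256-bin histograms (the binning choice is an assumption; Q^(AB/F) and VIFF are more involved and are not sketched):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of an 8-bit grayscale image, in bits."""
    p, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(x, y, bins=256):
    """MI(X, Y) from the joint histogram of two equal-size images."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(),
                               bins=bins, range=[[0, 256], [0, 256]])
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])))

def q_mi(a, b, f):
    """Q_MI = 2[MI(A,F)/(H(A)+H(F)) + MI(B,F)/(H(B)+H(F))]."""
    return 2 * (mutual_information(a, f) / (entropy(a) + entropy(f))
                + mutual_information(b, f) / (entropy(b) + entropy(f)))

rng = np.random.default_rng(3)
A = rng.integers(0, 256, (64, 64))
B = rng.integers(0, 256, (64, 64))
F = (A.astype(int) + B) // 2  # toy "fused" image
print(round(q_mi(A, B, F), 3))
```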
Evaluation on Popular Multi-Focus Images

In this section, we demonstrate the advantages of the proposed method (DMSR) on popular multi-focus images. As an example, the fused images of the "Lab" pair (640 × 480) produced by the different fusion methods are presented in Figure 8c-h; the "Lab" source images are shown in Figure 8a,b. For better comparison, we also present the normalized difference images between the correctly focused source image and the fusion results in Figure 9. It can be observed that the fused images obtained by the DCHWT and SOMP methods showed serious artifacts and visible fake edges around the "man". The GF method had ringing artifacts and blurring effects near the "man", and the IM method suffered from blurring effects near the "man's hair". The CNN method could achieve better fusion quality, but some small defects could still be found on careful observation, such as barely perceptible artificial flaws on the "table" (see the lower middle of Figure 9e). Comparatively, the DMSR produced the best fused image.

As another example, the fusion results of the "Flowerpot" image pair (944 × 736) are shown in Figure 10c-h, and the normalized difference images between the correctly focused source image and the fusion results are shown in Figure 11. Similar to the previous example, the DCHWT and SOMP methods produced serious artifacts around the "horologe". The fused image obtained by the GF method suffered from a ringing effect, and the edges of the "horologe" were blurred. The results of the IM method also showed similar artifacts near the "horologe". Although the CNN method performed well overall, it exposed obvious artifacts on the "ground" and the "wall" of the fused image. Comparatively, the DMSR method exhibited the best visual quality.

To evaluate fusion performance more objectively, each pair of popular multi-focus images was fused by the six fusion methods. The values of the metrics Q_MI, Q^(AB/F), and VIFF were calculated and are recorded in Table 1, with the best results indicated in bold. It can be seen that the DMSR method outperformed all the other methods and won in almost all the quality metrics.

Evaluation on Lytro Image Dataset

The Lytro image dataset was composed of 20 color multi-focus image pairs of the same size (520 × 520). For visual evaluation, the fused results of the "Lytro17" image pair obtained by the different fusion methods are demonstrated in Figure 12.
In order to observe the fusion effect of the transitional area more intuitively, some details of the puppy have been cropped and enlarged. The DCHWT method still exhibited undesirable ringing artifacts around the head, as shown in Figure 12c; the same phenomenon can also be seen in Figure 12d,e,g. As shown in the close-up views of Figure 12f, the IM method suffered from severe blurring effects and false edges. Comparatively, the DMSR method produced ideal fusion images without perceptible artifacts along the focus boundary. Further, the quantitative assessments of the six methods are shown in Figure 13. The charts show that the proposed method outperformed the others and obtained the best quality metrics.

Evaluation on Three Multi-Focus Images

Our method is also suitable for more than two multi-focus images. The three source images for "Toy" (512 × 512) are shown in Figure 14a-c, with close-up views shown at the bottom for better observation. Figure 14d,e show that the fused images obtained by the DCHWT and SOMP methods exhibited serious blurring effects at the "ball" in the right corner. The GF fusion method produced jagged edges around the "puppet", as shown in Figure 14f. The IM fusion method exhibited slight blurry artifacts in the upper-right corner of the "ball", as shown in Figure 14g. Compared with the other methods, the CNN and DMSR performed well: as shown in Figure 14h,i, all focused areas from the source images were merged into the fusion image with imperceptible artifacts. The values of Q_MI, Q^(AB/F), and VIFF for the various fusion methods are presented in Table 2, with the best results indicated in bold.

Conclusions

In this paper, we propose a new multi-focus image fusion method based on decision map and sparse representation. By generating the initial decision map through focus feature analysis of low-scale images, not only can performance be guaranteed but the computational complexity can also be effectively reduced. Addressing the difficulty of making decisions in the transitional area, we used a fusion algorithm based on sparse representation to fuse this area directly, effectively reducing the error caused by incorrect judgments while ensuring the quality of fusion. In addition, the fusion method is generalized to be capable of fusing more than two images.
Experimental results show that the fusion method proposed in this paper has better fusion quality than other methods, both in terms of visual perception and objective measurement. In the future, we plan to evaluate whether the method proposed here can be applied to multi-focus image fusion in dynamic scenes.

Author Contributions: B.L. and H.C. conceived and designed the algorithm; B.L. and H.C. performed the experiments; W.M. analyzed the data and contributed reagents/materials/analysis tools; B.L. and H.C. wrote the paper; W.M. provided technical support and revised the paper.
Ethyl 2-[({[4-amino-5-cyano-6-(methylsulfanyl)pyridin-2-yl]carbamoyl}methyl)sulfanyl]acetate monohydrate

The title compound, C13H16N4O3S2·H2O, crystallizes in a 'folded' conformation with the ester group lying over the carbamoyl moiety, with one solvent water molecule. The molecular conformation is stabilized by an intramolecular C—H⋯O hydrogen bond and an N—H⋯O hydrogen-bonding interaction involving the lattice water molecule. The packing involves N—H⋯N, N—H⋯O, O—H⋯N and O—H⋯O hydrogen bonds and consists of tilted layers running approximately parallel to the c axis, with the ester groups on the outer sides of the layers and with channels running parallel to (101).

Comment

A great deal of interest has been focused on the synthesis of functionalized pyridine derivatives due to their biological activities (Shi et al., 2005). For example, some 2-pyridine radicals are incorporated into the structures of cardiotonic agents such as milrinone (Dorigo et al., 1993) and HIV-1-specific transcriptase inhibitors (Dolle et al., 1995). Aminocyanopyridines have been identified as IKK-β inhibitors (Murata et al., 2003). Many pyridine derivatives are of commercial interest, being used as herbicides, fungicides, pesticides, and dyes (Lohray et al., 2004; Merja et al., 2004; Chaki et al., 1995; Thomae et al., 2007). In addition, pyridine derivatives are important and useful intermediates in the preparation of a variety of heterocyclic compounds (Konda et al., 2010). In view of these observations, and in continuation of our work on the synthesis of heterocyclic systems for biological evaluation, we report here the synthesis and crystal structure of the title compound.

The title compound (Fig. 1) crystallizes in a 'folded' conformation with the ester group lying over the carbamoyl moiety such that the dihedral angle between the best planes through the pyridyl ring and the C11-C13/O3 unit is 22.4 (1)°. The molecular conformation is stabilized by an intramolecular C—H⋯O hydrogen bond forming an S(6) motif (Fig. 1; Bernstein et al., 1995) and an N—H⋯O hydrogen-bonding interaction involving the lattice water molecule. This conformation appears to result from the several hydrogen-bonding interactions involving the lattice water molecule (Fig. 2 and Table 1). The packing consists of tilted layers running approximately parallel to the c axis (Fig. 3), with the ester groups on the outer sides of the layers and channels running parallel to (101) (Fig. 4).

Refinement

H atoms attached to carbon were placed in calculated positions (C—H = 0.95-0.98 Å), while those attached to nitrogen were placed in locations derived from a difference map and their coordinates adjusted to give N—H = 0.91 Å. All were included as riding contributions with isotropic displacement parameters 1.2-1.5 times those of the attached atoms.
Figure 1 Perspective view of the asymmetric unit with 50% probability ellipsoids and hydrogen bonds depicted by dashed lines.

Figure 3 Packing projected along the c axis showing the tilted layers.

Figure 4 Packing viewed along the axis of the channels.

Displacement ellipsoids for non-H atoms are drawn at the 50% probability level.

Special details

Geometry. Bond distances, angles, etc. have been calculated using the rounded fractional coordinates. All s.u.'s are estimated from the variances of the (full) variance-covariance matrix. The cell e.s.d.'s are taken into account in the estimation of distances, angles and torsion angles.

Refinement. Refinement on F² for all reflections except those flagged by the user for potential systematic errors. Weighted R-factors wR and all goodnesses of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The observed criterion of F² > σ(F²) is used only for calculating R-factor (obs) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on all data will be even larger.
1,010.8
2014-06-04T00:00:00.000
[ "Chemistry", "Materials Science" ]
The thermal conductivity of the Earth's core and implications for its thermal and compositional evolution

Abstract: Determining the thermal conductivity of iron alloys at high pressures and temperatures is essential for understanding the thermal history and dynamics of the Earth's metallic core. The authors summarize relevant high-pressure experiments using a diamond-anvil cell and discuss the implications of high core conductivity for its thermal and compositional evolution.

EARTH SCIENCES. Special Topic: Key Problems of the Deep Earth. Kenji Ohta and Kei Hirose.

The thermal conductivity of iron alloys is key to understanding the mechanism of convection in the Earth's liquid core and its thermal history. The Earth's magnetic field is generated by a dynamo action that requires convection in the liquid core. Present-day outer core convection can be driven by the buoyancy of light-element-enriched liquid that is released upon inner core solidification, in addition to thermal buoyancy associated with secular cooling. In contrast, before the birth of the inner core, the core heat loss must have exceeded the heat conducted down the isentropic gradient in order to drive convection by thermal buoyancy alone, which places a tight constraint on the core's thermal evolution.

Recent mineral physics studies throw the traditional value of the Earth's core thermal conductivity into doubt (Fig. 1). Conventionally, the thermal conductivity of the outer core had been considered to be ∼30 W m⁻¹ K⁻¹, an estimate based on shock experiments and simple physical models including the Wiedemann-Franz law, κ_el = LT/ρ, where κ_el, L, T and ρ are the electronic thermal conductivity, Lorenz number, temperature and electrical resistivity, respectively [1]. Such relatively low core conductivity implies that liquid core convection could have been driven thermally even with a relatively slow cooling rate.

Figure 1 caption (fragment): Data from refs [1,2,4-7,9,16]. Filled symbols were calculated on the basis of the Wiedemann-Franz law with the ideal Lorenz number (L₀ = 2.44 × 10⁻⁸ W Ω K⁻²). Gray bands indicate (a) the range of saturation resistivity [9] and (b) the thermal conductivity computed from the saturation resistivity and the Wiedemann-Franz law.

However, in 2012-2013, this conventional view was challenged by both computational and experimental studies showing much higher core conductivity [2-4]. Since then, experimental determinations of the thermal conductivity of iron and its alloys have been controversial (Fig. 1). Ohta et al. [5] measured the electrical resistivity of iron under core conditions in a laser-heated diamond-anvil cell (DAC). The results indicate a relatively high thermal conductivity of ∼90 W m⁻¹ K⁻¹ for liquid Fe-Ni-Si alloy, based on their measured resistivity for pure iron, Matthiessen's rule and the Wiedemann-Franz law, which is compatible with ab initio simulations [2,4]. On the other hand, flash laser-heating and fast thermal radiation detection experiments indicated a low core conductivity of 20-35 W m⁻¹ K⁻¹ based on finite element method simulations [6,7], in accordance with the traditional estimate [1]. Since transport properties describing non-equilibrium phenomena are difficult to measure, the fact that determinations of iron conductivity under core conditions have become viable is a remarkable success in mineral physics.
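As a rough illustration of how such conductivity estimates follow from resistivity data, the minimal sketch below applies the Wiedemann-Franz law with the ideal Lorenz number; the temperature and resistivity inputs are assumptions chosen only to land near the reported ∼90 W m⁻¹ K⁻¹ values, not numbers taken from any specific experiment.

```python
# A minimal sketch: electronic thermal conductivity from the Wiedemann-Franz
# law, kappa_el = L*T/rho, with the ideal Lorenz number L0. The temperature
# and resistivity inputs below are assumed example values.
L0 = 2.44e-8  # ideal Lorenz number, W Ohm K^-2

def kappa_el(temperature_k: float, resistivity_ohm_m: float,
             lorenz: float = L0) -> float:
    """Electronic thermal conductivity in W m^-1 K^-1."""
    return lorenz * temperature_k / resistivity_ohm_m

# Assumed CMB-like conditions: T ~ 4000 K and a resistivity ~ 1.1e-6 Ohm m,
# of the order of the saturation resistivity discussed above.
print(kappa_el(4000.0, 1.1e-6))  # ~88.7, i.e. of the order of 90 W m^-1 K^-1
```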
Nevertheless, the discrepancy in core conductivity makes a big difference to the expected age of the inner core, the mechanism of liquid core convection and the thermal history [3]. Despite a number of subsequent studies based on a variety of different techniques, we still see a dichotomy of proposed core conductivity values (Fig. 1). The 'saturation' resistivity, which derives from the fact that the mean free path for electron-phonon scattering cannot be shorter than the interatomic distance, gives a lower bound for the conductivity. Such saturation resistivity lies between the two clusters of reported high and low resistivity values. While resistivity saturation is important in highly resistive transition metals and their alloys [3,8] (Fig. 2), the conventional estimate [1] did not include the effect of saturation, which resulted in much higher resistivity than the saturation value and hence low core conductivity. The core electrical resistivity measured in recent DAC experiments [3,5,9] shows resistivity saturation (Fig. 2), demonstrating high core conductivity insofar as the Wiedemann-Franz law holds with the ideal Lorenz number (Fig. 1). Additionally, since temperature has a large effect on resistivity, the temperature gradient in a laser-heated sample is an issue. An internally-resistance-heated DAC provides homogeneous and stable sample heating and is thus a promising technique for conductivity measurements at high pressure and temperature (P-T) [9]. The validity of the Wiedemann-Franz law under extreme conditions has also been questioned; simultaneous measurements of the electrical resistivity and the thermal conductivity of iron alloy under core P-T conditions would provide decisive evidence. As introduced above, the most recent high P-T measurements for Fe containing 2, 4 and 6.5 wt.% Si using an internally-resistance-heated DAC have demonstrated that the thermal conductivity of Fe-12.7 wt.% (22.5 at.%) Si is ∼88 W m⁻¹ K⁻¹ at core-mantle boundary (CMB) conditions when the effects of resistivity saturation, melting and crystallographic anisotropy in the measurements are taken into account [9] (Fig. 1). The thermal conductivity of Fe-10 at.% Ni-22.5 at.% Si alloy, a possible outer core composition, could be ∼79 W m⁻¹ K⁻¹ considering the impurity effect of Ni [10]. Si exhibits the largest 'impurity resistivity', indicating that 79 W m⁻¹ K⁻¹ is a lower bound for the thermal conductivity of the Earth's liquid core. The core thermal evolution models of Labrosse [11] demonstrate that, if liquid core convection has been driven by thermal buoyancy with a core thermal conductivity of 79 W m⁻¹ K⁻¹ at the CMB and no radiogenic heating in the core, the CMB temperature is calculated to be ∼5500 K at 3.2 Ga and ∼4800 K at 2.0 Ga. Such high CMB temperatures suggest that the whole mantle was fully molten until 2.0-3.2 Ga. This is not consistent with geological records, calling for a different mechanism of core convection. Chemical buoyancy may be an alternative means of driving convection in the core from the early history of the Earth. It has been proposed that compositional buoyancy in the core could arise from the exsolution of MgO, SiO₂ or both [12-14]. Recent core formation models based on the core-mantle distributions of siderophile elements suggest that core metals segregated from silicate at high temperatures, typically 3000-4000 K and possibly higher [13,15], which enhances the incorporation of lithophile elements, including Si and O, and possibly Mg, into the metal.
It has been suggested that the (Si, O)-rich liquid core may have become saturated with SiO₂ upon secular cooling [14]. Indeed, the original core compositions proposed in recent core formation models include Si and O beyond the saturation limit at CMB conditions [15], i.e. 136 GPa and 4000 K, leading to SiO₂ crystallization [13]. The rate of SiO₂ crystallization required to sustain the geodynamo is as low as 1 wt.% per 10⁹ years, which corresponds to a cooling rate of 100-200 K Gyr⁻¹ [14]. The most recent model of core compositional evolution by Helffrich et al. [13] showed that MgO saturation follows SiO₂ saturation only when the core contains >1.7 wt.% Mg. If this is the case, then in addition to solid SiO₂, (Mg, Fe)-silicate melts exsolve from the core and transfer core-hosted elements such as Mo, W and Pt to the mantle. The core-derived silicate melts may have evolved toward FeO-rich compositions and may now represent the ultra-low velocity zones above the CMB.
1,754.2
2020-12-26T00:00:00.000
[ "Physics" ]
Unexpected crossovers in correlated random-diffusivity processes

The passive and active motion of micron-sized tracer particles in crowded liquids and inside living biological cells is ubiquitously characterised by "viscoelastic" anomalous diffusion, in which the increments of the motion feature long-ranged negative and positive correlations. While viscoelastic anomalous diffusion is typically modelled by a Gaussian process with correlated increments, so-called fractional Gaussian noise, an increasing number of systems are reported in which viscoelastic anomalous diffusion is paired with non-Gaussian displacement distributions. Following recent advances in Brownian yet non-Gaussian diffusion, we here introduce and discuss several possible versions of random-diffusivity models with long-ranged correlations. While all these models show a crossover from non-Gaussian to Gaussian distributions beyond some correlation time, their mean squared displacements exhibit strikingly different behaviours: depending on the model, crossovers from anomalous to normal diffusion are observed, as well as unexpected dependencies of the effective diffusion coefficient on the correlation exponent. Our observations of the strong non-universality of random-diffusivity viscoelastic anomalous diffusion are important for the analysis of experiments and for a better understanding of the physical origins of "viscoelastic yet non-Gaussian" diffusion.

For these models driven by long-range correlated Gaussian noise, we demonstrate that their similarity in the Brownian case disappears in the anomalous-diffusion case. We present detailed results for this non-universality in the viscoelastic anomalous diffusion case in terms of the time evolution of the MSDs, the effective diffusivities, and the PDFs of these processes. Specifically, we show that in some cases anomalous diffusion persists beyond the correlation time, while in others normal diffusion emerges. Comparing our theoretical predictions with experiments will allow us to pinpoint more precisely the exact mechanisms of viscoelastic yet non-Gaussian diffusion, with its high relevance to crowded liquids and live cells.

FBM-generalisation of the minimal diffusing-diffusivity model

We first analyse the FBM-generalisation of our minimal DD model [7], whose Langevin equation for the particle position reads

dx(t)/dt = √(2D(t)) ξ_H(t)   (1)

in dimensionless form (see appendix A). The dynamics of D(t) is assumed to follow the square of an auxiliary Ornstein-Uhlenbeck process Y(t) [7],

D(t) = Y²(t),   dY(t)/dt = −Y(t) + η(t).   (2)

In the above, ξ_H(t) represents fractional Gaussian noise, understood as the derivative of smoothed FBM with zero mean and autocovariance ⟨ξ²_H⟩_τ ≡ ⟨ξ_H(t) ξ_H(t + τ)⟩ [35,36], decaying as ⟨ξ²_H⟩_τ ∼ H(2H − 1) τ^(2H−2) for τ longer than the physically infinitesimal (smoothening) time scale δ [35]. η(t) is a zero-mean white Gaussian noise of unit variance. We assume equilibrium initial conditions for Y(t), i.e., Y(0) is taken randomly from the equilibrium distribution f_eq(Y) = π^(−1/2) exp(−Y²) [7,17]. Thus the process Y(t) is stationary with variance ⟨Y²⟩ = ⟨D⟩ = 1/2. The autocorrelation is ⟨Y(t)Y(t + τ)⟩ = exp(−|τ|)/2, with unit correlation time in our dimensionless units. From equation (1) we obtain the MSD (see appendix B)

⟨x²(t)⟩ = 4 ∫₀ᵗ (t − τ) K(τ) ⟨ξ²_H⟩_τ dτ   (4)

with kernel K(τ) = ⟨√(D(t₁)D(t₂))⟩ = (1/π)[b(τ) + a(τ) arctan(a(τ)/b(τ))], where τ = |t₁ − t₂|, a(τ) = e^(−τ), and b(τ) = √(1 − a²(τ)). We first demonstrate how to obtain the main results for the MSD from simple estimates at times short and long compared to the correlation time of the D(t) dynamics (a direct simulation sketch of the model is given below).
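The following minimal simulation sketch illustrates equations (1) and (2); the discretisation choices (Euler-Maruyama for Y(t), Cholesky factorisation of the exact FBM-increment covariance for the noise) and all numerical parameters are our assumptions, not the authors' code.

```python
# A minimal simulation sketch of the FBM-DD model, eqs (1) and (2):
# dx = sqrt(2 D(t)) dB_H with D(t) = Y(t)^2 and Y an OU process (tau_c = 1).
import numpy as np

rng = np.random.default_rng(1)
H, dt, N, n_traj = 0.7, 0.01, 512, 1000

# Exact covariance of FBM increments over steps of size dt, as a function of lag.
k = np.arange(N)
gamma = 0.5 * dt**(2 * H) * (np.abs(k + 1)**(2 * H) + np.abs(k - 1)**(2 * H)
                             - 2.0 * np.abs(k)**(2 * H))
cov = gamma[np.abs(k[:, None] - k[None, :])]
Lchol = np.linalg.cholesky(cov + 1e-12 * np.eye(N))

x2 = np.zeros(N)
for _ in range(n_traj):
    dB = Lchol @ rng.standard_normal(N)      # correlated FBM increments
    Y = np.empty(N)
    Y[0] = rng.normal(0.0, np.sqrt(0.5))     # equilibrium initial condition
    for n in range(N - 1):                   # dY = -Y dt + dW
        Y[n + 1] = Y[n] - Y[n] * dt + np.sqrt(dt) * rng.standard_normal()
    x = np.cumsum(np.sqrt(2.0 * Y**2) * dB)  # dx = sqrt(2 D(t)) dB_H
    x2 += x**2
msd = x2 / n_traj
t = dt * (k + 1)
# Short times: msd/t^(2H) ~ 1; long times it should drift towards 2/pi ~ 0.64.
print(msd[9] / t[9]**(2 * H), msd[-1] / t[-1]**(2 * H))
```

Convergence to the long-time prefactor is slow, consistent with the remark on slow convergence made for figure 1 below.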
As the diffusion coefficient does not change considerably at times shorter than the correlation time, K(τ) ≈ ⟨√(D(t)D(t))⟩ = ⟨D⟩ = 1/2, and equation (4) yields

⟨x²(t)⟩ ≃ t^(2H).   (5)

For long times t ≫ 1, more care is needed: as we will see, the long-time limit is different for the persistent and anti-persistent cases. For the persistent case H > 1/2, we assume that the main contribution to the integral in equation (4) at long times comes from large τ, since the noise autocorrelation decays very slowly. We thus approximate K(τ) ≈ ⟨|Y(t)|⟩⟨|Y(t + τ)|⟩ = ⟨|Y(t)|⟩² = 1/π. Then,

⟨x²(t)⟩ ≃ (2/π) t^(2H).   (6)

In the anti-persistent case H < 1/2, we split equation (4) into two integrals, 4t ∫₀ᵗ K(τ)⟨ξ²_H⟩_τ dτ − 4 ∫₀ᵗ τ K(τ)⟨ξ²_H⟩_τ dτ. In the first integral, at long times it is legitimate to replace the upper limit by infinity, since the integral converges (if the diffusivity were constant, K(τ) would be constant as well and this approximation could not be used). The second integral produces a subleading term, since it is bounded from above by C t^(2H), C being a constant. We therefore have the following asymptotic result for the MSD in the anti-persistent case at long times,

⟨x²(t)⟩ ≃ 2 D_eff t,   with D_eff = lim_(δ→0) 2 ∫₀^∞ K(τ) ⟨ξ²_H⟩_τ dτ.   (7)

Thus, the FBM-DD model demonstrates surprising crossovers in the behaviour of the MSD. In the persistent case the MSD scales as t^(2H) at both short and long times, but with different diffusion coefficients. This is in sharp contrast with Brownian yet non-Gaussian diffusion, which is characterised by the same, invariant diffusivity at all times. In the anti-persistent case the situation is even more counterintuitive: the subdiffusive scaling of the MSD at short times crosses over to normal diffusion at long times. The behaviour of the MSD is shown in figure 1.

Figure 1 caption (fragment): Left: MSD from simulations of equation (1) and the exact MSD (C.13) for H = 1, as well as numerical integration of equation (4) for different H. The MSD approaches the limits (dashed lines) t^(2H) at short times and, at long times, anomalous [(2/π) t^(2H)] or normal [2 D_eff t] scaling for super- and subdiffusion, respectively. Middle: effective diffusion coefficient as a function of H. The theoretical curve [equation (D.10) for H < 1/2 and 1/π for H > 1/2] shows a distinct discontinuity at the Brownian value H = 1/2. Results from numerical evaluation of equations (D.1) and (4), and from simulations, gradually approach the theoretical values (see text and appendix D). Right: crossover of the PDF from the short-time non-Gaussian form with exponential tails to the long-time Gaussian, described in terms of the kurtosis (see figure E1).

For superdiffusion, the change of diffusivity between the short- and long-time superdiffusive scaling t^(2H) is distinct. Excellent agreement is observed between the exact and numerical evaluations for H = 1 and H = 0.7, 0.8, respectively. The exact analytical expression for H = 1 is derived in appendix C. In the subdiffusive case, simulations and numerical evaluation nicely coincide and show the crossover from subdiffusion to normal diffusion. Figure 1 also shows the effective long-time diffusivity. For superdiffusion, the constant value 2/π ≈ 0.63 [see equation (6)] is distinct from the H-dependence for subdiffusion [H < 1/2, see equation (D.10)]. For the Brownian case, D_eff = 1/2, leading to a distinct discontinuity at H = 1/2. Note the slow convergence of the simulation results and the numerical evaluation of the respective integrals to the theory (see appendix D for details).
Given the above arguments that at short times (t ≪ 1) the diffusivity is approximately constant, we expect that in this regime the PDF corresponds to the superstatistical average of a single Gaussian over the stationary diffusivity distribution of the Ornstein-Uhlenbeck process,

P(x, t) = K₀(|x| / √(⟨x²(t)⟩_ST)) / (π √(⟨x²(t)⟩_ST)),   (8)

where ⟨x²(t)⟩_ST = t^(2H) and K₀ is the modified Bessel function of the second kind [7]. In the relevant large-value limit of the scaling variable z = x t^(−H), the Bessel function has the expansion K₀(z) ∼ √(π/(2z)) exp(−z) and thus produces the desired exponential tails, with a power-law correction [7]. (Such sub-dominant power-law corrections may indeed account for the deviations from the pure exponential shape of the PDF reported in [3]; however, many experimental data sets may not have sufficient resolution at smaller z to pin down sub-dominant corrections.) For long times (t ≫ 1) the diffusivity correlations decay and the Gaussian limit P(x, t) = G(⟨x²(t)⟩_LT) is recovered, where we introduce the general definition G(σ²) = (2πσ²)^(−1/2) exp(−x²/(2σ²)). For H > 1/2, the long-time MSD is ⟨x²(t)⟩_LT = (2/π) t^(2H), while for H < 1/2, ⟨x²(t)⟩_LT = 2 D_eff t. The crossover behaviour of P(x, t) is indeed corroborated in figure 1 for different values of H. How do these observations compare to generalisations of other established random-diffusivity models? While in the normal-diffusive regime these models encode very similar behaviour, we now show that striking differences in the dynamics emerge when the motion is governed by long-range correlations.

FBM-generalisation of the Tyagi-Cherayil (TC) model

The generalisation of the Tyagi-Cherayil (TC) model [16] in dimensionless units reads

dx(t)/dt = √2 Z(t) ξ_H(t),   dZ(t)/dt = −Z(t) + η(t),   (10)

obtained from the original equations (appendix E) via the transformation t → t/τ_c. Using the same notation as before, η(t) represents zero-mean white Gaussian noise and ξ_H(t) is fractional Gaussian noise with Hurst exponent H.

Figure 2 caption (fragment): The exact result (E.14) gradually converges to the theoretical curve for different δ and t. Right: crossover of the PDF from the short-time non-Gaussian shape with exponential tails to a long-time Gaussian; the crossover is described in terms of the kurtosis in appendix E.

The TC model looks quite similar to the minimal DD model as a stochastically modulated Brownian motion; however, there exists a decisive difference: in equations (10) the OU process Z(t) enters without the absolute value used in the minimal DD model (1). In expression (10) the prefactor Z(t) is therefore not a diffusion coefficient (by definition, a non-negative quantity). In the case H = 1/2, the analysis in [16] shows that on the level of the diffusion equation the quantity Z²(t) (in our notation) takes on the role of the diffusion coefficient, and in this sense is well defined. The extension to fractional Gaussian noise therefore appears justified, yet we stress that the process (10) is intrinsically different from the FBM-DD model (1). As our discussion shows, the close similarity between the TC and DD models in the case H = 1/2 is replaced by a distinct dissimilarity in the emerging dynamics for H ≠ 1/2. The MSD of the FBM-TC model reads

⟨x²(t)⟩ = 4 ∫₀ᵗ (t − τ) K(τ) ⟨ξ²_H⟩_τ dτ,   (11)

where the kernel is now defined as

K(τ) = ⟨Z(t) Z(t + τ)⟩ = e^(−τ)/2.   (12)

It is shown in figure B1 along with the corresponding Langevin simulations. Before presenting the exact solution, let us apply reasoning analogous to that developed for the FBM-DD model above: at short times we approximate K(τ) ≈ ⟨Z²⟩ = 1/2.
Then equation (11) yields ⟨x²(t)⟩ ≃ t^(2H) at short times. At long times the MSD can be decomposed into two parts, ⟨x²(t)⟩ = 4t ∫₀ᵗ K(τ)⟨ξ²_H⟩_τ dτ − 4 ∫₀ᵗ τ K(τ)⟨ξ²_H⟩_τ dτ. The upper limit of the first integral can be replaced by infinity because this integral converges in both the persistent and anti-persistent cases at long times [K(τ) decays to 0 exponentially, in contrast to the FBM-DD model]. The second term is subleading in comparison with the first. As a result, the MSD at long times scales linearly in time, ⟨x²(t)⟩ ≃ 2 D_eff t with D_eff = 2 ∫₀^∞ K(τ)⟨ξ²_H⟩_τ dτ. Indeed, from the exact form of the MSD in appendix E we obtain the limiting behaviours ⟨x²(t)⟩ ≃ t^(2H) at short times and ⟨x²(t)⟩ ≃ Γ(2H + 1) t at long times. Thus, for both sub- and superdiffusion this model shows a crossover from anomalous to normal diffusion, as demonstrated in figure 2. The effective long-time diffusion coefficient in this model varies continuously, as D_eff = Γ(2H + 1)/2 for all H. In particular, this means that for H = 1/2, D_eff = 1/2. Figure 2 shows the exact match between the simulation results and the numerical evaluation at finite integration step. The PDF at short times coincides with the superstatistical limit in expression (8) above, as shown explicitly in equation (E.15). At long times we recover the Gaussian P(x, t) = G(Γ(2H + 1) t). Note that for H = 1 the noise is equal to unity at all times and the dynamics of x(t) is completely determined by the superstatistics encoded in the OU process Z(t). The tails of the PDF are thus always exponential, reflected by the fact that the kurtosis has the invariant value 9 (see appendix E). Despite the strong similarity between the DD and TC models in the Brownian case, for correlated driving noise their detailed behaviour is strikingly dissimilar, owing to the different asymptotic forms of the kernel K(τ) (figure B1).

FBM-generalisation of the switching (S) model

The third model we consider here is the S-model with generalised noise [19],

dx(t)/dt = √(2D(t)) ξ_H(t),   D(t) = D₁ + (D₂ − D₁) n(t),   (14)

where n(t) is a two-state Markov chain switching between the values {0, 1} and ξ_H(t) again represents fractional Gaussian noise. The constants D_i are the diffusivities in the two states. The switching rates are k₁₂ and k₂₁, such that the correlation time is τ_c = 1/(k₁₂ + k₂₁). Note that the S-model (14) for white Gaussian noise with H = 1/2 is well known in the theory of stochastic processes [1,2]; in the nuclear magnetic resonance literature it is known as the Kärger model [20,54]. From the first and second moments of the process θ(t), equations (F.5) and (F.6), we calculate the MSD of the process. In the Brownian limit H = 1/2 the MSD is linear at all times, ⟨x²(t)⟩ = 2⟨D⟩t (15), with ⟨D⟩ = (k₂₁D₁ + k₁₂D₂)/(k₁₂ + k₂₁) the stationary mean diffusivity; this result was also obtained in [20]. For the general case with the correlation function based on fractional Gaussian noise, the MSD follows from equations (F.5) and (F.6). At short times t ≪ τ_c we find the scaling behaviour ⟨x²(t)⟩ ≃ 2⟨D⟩ t^(2H) (17). At long times t ≫ τ_c the same scaling law is obtained for the persistent case (H > 1/2), but with a different prefactor (18). In contrast, for the anti-persistent case (H < 1/2), we derive a crossover to normal diffusion, ⟨x²(t)⟩ ≃ 2 D_eff t (19). From equations (15), (18) and (19), the long-time effective diffusivity (20) can be obtained. The crossover behaviours of the MSD in the persistent and anti-persistent cases, analogous to the difference in the long-time scalings of the FBM-DD model, are displayed in figure 3. We also see some similarities between the FBM-S and FBM-DD models for the effective diffusivity: for the FBM-S model, an H-dependent behaviour for H < 1/2 is followed by a discontinuity at H = 1/2 and then a constant value for H > 1/2. The results for the MSD at finite values of δ and t are given in appendix F. A simulation sketch of the switching dynamics in its Brownian limit is given below; next we discuss the PDF and kurtosis.
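As a sanity check on the Brownian limit (15) just quoted, the sketch below simulates the switching dynamics for H = 1/2 (an assumed simplification: white noise instead of fractional Gaussian noise) and verifies the linear MSD ⟨x²(t)⟩ = 2⟨D⟩t; the rates, diffusivities and step sizes are illustrative choices of ours.

```python
# A minimal sketch of the switching (S) model in its Brownian limit H = 1/2:
# the diffusivity jumps between D1 and D2 via a two-state Markov chain with
# rates k12 and k21, and the MSD should stay linear, <x^2> = 2<D>t.
import numpy as np

rng = np.random.default_rng(2)
D1, D2, k12, k21 = 0.1, 1.0, 2.0, 1.0
dt, N, n_traj = 0.01, 500, 2000
p1 = k21 / (k12 + k21)                      # stationary probability of state 1
D_mean = p1 * D1 + (1 - p1) * D2

x2 = np.zeros(N)
for _ in range(n_traj):
    state = 0 if rng.random() < p1 else 1   # equilibrium initial state (0 <-> D1)
    x, traj = 0.0, np.empty(N)
    for n in range(N):
        D = D1 if state == 0 else D2
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal()
        traj[n] = x
        rate = k12 if state == 0 else k21   # rate of leaving the current state
        if rng.random() < rate * dt:        # small-dt switching approximation
            state = 1 - state
    x2 += traj**2
t = dt * np.arange(1, N + 1)
# Ratio of measured MSD to 2<D>t should be ~1 at all times (within noise).
print(np.allclose(x2 / n_traj / (2 * D_mean * t), 1.0, atol=0.1))
```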
At short times the continuous superstatistics of the previous cases reduces to the discrete case of two superimposed Gaussians, producing the non-exponential form

P(x, t) = p₁ G(2D₁ t^(2H)) + p₂ G(2D₂ t^(2H)),

where p₁ and p₂ denote the stationary occupation probabilities of the two states. At long times a single Gaussian dominates, P(x, t) = G(⟨x²(t)⟩_LT), where ⟨x²(t)⟩_LT is given by equations (18) and (19) for the super- and subdiffusive cases, respectively. Figure 3 shows the superposition of two Gaussians at short times and the single Gaussian at long times.

Conclusions

Viscoelastic anomalous diffusion with long-ranged correlations is a non-Markovian, natively Gaussian process widely observed in complex liquids and in the cytoplasm of biological cells. Most data analyses have concentrated on the MSD and the displacement autocorrelation function. Yet, once probed, the PDF in many of these systems turns out to be non-Gaussian, a phenomenon ascribed to the heterogeneity of the systems. Building on recent results for Brownian yet non-Gaussian diffusion, in which the non-Gaussian ensemble behaviour is understood as a consequence of a heterogeneous diffusion coefficient, we here analysed three different random-diffusivity models driven by correlated Gaussian noise. Despite the simplicity of these models, we observed surprising behaviours. While in the Brownian case all models display a linear MSD with an invariant diffusion coefficient, in the correlated case a crossover occurs from short- to long-time behaviours with respect to the intrinsic correlation times. In particular, whether the long-time scaling of the MSD is anomalous or normal depends on the specific model. Moreover, the effective diffusivity exhibits unexpectedly complex behaviours, with discontinuities in the FBM-DD and FBM-S models. We note that an additional crossover may come into play when a cutoff time scale of the fractional Gaussian noise becomes relevant [55]. In all cases a crossover from an initial non-Gaussian to a Gaussian PDF occurs. We showed that the FBM-S model differs from the other models in that it encodes an initial superposition of two Gaussians, turning into a single Gaussian at long times. We note that while the short-time exponential shape may point towards a universal, extreme-value, jump-dominated dynamics [21], data also show stretched-Gaussian shapes [43], as well as long(er)-time convergence towards an exponential [8]. Clearly, the phenomenology of heterogeneous environments is rich and needs further investigation. From an experimental point of view, the behaviours unveiled here may be used to explore further the relevance of the different possible stochastic formulations of random-diffusivity processes. For instance, in artificially crowded media one may vary the Hurst exponent by changing the volume fraction of crowders or the tracer size, or add drugs to change the system from super- to subdiffusive [30]. Comparison of the resulting scaling behaviours of the MSD and the associated effective diffusivity may then yield decisive clues. The results found here will also be of interest in mathematical finance. In fact, the original DD model is equivalent to the Heston model [56] used to describe the return dynamics of financial markets. Fractional Gaussian noise in mathematical finance is used to include an increased 'roughness' in the emerging dynamics [57]. The different models studied here could thus enrich market models. The CLT is a central dogma in statistical physics, based on the assumption that the contributing random variables are independent and identically distributed.
For inhomogeneous environments, ubiquitous in many complex systems, new concepts generalising the CLT will have to be developed. While random-diffusivity models are a start in this direction and provide relevant strategies for data analyses [58], ultimately more fundamental models including the quenched nature of the disordered environment [23,59] and extensions to non-equilibrium situations [60] need to be conceived.

Acknowledgments. We acknowledge funding from DFG (ME 1535/7-1). RM acknowledges the Foundation for Polish Science (Fundacja na rzecz Nauki Polskiej, FNP) for an Alexander von Humboldt Polish Honorary Research Scholarship. FS acknowledges Davide Straziota for helpful discussions and financial support from the 191017 BIRD-PRD project of the Department of Physics and Astronomy of Padua University. We acknowledge the support of the German Research Foundation (DFG) and the Open Access Publication Fund of Potsdam University.

Appendix A. Dimensionless units for the FBM-DD model. In dimensional form, the equations governing the evolution of the position x(t) of the diffusing particle in the fractional version of the minimal DD model are the dimensional analogues of equations (1) and (2). Here D(t) is the diffusion coefficient of dimension [D] = cm² s⁻¹, and ξ_H represents fractional Gaussian noise with Hurst index H ∈ (0, 1], whose dimension is [ξ_H] = s^(H−1) and whose correlation function is given in [35]. Choosing the temporal and spatial scales such that τ_c = σ² = 1, the stochastic equations of our minimal FBM-DD model reduce to equations (1) and (2) of the main text.

Appendix B. Using a standard integral involving the complementary error function erfc(z) = 1 − erf(z) = 2π^(−1/2) ∫_z^∞ e^(−t²) dt, we rewrite B₁ and B₂ as equations (B.2) and (B.3); plugging these into (B.1) and after some transformations, we obtain equation (4) of the main text. This result is verified by simulation of the Ornstein-Uhlenbeck process in figure B1. The first- and second-order derivatives of K(τ) with respect to τ follow immediately; K(τ), K′(τ) and K″(τ) are all monotonic, with well-defined limiting values.

Appendix E. FBM-generalisation of the Tyagi-Cherayil model. We now consider the fractional TC model (10). Here ξ_H(t) represents fractional Gaussian noise, η(t) is a white Gaussian noise, and the respective correlation functions are the same as before. Equation (11) can be solved analytically (E.5). Considering the leading term of the Taylor expansion in δ (E.7) and inserting it into (E.3), one obtains the MSD; at long times t satisfying δ ≪ 1 ≪ t, the MSD grows linearly with effective diffusivity D_eff. For both the persistent and anti-persistent cases, a crossover from anomalous to normal diffusion emerges (E.14); the second term on the right-hand side of (E.14) contributes to the discrepancies near H → 0 in figure 2(b) of the main text. We expect the same behaviour of the PDF as for the DD model of reference [7], but with the rules of FBM. In particular, at short times we expect the superstatistical behaviour to hold, and the PDF should be given by the weighted average of a single Gaussian over the stationary diffusivity distribution of the OU process, P(x, t) = ∫ p_Z(Z) G(2Z² t^(2H)) dZ, where G(σ²) = (2πσ²)^(−1/2) exp(−x²/(2σ²)) is the Gaussian distribution, p_Z(Z) is the PDF of the dimensionless OU process, and the integral evaluates to the K₀ form of equation (8), with K₀ the modified Bessel function of the second kind. At longer times the Gaussian limit is reached, P(x, t) = G(Γ(2H + 1) t).
In particular, for H = 1 the PDF is exponential at both short and long times, as expressed by equation (E.16). This can be seen from examination of the kurtosis, i.e. the fourth-order moment of the displacement: for H = 1 the crossover to the Gaussian never emerges, at any time. This is a fundamental distinction from the FBM-DD model. The behaviour of the kurtosis is shown in figure E1. Finally, for the switching model, the correlation kernel (shown in figure B1 in comparison with Langevin simulations) approaches its full equal-time value at short lag times and the square of the mean at long lag times, from which the fourth-order moment of the displacement follows.
5,027.4
2020-05-01T00:00:00.000
[ "Physics" ]
Diagnostics of cycloidal gear speed reducers in a vertical multirotor system. V. Barzdaitis (Kaunas University of Technology), V.V. Barzdaitis (Vytautas Magnus University), K. Kazlauskienė (Kaunas University of Technology), A. Tadžijevas (Klaipėda University).

Introduction. A high-productivity, vertical-axis rotating diffusion machine driven by multiple drives is a complicated system for technical condition monitoring, diagnostics and failure prognosis. The failure diagnostics of a vertical-axis rotating machine driven by ten cycloidal gear reducers is complicated from a practical point of view when vibration monitoring parameters are used [1]. Unlike many other gear power transmissions, the cycloidal gear drive is typically not back-drivable, and the failure of even a single drive may cause failure of the driving involute pinion tooth or, in the worst case, failure of the expensive driven gear. The operation of a cycloidal gear drive is based on the eccentric motion of the cycloidal disc. Each cycloidal gear drive generally includes two subsystems: high and low rotational speed stages. Each stage comprises two cycloidal discs. The two-stage cycloidal gear drive has many antifriction bearings, plus two additional bearings of the output rotor carrying the involute pinion. A drive of this design scheme is well balanced and eliminates vibration caused by rotor unbalance [2-4]. Severe vibration of cycloidal gear drives is mainly caused by failures of the antifriction bearings. In industry, diagnostics of electric motor and drive defects concentrate on periodic monitoring of machine mechanical vibration in situ. This is one of several stages of technical condition assessment of the whole rotating system in general and of each element in particular. Traditional rotating machinery diagnostic methods are described in International Standards (ISO 13373-1:2002, ISO 13373-2:2005, ISO 13379:2003, ISO 2954:2012, ISO 7919 and ISO 10816, etc.). All of these methods are used in practice, but they represent a general approach to fault diagnostics and are not directly applicable to machines of different designs operating under specific conditions. In this work we focus on fault diagnostics of a vertical rotating system running simultaneously with ten cycloidal gear drives with doubled crankshafts and antifriction bearings. The experimental research was based on absolute vibration measurements of each drive in situ at full load of the diffusion machine.

Research object. The general scheme of the diffusion machine with ten cycloidal gear drives is shown in Fig. 1. The cycloidal drive comprises first (CR1) and second (CR2) stages of the same design scheme but of different geometrical size, reflecting the torque of the large output rotor 3v.

Vibration measurement data and results. Absolute vibration of the bearing housings was measured with piezoaccelerometers 2PH, 3PA, 5PA, 5PH and 8PH (sensitivity 100 mV/g, resonance frequency 22 kHz) in two directions, radial (H) and axial (A), with reference to the vertical axis of rotation. Measurement data were analysed with vibration signal analysers (Adash A4300, A4101, CZ). The piezoaccelerometers were attached at 5 local points on the CR1 and CR2 bearing housings, as shown in Fig. 2.
Many measurement points were used in order to identify the key points whose data most effectively reveal the main vibration parameters and increase the accuracy of the technical condition evaluation of CR1 and CR2 in general and of the crankshaft eccentric bearings in particular. Many years of diagnostic practice have indicated that some cycloidal speed reducer elements (cycloidal discs, ring gear pins/rollers, the crankshafts with their two main bearings) are sufficiently reliable in comparison with the crankshaft eccentric bearings. The main emphasis of this research was therefore placed on condition monitoring of the whole drive, and especially of the crankshaft's two eccentric bearings, because failure of these bearings can damage the pinion or gear involute teeth (z_m, z_d), as practical diagnostic data indicate. The vibration velocity spectra of the 2nd bearings of the damaged (1st) and undamaged (6th) drives, measured in the radial direction with the 2PH transducer, are shown in Fig. 4(a).

Conclusions. 1. Failure diagnostics of vertical-axis cycloidal gear speed reducers with antifriction bearings can be successful when a systematic condition monitoring procedure is implemented, with seismic transducers attached to the first-stage (CR1) bearing housings in the radial and axial directions. 2. The vibration velocity amplitude excited by the CR1 rotor doublet crankshaft dominates in all ten drives at 26.2 Hz = 2X, in comparison with the synchronous 1X rotation frequency of the CR1 rotor. 3. The high-frequency (up to 5000 Hz) vibration acceleration root mean square value a_rms is a more informative parameter than the vibration velocity measured up to 1000 Hz according to ISO 10816 norms.

Fig. 5: Radial vibration acceleration spectra of the damaged 1st and undamaged 6th drives, 2PH transducer: a, damaged 1st drive; b, undamaged 6th drive. The vibration acceleration spectra of the 2nd bearings of the damaged and undamaged CR drives, measured in the radial direction with the 2PH transducer, are shown in Fig. 5(a, b). The high-frequency vibration acceleration amplitudes are more informative for identification of a damaged CR drive.
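To make the velocity-versus-acceleration comparison in conclusion 3 concrete, the sketch below computes both band-limited RMS indicators for a synthetic signal containing a 26.2 Hz (2X) line and a weak high-frequency bearing band; the signal, sampling rate and band edges are our assumptions for illustration, not the measured data.

```python
# Illustrative sketch (assumed signal): band-limited RMS of vibration velocity
# (10-1000 Hz) versus wide-band RMS of acceleration (10-5000 Hz).
import numpy as np

fs = 16384                                  # sampling rate, Hz (assumption)
t = np.arange(fs) / fs                      # 1 s record
# Assumed acceleration signal (m/s^2): 2X crankshaft line at 26.2 Hz plus a
# weak 3.2 kHz bearing-fault band.
acc = 4.0 * np.sin(2 * np.pi * 26.2 * t) + 0.8 * np.sin(2 * np.pi * 3200 * t)

def band_rms(x: np.ndarray, fs: int, f_lo: float, f_hi: float) -> float:
    """RMS of x restricted to [f_lo, f_hi] via FFT masking."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return float(np.sqrt(np.mean(np.fft.irfft(X, len(x))**2)))

# Velocity = acceleration integrated once in the frequency domain:
# V(f) = A(f) / (i 2 pi f).
A = np.fft.rfft(acc)
f = np.fft.rfftfreq(len(acc), 1.0 / fs)
V = np.zeros_like(A)
V[1:] = A[1:] / (2j * np.pi * f[1:])
vel = np.fft.irfft(V, len(acc))

print("velocity RMS, 10-1000 Hz (m/s):", band_rms(vel, fs, 10, 1000))
print("acceleration RMS, 10-5000 Hz (m/s^2):", band_rms(acc, fs, 10, 5000))
# The velocity indicator is dominated by the low-frequency 2X line, while the
# acceleration indicator also responds to the 3.2 kHz bearing band.
```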
1,096.2
2017-03-05T00:00:00.000
[ "Engineering" ]
Confirmation of the existence of the X17 particle

In a 2016 paper, an anomaly in the internal pair creation on the M1 transition depopulating the 18.15 MeV isoscalar 1⁺ state of ⁸Be was observed. This could be explained by the creation and subsequent decay of a new boson with mass m_X c² = 16.70 MeV. Further experiments on the same transition with an improved and independent setup constrained the mass of the X17 boson and its branching ratio relative to the γ-decay of the ⁸Be excited state (B_X) to m_X c² = 17.01(16) MeV and B_X = 6(1) × 10⁻⁶, respectively. Using the latter setup, the e⁺e⁻ pairs depopulating the 21 MeV Jπ = 0⁻ → 0⁺ transition in ⁴He were investigated, and a resonance in the angular correlation of the pairs was observed, which could be explained by the same X17 particle, with mass m_X c² = 16.98 ± 0.16(stat) ± 0.20(syst) MeV.

Introduction

A recent measurement of the angular correlation of e⁺e⁻ pairs from the 18.15 MeV Jπ = 1⁺ → 0⁺ M1 transition of ⁸Be revealed an anomalous peak-like enhancement relative to the internal pair creation (IPC) at large e⁺e⁻ separation angles [1]. This was interpreted as the creation and subsequent decay of a new boson with a mass of m_X c² = 16.70 ± 0.35(stat) ± 0.5(syst) MeV. Later experiments on the same transition observed the same particle, with mass m_X c² = 17.01 ± 0.16(stat) ± 0.20(syst) MeV [2]. The possibility that the anomaly could be explained without a new particle, within nuclear physics, either with an improved model of the reaction or by introducing a nuclear transition form factor, was explored by Zhang and Miller [3]. They were unable to explain the anomaly with the former approach, and obtained unrealistic form factors with the latter. The statistical significance of the beryllium anomaly, its possible relation to the dark matter problem, and the fact that it might explain the (g−2)_µ puzzle [4,5] sparked interest in the theoretical and experimental particle and hadron physics communities. Some of the recent possible explanations for the anomaly are discussed next.

Feng et al. [4,6] further expanded on the idea of the new boson, analysing it as a protophobic vector gauge boson mediating a fifth force, with weak coupling to Standard Model (SM) particles. This model explains the data obtained from the beryllium anomaly and why in certain other experiments no contribution from the X17 was observed. The protophobic nature of the X17 arises mostly from searches for the π⁰ → Z + γ decay in the NA48/2 experiment [7]. The X17 was not observed in this experiment, which requires the coupling of the X17 particle to the up and down quarks to be protophobic. This means that the charges ε_u and ε_d of the up and down quarks, written as multiples of the positron charge e, satisfy the relation |2ε_u + ε_d| ≲ 10⁻³ [4,6]. Many studies of such protophobic models were subsequently performed, including an extended two-Higgs-doublet model by Delle Rose and co-workers [8]. Delle Rose et al. [9] described the anomaly with a light Z′ bosonic state, arising from U(1)′ symmetry breaking, with significant axial couplings so as to evade low-scale experimental constraints. They also showed how both spin-0 and spin-1 solutions are possible, and described Beyond the Standard Model (BSM) frameworks that can accommodate these, with an enlarged Higgs sector, an enlarged gauge sector, or both.
Ellwanger and Moretti [10] offered yet another explanation for the anomaly, using a light pseudoscalar particle. The X17 could be a Jπ = 0⁻ pseudoscalar particle, given the quantum numbers of the excited states and ground state of ⁸Be. In that case, they predicted that the branching ratio for the 17.6 MeV transition should be about ten times smaller than for the 18.15 MeV one, which agrees with the experimental results. In a recent experiment, the existence of the X17 boson was also observed in the 21 MeV transition of ⁴He, which is also reported in this note. This reinforces the idea of new physics, by excluding the possibility of interference from decay channels of nearby energy levels. This is an important result, since a previous observation by Boer et al. [11] of a possible light boson candidate, seen as deviations from the expected IPC spectrum obtained from the decay of a 17.6 MeV excited state in ⁸Be, could be explained without new physics by considering some mixing of E1 transitions from nearby energy levels into the explored M1 transition (specifically, an M1 + 23% E1 mixed transition could explain Boer's results) [1]. Although the beryllium anomaly described by Krasznahorkay et al. [1] is significantly different from Boer's (the latter being an excess instead of a bump), the false alarm left the particle physics community sceptical of new-particle interpretations of similar experiments.

Figure 1 caption (fragment): The new setup (b) consisted of 6 telescopes, and the MWPCs were replaced by DSSDs, which can be used for particle identification, removing the need for the thin scintillators.

Experiments

The ⁷Li(p,γ)⁸Be reaction was used to populate the 17.6 MeV and 18.15 MeV ⁸Be states, with proton energies of E_p = 441 keV and E_p = 1030 keV. The experiment was performed at the 2 MV Tandetron accelerator at MTA Atomki. A proton beam with a current of 1.0 µA impinged on a 15 µg/cm² LiF target for the 441 keV resonance, and on a 300 µg/cm² thick LiF target evaporated onto 20 µg/cm² carbon foils for the 1030 keV resonance. Given that the energy losses in the targets were 9 keV and 70 keV, respectively, the actual proton bombarding energies were set to 450 keV and 1100 keV [2]. In contrast to the previous experiment [1], a much thinner carbon backing was used, the number of telescopes was increased from 5 to 6, and the MWPC detectors were replaced by double-sided silicon strip detectors (DSSDs) with a larger effective area. These improvements, particularly the change in the number and angles of the telescopes, changed the efficiency for e⁺e⁻ pair detection. The improved setup consisted of 6 telescopes in a plane perpendicular to the beam direction, each at 60° to its neighbours. Each telescope contains a plastic scintillator with dimensions of 82 × 86 × 80 mm³ and a 50 × 50 mm² DSSD with 16 strips in each direction. The target was placed in a carbon fibre vacuum chamber, with 1 mm thick walls, at the centre of the detection system. To monitor γ-rays produced in the decay of the 18.15 MeV state, a high-purity (HP) germanium detector of 100% relative efficiency was placed 25 cm from the target. The ³H(p,γ)⁴He reaction was used to populate the broad second excited state in ⁴He (E_x = 21.1 MeV, Γ = 0.84 MeV, Jπ = 0⁻), with a proton energy of E_p = 0.900 MeV, which is below the 1.018 MeV threshold of the (p,n) reaction. The first excited state in ⁴He (E_x = 20.21 MeV, Γ = 0.50 MeV, Jπ = 0⁺) overlaps with the second, and it de-excites via an E0 transition.
For the ³H(p,γ)⁴He reaction, the target was a tritiated titanium disk, 3.0 mg/cm² thick, evaporated onto a 0.4 mm thick Mo disk. The concentration of tritium atoms was 2.66 × 10²⁰ atoms/cm². To avoid evaporation of the tritium, the target was kept at liquid-nitrogen temperature. For all experiments, the energy calibration was obtained from the 6.05 MeV IPC E0 transition of the ¹⁹F(p,αe⁺e⁻)¹⁶O reaction. Any non-linearity effects, due to signal amplification or otherwise, would be seen in the 17.6 MeV transition of the ⁷Li(p,γ)⁸Be reaction. The angular efficiency of the setup was determined by sampling neighbouring events from the same dataset, guaranteeing no correlation between them. The efficiency is then used to provide a setup-independent result. Reference [12] describes the previous setup, with 5 telescopes (Fig. 1(a)), and a setup similar to the one used in the current experiments (Fig. 1(b)). The pair-detection efficiencies of the two setup geometries differ significantly; hence the results obtained with the new one can be considered an independent measurement.

Experimental results: ⁸Be experiment

In the ⁸Be experiments, both the 18.15 MeV and the 17.6 MeV transitions were observed. While no signal enhancement was observed for the 17.6 MeV transition in either experiment, it was used to check for non-linearity effects in the energy calibration. Figure 2 shows the resulting sum-energy and angular correlation spectra for the improved experimental setup. In agreement with the previous experiment [1], the M1 transition follows the theoretical predictions, without a contribution of the X17 to the 17.6 MeV transition. Figure 3 shows the results for the 18.15 MeV ⁸Be transition: the current results [2] are shown as red dots with error bars, and the previous results [1] in blue. There is good agreement between the two experiments.

Function fitting

The e⁺e⁻ background angular distribution is modelled by an exponentially decreasing distribution, and the boson is modelled after simulations of a boson decaying into e⁺e⁻ pairs. The fit was performed using RooFit [13], with a distribution function of the form

PDF = N_bkgd × PDF_bkgd + N_sig × PDF_sig,   (1)

where N_bkgd and N_sig are the numbers of background and signal events, respectively. To model the signal, a two-dimensional distribution was constructed, with mass and opening-angle dependencies. The mass dependency was obtained from linear interpolation of the e⁺e⁻ angular distributions simulated for discrete particle masses. With the PDF described in equation (1), fits were performed to determine N_bkgd and N_sig by fixing the mass in the signal PDF, and the best-fitted values were taken from this method. To obtain the mass precisely, a fit was made with the mass as a free parameter. From the results, the branching ratio relative to the γ-decay was calculated for the best fit. The results published in [1] are m_X c² = 16.70(51) MeV and B_X = 5.8 × 10⁻⁶, with 6.8σ significance. The same data fitted with the method above yield m_X c² = 16.86(6) MeV and B_X = 6.8(10) × 10⁻⁶, with 7.37σ significance. The new experiment [2] resulted in a mass of m_X c² = 17.17(7) MeV and a relative branching ratio of 4.7(21) × 10⁻⁶, with 4.90σ significance. The differences between the X17 masses obtained from the individual datasets are larger than the statistical errors; a toy version of such a composite fit is sketched below.
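The following toy sketch mimics the structure of the composite fit in equation (1) with assumed functional forms (an exponential background and a Gaussian-like signal template) and synthetic counts; it is not the RooFit model used by the authors.

```python
# Toy sketch of a composite fit, PDF = N_bkgd*PDF_bkgd + N_sig*PDF_sig,
# on binned pseudo-data. All shapes and numbers are assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
theta = np.linspace(40, 170, 27)            # opening-angle bin centres, deg

def model(th, n_bkgd, n_sig, slope=0.04, peak=140.0, width=6.0):
    bkgd = np.exp(-slope * th)
    bkgd /= bkgd.sum()                      # unit-normalised background template
    sig = np.exp(-0.5 * ((th - peak) / width)**2)
    sig /= sig.sum()                        # unit-normalised signal template
    return n_bkgd * bkgd + n_sig * sig

# Pseudo-data: mostly background plus a small bump near 140 deg.
truth = model(theta, 5.0e5, 2000.0)
counts = rng.poisson(truth).astype(float)

popt, pcov = curve_fit(model, theta, counts, p0=[4.0e5, 500.0],
                       sigma=np.sqrt(counts + 1.0))
n_sig, err = popt[1], np.sqrt(pcov[1, 1])
print(f"fitted N_sig = {n_sig:.0f} +/- {err:.0f}")  # significance ~ n_sig/err
```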
These differences can be due to the uncertainty of the beam position on the target, or to some misalignment of the detectors, which affects the determination of the hit positions relative to the target and therefore skews the angular correlation between the e⁺e⁻ pairs. By averaging the results of the ⁸Be experiments, the mass and relative branching ratio were determined to be m_X c² = 17.01(16) MeV and B_X = 6(1) × 10⁻⁶.

Experimental results: ⁴He experiment

The expected angular correlation of e⁺e⁻ pairs from the X17 boson in the decay of the 21.0 MeV ⁴He state is at around 110°, instead of the 140° observed in the ⁸Be experiment, owing to the higher energy of the ⁴He transition. This higher energy results in a larger kinetic boost for the X17, which yields smaller opening angles between its decay products. Since the expected angular correlation of the e⁺e⁻ pairs from the boson is peaked around 110°, the energy-sum spectra were also taken for pairs 60° and 120° apart. While the telescopes at 120° should contain some enhancement from the decay of the boson, the telescopes at 60° provide a background, which can then be used to determine a signal region for the transition. As seen in Fig. 4, when taking the difference of these energy-sum spectra, it becomes clear that the signal region is 19.5 MeV ≤ E_total ≤ 22.0 MeV.

Figure 4 caption (fragment): Top: summed energy spectra for different telescope pair angles; spectra for telescopes 120° apart are shown in red, and in black for telescopes 60° apart, which are used as a background measurement. Bottom: measured energy sum of e⁺e⁻ pairs originating from the decay of the 21 MeV ⁴He state; the background coming from the target was subtracted, but not the constant background caused by cosmic rays.

Figure 5 shows the angular correlation results for the aforementioned ⁴He transition. The e⁺e⁻ pairs were gated by the energy sum in the signal region of the transition (between 19.5 MeV and 22.0 MeV) and by an asymmetry parameter, defined in Ref. [1], such that |y| ≤ 0.5. The peak appears at 115°, which is consistent with the X17 interpretation, with a mass of m_X c² = 16.98 ± 0.16(stat) ± 0.20(syst) MeV.

Future experiments

In the coming years, several independent particle physics experiments will probe the same parameter space as the X17 boson. Their results will be fundamental in determining whether such a particle exists. Some of these experiments are briefly discussed below; additional discussion can be found in Ref. [6]. The NA64 experiment at CERN searched, with a 100 GeV/c e⁻ beam, for a hypothetical boson with mass m_X c² = 16.7 MeV, near the proposed mass of the X17; it covers most, but not all, of the allowed ε parameter space for protophobic bosons [14]. The DarkLight experiment, which will search for dark photons in the 10 MeV/c² to 100 MeV/c² mass range, is projected to cover most of the allowed ε parameter space for protophobic bosons [15]. The experiment aims to produce dark photons by scattering e⁻ off a hydrogen gas target; a proof-of-principle measurement is currently being performed [16]. The MESA experiment, similarly to DarkLight, will search for dark photons via electron scattering off hydrogen gas; the explored mass range of MESA will be between 10 and 40 MeV/c² [17]. The BESIII experiment currently holds the largest dataset of J/ψ events (around 10¹⁰ events).
Jiang, Yang and Qiao [18] proposed that an analysis of the current dataset for new gauge bosons would be possible, expecting around 10³ scalar, Z′-like bosons under specific conditions. The ForwArd Search ExpeRiment (FASER) [19] at the LHC is set to search for light, weakly interacting particles, such as axion-like particles [20-23], with a detector placed in the forward region of ATLAS. Searches for light gauge bosons have been proposed in e⁺e⁻ collision experiments or e⁺ beam-dump experiments, namely the aforementioned BESIII experiment [18], the BaBar experiment [24], the PADME experiment [25], and the KLOE-2 experiment [26]. The PADME experiment runs until the end of 2019, and will be moved to Cornell and/or JLAB to obtain higher-intensity positron beams [27-30]. Experiments exploring other high-energy nuclear transitions would also shed light on the anomaly. Experiments performed in the 1970s explored such high-energy transitions [32,33], but without the production cross sections and branching ratios required to observe deviations in the IPC.

Conclusions

The anomalous angular correlation observed in the original experiment was reproduced using the new independent setup on the same 18.15 MeV transition from the ⁷Li(p,γ)⁸Be reaction. A signal was also observed on the 21.0 MeV transition of ⁴He. The ⁴He signal can be explained by the same new X17 particle, with mass m_X c² = 16.98 ± 0.16(stat) ± 0.20(syst) MeV, which agrees with the mass range obtained from the ⁸Be experiments (m_X c² = 17.01 ± 0.16 MeV). The observation of a similar anomalous internal pair creation on the 21 MeV transition of ⁴He is strong evidence for new physics, since it excludes the possibility of interference from other decay channels of excited states near 18.15 MeV, present in the ⁸Be case. Many experiments in the coming years will look directly at the possibility of a new gauge boson, or indirectly probe the same parameter space as the X17; these will likely determine the existence of such a particle and constrain its properties. The beryllium anomaly observed in 2016 shows that nuclear physics can serve as a relatively cheap laboratory for particle physics, and for the many unsolved problems of physics that may be partially or fully explained by the existence of weakly interacting light particles.
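As an illustrative footnote to the kinematics discussed above (the 140° angular-correlation peak in ⁸Be versus the ∼115° peak in ⁴He), the sketch below evaluates the standard two-body invariant-mass relation for an ultrarelativistic e⁺e⁻ pair; the symmetric energy sharing is an assumption made for simplicity.

```python
# Illustrative kinematics sketch: invariant mass of an e+e- pair,
# m c^2 = sqrt(2 E+ E- (1 - cos(theta))), neglecting the electron mass.
import numpy as np

def pair_mass_mev(e_plus: float, e_minus: float, opening_deg: float) -> float:
    """Invariant mass (MeV) of an ultrarelativistic e+e- pair."""
    return float(np.sqrt(2.0 * e_plus * e_minus
                         * (1.0 - np.cos(np.radians(opening_deg)))))

# 8Be: 18.15 MeV shared symmetrically, correlation peak near 140 deg.
print(pair_mass_mev(18.15 / 2, 18.15 / 2, 140.0))  # ~17.0 MeV
# 4He: 21 MeV shared symmetrically, peak near 115 deg gives a similar mass,
# illustrating why the higher-energy transition peaks at a smaller angle.
print(pair_mass_mev(21.0 / 2, 21.0 / 2, 115.0))    # ~17.7 MeV
```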
4,001.2
2020-04-09T00:00:00.000
[ "Physics" ]
Explosive dynamics of the double tearing mode in tokamaks

Using the CLT code, the resistivity dependence of the reconnection rate during the explosive phase is investigated quantitatively for various separations of the two rational surfaces of the m/n = 3/1 double tearing mode. Our study focuses on the explosive reconnection process in which the exchange of island positions takes place and no secondary island forms. The negative dependence of the explosive reconnection rate on resistivity at low resistivity, and a systematic study of the effect of the separation on the resistivity dependence at high resistivity, are presented for the first time. The negative dependence is qualitatively different from the results of related studies, where the rate usually exhibits a positive dependence on the resistivity or is independent of it. The negative dependence arises in two regions, one with low resistivity and one with high resistivity and large separation, for different reasons: in the first case it is caused by the thickness of the current sheet, and in the second by the separation.

Introduction

Scenarios with a reversed-shear (RS) profile of the safety factor (q) have been an attractive operating regime for obtaining steady-state high performance in tokamaks [1]. As a frequently observed phenomenon in RS profiles, the double tearing mode (DTM) is a vital issue that must be resolved before steady-state operation [1]. A severe DTM instability can degrade plasma confinement or result in disruption [2]. With the magnetic and kinetic energies growing abruptly in the explosive growth of the DTM, a nonlinear destabilization is suddenly initiated. This destabilization, closely related to the strong coupling between the two tearing modes, reflects the fast reconnection processes of the explosive phase. Usually, the nonlinear development of the DTM consists of four distinguishable phases: linear growth, transition, explosive growth, and decay [3-10].
It is found that the resistivity (η) dependence of the growth rate in the linear growth phase is γ ∼ η^(1/3) when the separation of the two rational surfaces is small, while the scaling law is γ ∼ η^(3/5) when the separation is large [11,12]. It is well known that the DTM can cause much faster reconnection during the explosive growth phase, with a much weaker resistivity dependence than in the linear growth phase. For this much weaker η dependence of the reconnection rate, previous studies report different dependencies on the plasma resistivity, with scaling laws such as η^(1/5) [3,13] or nearly no resistivity dependence [6-10]. Some studies relate the fast reconnection to a secondary instability that makes the magnetic reconnection faster and weakens the dependence on the resistivity during the explosive phase. Ishii et al [8-10] argued that the triangular deformation of the magnetic flux around the X-point and the resultant point-like current act as the secondary instability triggering the abrupt growth. Janvier et al [14,15] proposed a secondary modulation-type instability as a structure-driven instability. Ali et al [16] proposed a secondary instability resulting from the quasilinear modification of the current profile, and studied numerically the growth rate of explosive reconnection of single tearing modes. Del Sarto et al [17,18] provided analytical models for the explosive reconnection of primary single tearing modes in terms of secondary tearing-type instabilities, thereby deducing a reduced dependence of the growth rate on resistivity and on electron inertia. Other studies suggested that the secondary instability is an instability of current sheets that forms secondary islands [19-25], with the result that the reconnection rate can be nearly independent of η [20-24, 26-29] or even negatively dependent on it [5,19,25,30]. In some simulations, the reconnection rate during the explosive phase is measured as the inverse of the growth time of the secondary island from zero to some width [20,21,24,26,30]. These simulation results on the η dependence were obtained under different conditions and geometries, and there has been no systematic study of the effect of the separation. Furthermore, in previous simulations the negative dependence in the explosive process is closely related to the formation of secondary islands [5,19,25,30]. Del Sarto and Ottaviani [18] obtained a negative dependence on resistivity for secondary collisionless tearing modes growing on primary resistive internal-kink modes. For abrupt reconnection processes in which no secondary island is generated, no negative dependence of the explosive reconnection rate on η has been reported; it is therefore worthwhile to investigate its possibility and the underlying physics in detail.
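For reference, extracting such scaling exponents from a resistivity scan amounts to a linear regression in log-log space; the sketch below uses synthetic (η, γ) pairs, i.e. assumed numbers rather than simulation output.

```python
# A minimal sketch: fit the exponent a in gamma ~ eta^a from (eta, gamma)
# pairs by linear regression of log(gamma) on log(eta). Synthetic data only.
import numpy as np

rng = np.random.default_rng(4)
eta = np.logspace(-7, -5, 8)                 # assumed resistivity scan
# Synthetic growth rates following gamma ~ eta^0.6 with 5% multiplicative noise.
gamma = 0.3 * eta**0.6 * (1 + 0.05 * rng.standard_normal(eta.size))

slope, intercept = np.polyfit(np.log(eta), np.log(gamma), 1)
print(f"fitted exponent a = {slope:.2f}")    # ~0.6, the large-separation scaling
```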
In this paper, we focus on the effect of the separation on the resistivity dependence of the reconnection rate during the explosive phase, using the compressible magnetohydrodynamic simulation code in three-dimensional toroidal geometry (CLT). A general rule for predicting the resistivity dependence of the maximum reconnection rate in the explosive phase is summarized. These results bear on speeding up or slowing down the reconnection process during the explosive growth phase, and can thus provide guidance for experimentally controlling and altering parameters governing the resistivity η and the separation ∆r of the two resonant surfaces, in order to better understand the conditions under which destabilization and stabilization can be realized.

Theoretical method

This study investigates magnetic reconnection in the explosive growth phase of the m/n = 3/1 DTM using the CLT code. The set of equations used in the simulation is given in [31-33]. In these equations, space, time, velocity, electric field, current density, magnetic field, plasma pressure and plasma density are normalized by the minor radius a, the Alfvén time t_A = a/v_A, the Alfvén speed v_A = B₀₀/√(µ₀ρ₀₀), and the corresponding combinations of the on-axis magnetic field B₀₀ and the initial plasma density ρ₀₀ (e.g. ρ/ρ₀₀ → ρ). The resistivity η, the diffusion coefficient D, the perpendicular and parallel thermal conductivities κ⊥ and κ∥, and the viscosity µ are normalized accordingly; κ∥ = 5 × 10⁻² and µ = 5 × 10⁻⁷ were chosen. Without considering instabilities driven by the pressure gradient, the initial plasma pressure is treated as a constant and the initial pressure gradient is zero. Figure 1 displays the initial safety-factor (q) profiles for the m/n = 3/1 DTM in the RS configuration for different separations ∆r. The q-profiles follow the functional form of [7,34,35], with parameters q_c = 0.495, λ = 1.0, r₀ = 0.36, δ = 0.29 and A = 5; the distance between the two rational surfaces changes with the parameter α. In the present simulations the aspect ratio is chosen as R/a = 4. A uniform mesh with 256 × 32 × 256 (R, φ, Z) grid points is used, and a convergence study was carried out.
Simulation results

At first, the mode development reflecting the reconnection dynamics is studied. Figure 2 presents the evolution of the total kinetic energy (E_k) with µ = 5 × 10^{−7}, η = 1 × 10^{−6} and ∆r = 0.295. The evolution of E_k passes through four different stages: the linear growth, transition, explosive growth, and decay phases (marked I, II, III and IV, from left to right, respectively), as found in simulations with slab geometry [3-6, 34, 36, 37] and in experiments in the TFTR tokamak [1]. During the linear growth stage, the magnetic islands grow on the two rational surfaces separately, as for the conventional tearing mode, resulting in a constant growth rate. During the transition stage, the independently growing inner and outer islands enter their nonlinear phase, and the magnetic flux from the inner/outer islands piles up in the vicinity of the reconnection region of the outer/inner islands. The explosive growth stage results from the rapid release of the previously piled-up magnetic flux through the X point [13]. The decay stage follows the interchange of the islands. Figure 3 shows Poincaré plots of the magnetic field at different times, as indicated by the vertical dotted lines in figure 2. Magnetic islands form and grow on the two rational surfaces of q = 3 (figure 3(a)). As the islands grow, the inner islands are gradually squeezed outward and pushed towards the X points of the outer islands (figures 3(b) and (c)). The fully grown island can bring its closed field lines to the other rational surface, in a way similar to an external driving force leading to explosive growth in magnetic reconnection [13]. During the explosive growth stage, the inner island shrinks in the poloidal direction, pushing the entire inner island outward, while the outer island is pushed further inward; this finally results in an exchange of the radial positions of the magnetic islands on the outer and inner rational surfaces, as shown in figures 3(d)-(h). The position exchange of magnetic islands originally situated on the inner and outer rational surfaces is often observed in simulations [3-8, 26, 38, 39], and a systematic study of the dependence of the position change on the resistivity and viscosity has been presented previously [40]. Our simulation results indicate that the position exchange is common in the explosive growth phase when η ⩽ 1 × 10^{−5} and µ ⩽ 1 × 10^{−5} [40]. The present study mainly focuses on the explosive dynamics of magnetic reconnection in which this radial position change of the islands takes place. The resistivity dependence of the reconnection rate is studied with the viscosity fixed at the low value µ = 5 × 10^{−7}, in order to weaken the influence of the viscosity. Figure 4 shows the time evolution of E_k for different resistivities (η = 6 × 10^{−7} to 1 × 10^{−5}) with ∆r = 0.295. The scaling law for the resistivity dependence of the linear growth rate is γ ∼ η^{0.6} in our simulations (not shown), which agrees well with the results for a single tearing instability obtained theoretically and numerically [7, 11, 41, 42]. This scaling demonstrates the accuracy of CLT in our study and also indicates that, during the linear growth stage, islands with a large separation between the two rational surfaces grow around each rational surface almost independently, as presented in figure 3(a).
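A minimal sketch of how such a scaling exponent can be extracted is given below; the data are illustrative placeholders, not simulation output. Since E_k ∝ e^{2γt} during the linear phase, the growth rate is half the slope of ln E_k versus t, and the exponent p in γ ∼ η^p is the slope of a log-log fit.

```python
import numpy as np

def linear_growth_rate(t, e_k):
    """gamma = 0.5 * d(ln E_k)/dt over the (assumed) linear-phase window."""
    return 0.5 * np.polyfit(t, np.log(e_k), 1)[0]

def scaling_exponent(etas, gammas):
    """Fit gamma ~ eta**p: slope of log(gamma) versus log(eta)."""
    p, _ = np.polyfit(np.log(etas), np.log(gammas), 1)
    return p

# Illustrative numbers only: synthetic data obeying gamma = c * eta**0.6
etas = np.array([6e-7, 1e-6, 3e-6, 6e-6, 1e-5])
gammas = 0.05 * etas ** 0.6
print(scaling_exponent(etas, gammas))   # -> 0.6
```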
To quantify the reconnection process during the explosive stage, the resistivity dependence of the maximum reconnection rate γ_max with µ = 5 × 10^{−7} and ∆r = 0.295 is shown as the dark yellow line and symbols in figure 5. The behavior of the maximum reconnection rate during the explosive stage is nonmonotonic in η.

The impact of the separation between the two rational surfaces on the η dependence of the explosive reconnection rate is studied next. Figure 5 shows γ_max versus η for different separations. The maximum reconnection rate increases with increasing separation, because the strong driving force associated with the piled-up flux enhances the reconnection process [6]. The nonmonotonic behavior of γ_max in η has different characteristics for different separations. In general, the η dependence of γ_max is of three kinds: an increase with increasing η (positive dependence), no change with increasing η (independence), and a decrease with increasing η (negative dependence). In the present investigation, the abrupt growth in the early nonlinear phase is caused by a sudden release of the piled-up magnetic flux through the initial X-point, resulting in the position exchange of the magnetic islands. The inner/outer islands expand towards the X points of the outer/inner islands due to the coupling of the inner and outer islands in the nonlinear phase, which causes the initial X-type reconnection region to stretch into a Y-type region. Thus, a current sheet forms in the reconnection region around the primary X point, and the reconnection takes place in this current sheet. Magnetic flux accumulates at the inflow region of the current sheet when the flux released by magnetic reconnection is less than that driven in by the external flow, and this piled-up flux can enhance the growth of the current sheet. Since the pile-up of magnetic flux at the inflow region of the current sheet is allowed only by nonlinear effects, this flux pile-up stage is associated with a slowing down of the primary reconnection rate close to saturation; the nonlinear growth of the current sheet can thus be identified with stage II of figure 2. γ_max is determined by the final piled-up flux when the current sheet thickness reaches its minimum (just before the abrupt release), which depends on the pile-up speed of the external flux and the reconnection speed of the internal flux. The change of the reconnection rate is therefore due to a nonlinear change of the primary reconnection region. In general, the reconnection rate has a weaker dependence on the resistivity than in the classical Sweet-Parker model [13].
The negative dependence at low resistivity (η ⩽ 3 × 10^{−6}) has not been reported before, probably because this low-resistivity regime was not reached in previous simulations. The current sheet is formed by the external driving force resulting from the growing inner island and is time-dependent. The current sheet is thinned by the squeezing of the external pile-up flux: the amplitude of the current sheet at the separatrix grows with time while its width shrinks [42]. The amplitude reaches its maximum and the width its minimum at the time of the maximum reconnection rate, at which the position exchange of the magnetic islands occurs, as shown in figures 6 and 7. The corresponding current density profiles (Z = 0) for different resistivities and separations are presented in figures 8(a) and (b), and the current sheet thickness τ is displayed in figure 8(c). Combining figures 5-8, three features can be identified in the low-resistivity regime. Firstly, the lower the resistivity, the thinner the current sheet. Secondly, for a larger separation, a thinner current sheet is reached when the reconnection rate peaks. Finally, the η dependence of γ_max varies little with the separation and scales as η^{−0.2±0.02} (dashed lines in figure 5), indicating that the contribution of the separation to the pile-up flux is uniform, i.e. the separation does not alter the η dependence appreciably. Thus, the negative dependence in this resistivity region can be attributed to the important role of the current sheet thickness. As noted above, the nonlinear growth of the current sheet can be identified with stage II of figure 2, since the piled-up magnetic flux is allowed only by nonlinear effects. It can be seen from figure 4 that, for low resistivity, the duration of the transition phase is longer and thus the final thickness of the current sheet is smaller. That is, for a lower resistivity, although the squeezing speed of the external flux outside the current sheet is slow, the reconnection speed of the internal flux inside the current sheet is also slow and the current sheet is thinner, which may result in more pile-up flux than for a higher resistivity.
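As an aside on diagnostics, one simple way to quantify a sheet thickness τ from a midplane profile J(Z = 0) is the full width at half maximum of the current peak above background. The paper does not state its exact definition of τ, so the sketch below is one plausible measure, not the paper's method.

```python
import numpy as np

def sheet_thickness(x, j, background=0.0):
    """FWHM of the current-density peak: one possible definition of tau."""
    half = background + 0.5 * (j.max() - background)
    above = np.where(j >= half)[0]          # indices inside the half-max region
    return x[above[-1]] - x[above[0]]

# Illustrative Gaussian current sheet of standard deviation 0.02
x = np.linspace(-0.2, 0.2, 2001)
j = np.exp(-(x / 0.02) ** 2 / 2)
print(sheet_thickness(x, j))   # ~0.047 (= 2.355 * sigma)
```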
At high resistivity (η ⩾ 6 × 10^{−6}), three kinds of η dependence are obtained at different separations. Our systematic scan over the separation of the two rational surfaces shows, for the first time, three types of η dependence of the maximum reconnection rate: positive dependence for a smaller separation (∆r ⩽ 0.261), no dependence for a moderate separation (∆r = 0.273), and negative dependence for a larger separation (∆r ⩾ 0.277). For a large separation of the two rational surfaces, larger magnetic islands are required for the system to enter an explosive phase. Although the squeezing speed of the external flux is then slow, the strong flux drive on the reconnection region resulting from the larger islands may produce more pile-up flux, causing very fast reconnection [19], which can weaken the dependence on the resistivity. This explains why the positive dependence is weakened as the separation increases from ∆r = 0.249 to ∆r = 0.261. At ∆r = 0.273 the maximum reconnection rate becomes independent of η, and it even acquires a negative dependence for ∆r ⩾ 0.277. The effects of the resistivity and of the separation on the pile-up flux coexist and compete. The resistivity plays a major role, since the reconnection rate depends on it directly. The effect of the separation is evidently non-uniform, and is gradually enhanced as the separation increases, resulting in different η dependences for different separations; that is, the η dependence is gradually weakened until it becomes independence or even negative dependence.

The independence and negative dependence found in previous studies are due to a secondary instability: the instability of current sheets forming secondary islands [19-25], a structure-driven nonlinear instability [14, 15], or the triangular deformation of the magnetic flux around the X-point with the resultant current-point formation [8-10]. Del Sarto and Ottaviani [18] obtained a negative dependence on resistivity for secondary collisionless tearing modes growing on primary resistive internal-kink modes. These mechanisms are of a different nature from our present observations. The explosive growth we measure does not correspond to the generation of secondary islands (as shown in figure 3), implying that it is not related to the tearing-type modes observed in [18-25]. Although the evolution of the structure, deformed into a triangular shape, appears analogous to that observed by Janvier et al [14, 15] in slab geometry and by Ishii et al [8-10], the topological structure of the magnetic flux is markedly different. The current point proposed by Ishii et al [8-10] is not realized in our simulation, and the structure-driven instability proposed by Janvier et al [14, 15], considered responsible for the explosive dynamics there, did not result in an interchange of the magnetic islands. In the present investigation, the abrupt growth in the early nonlinear phase is caused by a sudden release of the piled-up magnetic flux through the initial X-point, resulting in the radial position change of the magnetic islands. That is, the change of the reconnection rate is due to a nonlinear change of the primary reconnection region, rather than to a secondary tearing instability of a different nature.
The moderate-resistivity range (3 × 10^{−6} < η < 6 × 10^{−6}) is a transition region. The dependence of the maximum reconnection rate on η shows a transition between the three types of η dependence as the separation changes; the η dependences here are complex and cannot be classified separately.

Based on the above analysis, the physical mechanisms behind the three scalings can be summarized. In the low-resistivity regime (η ⩽ 3 × 10^{−6}), the negative dependence appears because the thickness of the current sheet plays the dominant role. As the resistivity increases, the resistivity dependence becomes positive or independent for a smaller separation (∆r ⩽ 0.273), while for a larger separation (∆r ⩾ 0.277) and a higher resistivity (η ⩾ 6 × 10^{−6}) it becomes negative. This is due to the enhanced and non-uniform role of the separation. Because the effects of the resistivity and the separation on the pile-up flux coexist and compete, there are specific combinations of resistivity and separation for which the final accumulation of magnetic flux is faster and more abundant, resulting in a higher γ_max, and others for which it is slower and smaller, resulting in a lower γ_max.

Conclusion

To conclude, using the CLT code, the impact of the separation between two rational surfaces on the resistivity dependence of the reconnection rate during the explosive stage is investigated quantitatively. Our study focuses on the explosive dynamics of magnetic reconnection in which the position exchange of the magnetic islands takes place and no secondary island forms. The negative dependence of the explosive reconnection rate on η in the low-resistivity regime (η ⩽ 3 × 10^{−6}) and a systematic study of the effect of the separation on the η dependence in the high-resistivity regime (η ⩾ 6 × 10^{−6}) are presented for the first time.

The negative dependence on η at small resistivity (η ⩽ 3 × 10^{−6}) is due to the important role of the current sheet thickness. The η dependence of γ_max varies little with the separation and scales as η^{−0.2±0.02}, indicating a uniform contribution of the separation to the pile-up flux, i.e. the separation does not alter the η dependence appreciably.

For the high-resistivity regime (η = 6 × 10^{−6} to 1 × 10^{−5}), the effect of the separation is pronounced. The η dependence of the maximum reconnection rate differs markedly with increasing separation: positive dependence for a smaller separation (∆r ⩽ 0.261), no dependence for a moderate separation (∆r = 0.273), and negative dependence for a larger separation (∆r ⩾ 0.277). With increasing separation between the two rational surfaces, larger magnetic islands are reached before the system enters the explosive phase; more reconnection flux therefore piles up on the rational surfaces, leading to faster reconnection in the explosive phase, which may weaken the η dependence and finally yield a reconnection rate nearly independent of η (∆r = 0.273) or even negatively dependent on it (∆r ⩾ 0.277). In contrast to the nearly identical η dependence in the low-resistivity region (η ⩽ 3 × 10^{−6}), the η dependences at high resistivity differ and weaken as the separation increases. These results indicate that the effect of the separation is non-uniform and is gradually enhanced with increasing separation.
In the moderate-resistivity range (3 × 10^{−6} < η < 6 × 10^{−6}), the η dependence of the maximum reconnection rate shows a transition between the three types of η dependence as the separation changes. This region is a transition region, in which the η dependences are complex.

The resistivity dependence of the reconnection rate during the explosive stage can thus differ greatly depending on the separation and the resistivity. This could provide possible guidance for experimentally controlling and altering the parameters governing the resistivity η and the distance ∆r between the two resonant surfaces, in order to better understand the conditions under which destabilization and stabilization can be realized.

Figure 5. Resistivity dependence of the maximum reconnection rate (γ_max) in the explosive stage for different separations between the two rational surfaces. The η^{−0.2±0.02} fitting line in the low-resistivity regime is shown by the dashed lines.

Figure 6. Poincaré plots of the magnetic field (upper panels) and contour plots of the toroidal current density (lower panels) when the reconnection rate reaches its maximum, for three different resistivities with µ = 5 × 10^{−7} and ∆r = 0.295.

Figure 7. Poincaré plots of the magnetic field (upper panels) and contour plots of the toroidal current density (lower panels) when the reconnection rate reaches its maximum, for three different resistivities with µ = 5 × 10^{−7} and ∆r = 0.249.

Figure 8. Current density profiles (Z = 0) at different resistivities for (a) ∆r = 0.295 and (b) ∆r = 0.249. (c) The current sheet thickness. The insets in (a) and (b) magnify the region around the thinnest current sheets.
Parallelized Integrated Time-Correlated Photon Counting System for High Photon Counting Rate Applications

Abstract

Time-correlated single-photon counting (TCSPC) applications usually deal with a high counting rate, which leads to a decrease in system efficiency. This problem is further complicated by the random nature of photon arrivals, which makes it harder to avoid counting loss while the system is busy dealing with previous arrivals. In order to increase the rate of detected photons and improve the signal quality, many parallelized structures and imaging arrays have been reported, but this trend leads to an increased data bottleneck requiring complex readout circuitry and the use of very high output frequencies. In this paper, we present simple solutions that allow the improvement of the signal-to-noise ratio (SNR) as well as the mitigation of counting loss, through a parallelized TCSPC architecture and the use of an embedded memory block. These solutions are presented, and their impact is demonstrated by means of behavioral and mathematical modeling, potentially allowing a maximum signal-to-noise ratio improvement of 20 dB and a system efficiency as high as 90% without the need for extremely high readout frequencies.

Introduction

Time-correlated single-photon counting (TCSPC) is a mature and extremely accurate low-light signal measurement technique that uses single quanta of light to provide information on the temporal structure of the light signal. The method was first conceived in nuclear physics [1] and was for a long time primarily used to analyze the light emitted as fluorescence during the relaxation of molecules from an optically excited state to a lower energy state [2]. Today, TCSPC is widely used in many applications that require the analysis of fast, weak, periodic light events with a resolution of tens of picoseconds, such as diffuse optical tomography (DOT) [3,4], fluorescence lifetime imaging (FLIM) [5] and high-throughput screening (HTS) [6]. TCSPC is based on detecting single photons of a periodic light signal, measuring the detection times within the light period and reconstructing the light waveform from the individual time measurements after repeating the measurement a sufficient number of times. Traditionally, the TCSPC technique relied on vacuum-tube technologies such as photomultiplier tubes (PMTs) and microchannel plates (MCPs). These mature technologies are capable of achieving very good performance, but they are expensive, cumbersome and fragile and require extremely high operating voltages, which makes them unsuitable for the fabrication of miniaturized portable TCSPC imaging systems. In recent years, single-photon avalanche diodes (SPADs) have gained wide popularity as a less expensive and more compact alternative to vacuum-tube detectors. The integration of planar epitaxial SPADs in standard CMOS technology has significantly improved the level of miniaturization of SPADs and paved the way for SPAD arrays. These devices possess the typical advantages of microelectronic integrated circuits, such as small size, ruggedness, low operating voltages and low cost. Furthermore, they can be directly implemented with the necessary associated circuits on the same chip to realize an integrated, ultrasensitive, high-speed and low-cost TCSPC imaging system. Many SPAD-based TCSPC systems have been successfully demonstrated recently.
Nowadays, state-of-the-art imaging sensors integrating thousands of single-photon detectors on the same chip have been demonstrated in standard CMOS technology [7,8]. Most integrated TCSPC systems consist of 2D or 1D arrays of SPADs with their associated electronics in the form of smart pixels, resulting in a trade-off between high photon detection efficiency and advanced electronic functionality [9-11]. This approach allows a better detection efficiency compared with a single commercial SPAD. However, such designs should be conceived so that the detection yield is optimized, i.e. they should ensure an optimal detection efficiency and a limited counting-loss probability. In this chapter, we present these two issues and propose methods to quantify and limit their effects based on mathematical and behavioral modeling.

A parallelized macropixel structure for SNR optimization

Single-photon avalanche diodes (SPADs) operate in Geiger mode; in this mode, the p-n junction is biased beyond its breakdown voltage, so that a high electric field exists in the space-charge region and a single charge carrier, ideally created by photoelectric interaction, is enough to trigger a self-sustained avalanche. Indeed, unlike in linear APDs, where stopping the light signal is enough to stop the avalanche, when an avalanche is triggered in an SPAD the current will continue to increase until the component is destroyed by overheating. Therefore, the avalanche must be swiftly quenched by associated circuitry that senses the avalanche and stops it by reducing the reverse bias below the breakdown voltage, so that the avalanche cannot sustain itself, and then restores the initial bias condition. The circuit used to accomplish these tasks is the quenching circuit, and the selection of such a circuit is not a trivial task, as it directly affects many of the SPAD performance metrics [12]. It is therefore important to choose a quenching circuit suitable for the desired application, so that it does not limit or deteriorate the SPAD characteristics. Each SPAD with its associated electronics forms an independent pixel, and the quenching electronics are the main part of the SPAD-associated electronics; however, other smart functionalities can also be included in the pixel. In particular, it is possible to use a gating signal to activate or deactivate the SPAD; this functionality is traditionally used to operate the SPAD in gated mode, where it is enabled only during the gate-on window and disabled during the gate-off interval, such that absorbed photons do not trigger an avalanche. This functionality can also be used to deactivate SPADs showing an abnormal behavior that affects the system yield. In [13], a macropixel architecture that makes use of this approach was implemented. The macropixel (Figure 1) is divided into eight pixels that can be activated or deactivated based on their activity levels. This option was added to ensure that the SNR is not degraded by undesirable effects that could decrease the detector's efficacy. The signal delivered by a photon-counting detector is affected by temporal fluctuations that follow a Poisson distribution. If N is the average number of detected pulses, it includes a fluctuation expressed by the shot noise n = √N, while the other electronic noise can be ignored thanks to the virtually infinite gain of the SPAD.
The total signal N is given by N = N_ph + N_d, where N_ph is the total number of detected photons and N_d is the number of counts caused by dark counts. The associated shot noises are n_ph = √N_ph and n_d = √N_d. The number of photons is measured by subtracting the results of two measurements: one of the total number of counts (N_ph + N_d) and a second one of the dark counts alone (N_d). In this case, the total noise is given by

n = √(N_ph + 2N_d).

If N_d is considered as a constant equal to its mean value instead of being measured each time, the variance of that term drops to zero, and thus the number of photons and its associated noise are given by

N_ph = N − N_d, n = √(N_ph + N_d).

Figure 1. Simplified schematic of the parallelized macropixel presented in [13].

Therefore, the signal-to-noise ratio is

SNR = N_ph / √(N_ph + N_d).

In the case of a multi-SPAD macropixel, the SNR of the macropixel structure is the sum of the photon counts of each SPAD divided by the total noise component:

SNR = Σ_i N_ph,i / √(Σ_i (N_ph,i + N_d,i)),

where N_ph,i is the number of detected photons and N_d,i is the dark count of the i-th SPAD (SPAD_i) in the macropixel. Consequently, the signal-to-noise ratio can be optimized by switching SPADs on/off such that pixels showing undesirable activity levels are deactivated. These undesirable pixels can be 'hot pixels', showing an above-average dark count rate, or 'dark pixels', showing a below-average light sensitivity.

Hot pixel elimination

Hot pixels can be identified through a calibration phase in which the individual DCR of each SPAD, N_d,i, is measured in the dark, and potentially eliminated by a hot pixel elimination (HPE) scheme. To evaluate the benefit of this approach, we assume that the macropixel is uniformly illuminated, i.e. all the N_ph,i are equal to N_ph, and that all the SPAD DCRs are equal to N_d except for one SPAD j that presents a DCR m times higher than the rest. Thus, the signal-to-noise ratio is given by

SNR = 8N_ph / √(8N_ph + (7 + m)N_d).

By turning off the noisy SPAD, the SNR becomes

SNR' = 7N_ph / √(7(N_ph + N_d)).

Consequently, disabling the noisy SPAD leads to a signal-to-noise ratio improvement of

G = SNR'/SNR = (7/8)√((8α + 7 + m)/(7(α + 1))),

where α = N_ph/N_d is the ratio of the mean photon count to the mean DCR. Figure 2 shows the SNR gain versus the hot-pixel DCR multiplication factor m for different α ratios. For a weak signal measurement (α = 0.1), the gain can be as high as 20 dB. Nevertheless, this assessment also clearly shows that the SNR may be slightly lowered if the m coefficient is too low, and thus it is not advisable to remove every SPAD whose DCR merely exceeds the mean DCR. Based on these simulations, an efficient rule of thumb is to disable only SPADs with an m coefficient greater than the α ratio, with obviously m > 1. Previous works have reported that about 20% of the SPADs integrated in an array have a dark count rate about 10 to 1000 times higher than that of the other 80% of the diodes [7,14]. Consequently, there is a high probability of having a hot SPAD among the eight SPADs. Therefore, the proposed structure can lead to a significant SNR improvement ranging from 0 to 20 dB.
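A minimal numeric check of the hot-pixel gain derived above is sketched below; it assumes the eight-SPAD macropixel and the definitions given here, and the function names are ours. The dark-pixel case of the next section can be evaluated with the same snr_macropixel helper.

```python
import numpy as np

def snr_macropixel(n_ph, n_d):
    """SNR = sum(Nph_i) / sqrt(sum(Nph_i + Nd_i)) over the active pixels."""
    return np.sum(n_ph) / np.sqrt(np.sum(n_ph + n_d))

def hpe_gain_db(alpha, m, n_pixels=8):
    """SNR gain (dB) from disabling one hot pixel with DCR = m * Nd.
    alpha = Nph / Nd; counts are expressed in units of Nd."""
    n_ph = np.full(n_pixels, alpha)
    n_d = np.ones(n_pixels)
    n_d[0] = m                                   # the single hot pixel
    snr_all = snr_macropixel(n_ph, n_d)
    snr_off = snr_macropixel(n_ph[1:], n_d[1:])  # hot pixel disabled
    return 20 * np.log10(snr_off / snr_all)

print(hpe_gain_db(alpha=0.1, m=1000))   # ~20 dB, as in Figure 2
```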
Dark pixel elimination algorithm

The other scenario that can lower the SNR is pixels with low light sensitivity, due to a manufacturing defect, dust, or the SPADs not being uniformly illuminated. To evaluate the SNR gain resulting from eliminating such pixels, we consider the case where the eliminated SPADs are completely blind. This is the worst case of light sensitivity, and the elimination of these dark pixels then results in the best SNR improvement (Figure 3). Assuming n dark pixels, the corresponding SNR is

SNR = (8 − n)N_ph / √((8 − n)N_ph + 8N_d).

If all blind SPADs are turned off, the SNR becomes

SNR' = (8 − n)N_ph / √((8 − n)(N_ph + N_d)).

Consequently, for n ≠ 8, the SNR gain is given by

G = √(((8 − n)α + 8) / ((8 − n)(α + 1))).

Figure 2. Signal-to-noise ratio improvement using the hot pixel elimination scheme.

SNR gain evaluation

A low SNR can be the result of low signal levels or of high noise levels; consequently, the SNR can be improved by eliminating pixels exhibiting high noise levels (hot pixel elimination) or pixels exhibiting low light sensitivity (dark pixel elimination). Both schemes require a calibration phase. In the case of the dark pixel elimination scheme, the counting rate of each pixel must be measured under illumination to detect SPADs with low sensitivity, and these measurements should be repeated if the test conditions change. The hot pixel elimination scheme, on the other hand, requires a one-time calibration phase to measure the individual DCR of each SPAD and deactivate the too-noisy SPADs based on their DCR levels. Both approaches result in an improved SNR; however, the dark pixel elimination efficiency was relatively low, whereas the hot pixel elimination was found to be useful in most cases.

Counting loss in TCSPC systems

A typical TCSPC setup consists of a pulsed laser source, a photon detector such as a silicon photomultiplier (SiPM) or an SPAD, a time measurement block based on a time-to-digital converter (TDC) or a time-to-amplitude converter (TAC), and an external CPU to process the measurement results. When a photodetection occurs, a certain time is required for data processing; this time interval is referred to as 'dead time' because the system is incapable of processing any additional photons collected by the SPAD, resulting in counting loss and a reduction of the SNR caused by the decreased counting efficiency (for an ideal nonparalyzable system this efficiency is at best 1/(1 + λ·t_dead), with λ the photon arrival rate and t_dead the dead time). This issue is further complicated by the random nature of photon arrivals and the fact that TCSPC applications such as FLIM and HTS usually deal with high counting rates. In order to increase the rate of detected photons and improve the SNR, many parallelized imaging structures have been reported [5,15], but this trend leads to an increased data bottleneck, which requires the use of complex readout circuitry [7] as well as very high output frequencies to ensure a reasonable dead time [5]. Another solution to the high output rate is the use of an embedded FIFO to store the measurement results while they await processing; nevertheless, FIFOs are very demanding in terms of power and silicon area, and to our knowledge no study has properly determined the FIFO length required to achieve optimal results. It is therefore important to evaluate the counting gain resulting from the use of an embedded FIFO as a function of its depth and the readout rate.

TCSPC system as a queuing model

TCSPC systems are based on measuring the arrival times of single-photon events. Processing these measurements requires several additional operations, such as quenching the photon detector, shaping the regenerated signal, converting the time to a digital value and sending it to a processing unit or memory. While these operations are being conducted, the system is unavailable to process another measurement for a certain time interval, referred to as 'dead time'. To simplify the study of the TCSPC system, the readout period is considered equal to the system's dead time.
The dead time, together with the random nature of the single-photon detection events, leads to random counting losses while the system is busy processing a previous photon arrival, thus limiting the system efficiency. To evaluate the counting loss, the TCSPC system can be modeled as a queuing model with an arrival rate λ representing the average number of photons arriving at the sensor's surface per second, a departure rate μ representing the readout data rate in samples per second, and a service rate representing the rate at which the TCSPC system can process photon detections, equal to (dead time)^{−1}. Figure 4 illustrates this phenomenon; it is clear that even if the arrival rate λ is equal to or less than the departure rate μ, the random nature of the photon arrivals leads to quiet periods followed by peaks of photon arrivals, a well-known characteristic of a Poisson process. During such a peak of activity, some photons will be lost as a result of the system's dead time. The simplest approach to limit this loss is the reduction of the dead time and the readout period, but reducing these times is limited by physical and electrical constraints to tens of nanoseconds. Another approach is the use of parallelized structures with the incoming light uniformly split (Figure 5); assuming an equal distribution of the photon arrivals, this amounts to dividing the arrival rate λ into M equal parts, where M is the number of parallel modules. This approach reduces the counting loss as well as the pile-up effect, but it also creates a data bottleneck at the end of the processing chain, thus requiring the use of high output frequencies to process the resulting high counting rate. Consequently, the loss problem is not resolved but only shifted towards the final output. This problem can be mitigated by integrating a FIFO into the TCSPC system, which allows more flexibility in processing the stochastic arrival events. Indeed, a TCSPC system without a FIFO can be modeled as a one-buffer queuing system; similarly, a TCSPC system integrating a FIFO with N rows can be modeled as an N-cell queuing system. We will assume that the FIFO's input data follow a Poisson process, a reasonable assumption when the average photon arrival rate is significantly lower than the TCSPC's operating frequency. Given the stochastic nature of the measured phenomenon, i.e. the Poisson process of photon arrivals, the system's behavior must be studied in terms of the traffic intensity in and out of the FIFO, to determine the impact of its limited capacity on the sensor's sensitivity due to arrivals missed when the FIFO is full. The FIFO can be equated to a size-N queuing system whose input is a Poisson arrival process with mean arrival rate λ, the probability of n arrivals occurring during the time interval [t, t + τ] being given by

P{n arrivals in [t, t + τ]} = ((λτ)^n / n!) e^{−λτ}.

The FIFO's output follows a periodic departure process with a departure rate μ and a readout period T_d = μ^{−1}, which represents the time needed for one departure to be accomplished. The system can be modeled as a semi-Markov chain where Q_n = Q(t = t_n) is the number of occupied cells in the FIFO immediately after the departure instants {t_n, n = 0, 1, 2, …} [16]. Given that the FIFO's capacity is limited to N cells, the number of occupied cells left after a departure cannot exceed N − 1, and the embedded Markov chain contains N states labeled according to the number of occupied cells left just after a departure, S = {n, n = 0, 1, 2, …, N − 1}.
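Before the chain is solved analytically in the next section, a small Monte Carlo sketch gives an empirical feel for the blocking behavior. This is our construction, not code from the chapter, and its modeling conventions are assumptions: continuous-time Poisson arrivals, serial readout taking one period per stored measurement, and a FIFO holding at most `depth` measurements (including the one being read out), so that depth 1 reduces to the classic dead-time efficiency 1/(1 + r).

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_efficiency(r, depth, n_photons=400_000):
    """Monte Carlo estimate of the fraction of accepted photons.
    r = lambda/mu; times are expressed in units of the readout period T_d."""
    arrivals = np.cumsum(rng.exponential(1.0 / r, n_photons))
    done = []        # scheduled readout-completion times of stored items
    last = 0.0       # completion time of the most recently scheduled readout
    accepted = 0
    for t in arrivals:
        done = [d for d in done if d > t]   # purge items already read out
        if len(done) < depth:               # a FIFO cell is free: accept
            last = max(t, last) + 1.0       # serial readout, one period each
            done.append(last)
            accepted += 1
    return accepted / n_photons

for depth in (1, 2, 4, 8):
    print(depth, round(mc_efficiency(0.67, depth), 3))
```

At r = 0.67 the depth-1 estimate is close to 1/(1 + 0.67) ≈ 0.60, and the efficiency rises towards 1 as the depth increases.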
Figure 6 shows the embedded Markov chain with all the possible transitions from a random state i.

Steady-state probability evaluation

Let X_n be the number of arrivals during the readout period T_d. Given the Poisson arrival property, the probability of j arrivals occurring during one readout period is

a_j = P{X_n = j} = (r^j / j!) e^{−r},

where r, defined as r = λ/μ = λT_d, is the ratio of the photon rate to the readout rate. The number of occupied cells after the (n+1)-th period is increased by the number X_{n+1} of photon arrivals during this period and reduced by one readout. If the number of photon arrivals overloads the FIFO, the number of occupied cells is clipped to N − 1 and a loss of measurements occurs; if the FIFO is empty and no arrival occurs, no readout takes place. Therefore, the relation between Q_n and Q_{n+1} is

Q_{n+1} = max(min(Q_n + X_{n+1}, N) − 1, 0).

The transition probability from state i to state j after m transitions is

P_{i,j}^{(m)} = Pr{Q_{n+m} = j | Q_n = i},

and in particular the one-step transition probability is P_{i,j} = Pr{Q_{n+1} = j | Q_n = i}, which allows us to define the N × N transition probability matrix P of the one-step transition probabilities P_{i,j} [16], where the element P_{i,j} represents the probability of reaching state j given that the system was in state i. These probabilities describe the transient behavior of the system; however, as the system evolves it converges to an equilibrium known as the steady state, with a time-independent distribution [17] represented as a vector π = (π_0, π_1, π_2, …, π_{N−1}), where π_i is the probability of being in state i once the system has reached equilibrium.

Blocking probability

The main goal of using this queuing model is to evaluate the system efficiency based on the probability of an arrival finding the FIFO full and, as a result, being lost; this probability is the blocking probability P_B. In order to evaluate P_B, we need the state distribution at all times and not only at departure instants. Let us define the following system probabilities: P_k, the probability of the system containing k registered arrivals (k = 0…N); π_k, the state probabilities at departure instants (k = 0…N−1); and π_{a,k}, the state probabilities at arrival instants, regardless of whether the arrival joins the queue or not (k = 0…N). An important property of the Poisson arrival process is Poisson Arrivals See Time Averages [16], which implies that the distribution of occupied cells seen at arrival instants is the same as the distribution seen by a random observer:

π_{a,k} = P_k.

On the other hand, the probability that an arrival finds k < N occupied cells in the system is equal to the probability that a departure leaves k occupied cells, given that the new arrival is admitted:

π_{a,k} = π_k (1 − P_B), k < N.

In particular, for k = 0 we have P_0 = π_0 (1 − P_B). Furthermore, arrivals enter the system at a rate λ as long as they are admitted into the queue; hence we define the effective arrival rate as

λ_eff = λ(1 − P_B).

Simultaneously, departures out of the system occur at a rate μ as long as the system is not empty, which allows us to define the effective departure rate as

μ_eff = μ(1 − P_0).

Given that in equilibrium the traffic entering the queuing system is equal to the traffic leaving it [7], we have λ(1 − P_B) = μ(1 − P_0), and the blocking probability is

P_B = 1 − (1 − P_0)/r = 1 − 1/(r + π_0).

The described method was used to determine the blocking probability and the system efficiency η = 1 − P_B = 1/(r + π_0), where π_0 is the steady-state probability of an empty FIFO at departure instants. It is clear that the system's efficiency increases with the FIFO depth, although the marginal gain decreases.
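The steady state and the resulting efficiency can be evaluated numerically. The sketch below builds the one-step matrix from a_j, solves πP = π, and applies η = 1/(r + π_0); the boundary convention for an empty FIFO is our assumption, chosen so that the depth-1 case reduces to the classic dead-time efficiency 1/(1 + r), and the chapter's exact matrix may differ in such details.

```python
import numpy as np
from scipy.stats import poisson

def fifo_efficiency(r, n_cells, x_max=80):
    """Embedded-chain sketch of the FIFO model above.
    a_j = r**j e**-r / j!, Q_{n+1} = max(min(Q_n + X, N) - 1, 0),
    efficiency eta = 1 / (r + pi_0)."""
    a = poisson.pmf(np.arange(x_max), r)
    S = n_cells                              # states 0 .. N-1 (after departure)
    P = np.zeros((S, S))
    for i in range(S):
        for j in range(S - 1):
            if i == 0 and j == 0:
                P[i, j] = a[0] + a[1]        # empty FIFO: 0 or 1 arrival -> 0
            else:
                k = j - i + 1                # arrivals needed to go i -> j
                P[i, j] = a[k] if k >= 0 else 0.0
        P[i, S - 1] = 1.0 - P[i, : S - 1].sum()   # overflow (clipping) state
    # steady state: pi @ P = pi with sum(pi) = 1
    A = np.vstack([P.T - np.eye(S), np.ones(S)])
    b = np.zeros(S + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return 1.0 / (r + pi[0])

for depth in (1, 2, 4, 8):
    print(depth, round(fifo_efficiency(0.67, depth), 3))
```

For depth 1 this reduces to 1/(1 + r) ≈ 0.60 at r = 0.67, matching the single-buffer value used in the case study below, and the efficiency approaches 1 for a depth of 8.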
As a result, when taking into consideration the resources needed for an embedded FIFO, it is safe to say that a FIFO depth of 8 is enough to suppress the input loss due to the blocking phenomenon.

Case study of a parallelized TCSPC system including an embedded FIFO

The TCSPC system illustrated in Figure 8 was designed for an HTS application that requires counting rates up to several MHz per channel. With a TDC dead time of 40 ns, the maximum data rate is 25 MS/s. According to Figure 7, the use of a single TCSPC module would lead to an efficiency η of, respectively, 98, 90 and 50% for a photon rate of 0.25, 2.5 and 25 MHz, i.e. a service rate of 0.01, 0.1 and 1. Obviously, for a service rate r > 1, the system's efficiency tends to 1/r regardless of the use of a FIFO. A photon rate of λ = 25 megaphotons/s is therefore not reasonable in a single-module configuration, but if the arrival rate is divided among eight TCSPC units (Figure 8), and assuming that the arrival process is equally distributed among them, each TCSPC_i receives an arrival rate λ_i = λ/8 = 3.125 MHz, resulting in a service rate r_i = 0.125 and an efficiency η_ph ≈ 90%, i.e. an expected departure rate μ_TCSPC,i ≈ 2.8 MHz out of each TCSPC unit, which is similar to the value obtained in [19]. Given the low service rate of each TCSPC_i, the output of each TCSPC unit will have a distribution very similar to a Poisson process, and the resulting combined process, the sum of eight Poisson processes with their respective arrival rates λ_i, i = 1, 2, …, 8, is therefore also a Poisson process, with an arrival rate λ_f = Σ_i η_ph λ_i ≈ 22.5 MHz. Assuming an output frequency of only 33.33 MHz, the service rate will be r_f = 0.67. In the absence of the FIFO, the system can be assimilated to a single buffer, resulting in a memory-block efficiency η_M = 0.6 and a total efficiency η = η_ph × η_M ≈ 0.54. The efficiency of the system is therefore not improved by the parallelization of the TCSPC alone, even with the reduction of the pile-up effect. However, using the eight FIFO cells leads to a memory-block efficiency η_M ≈ 100%, so the overall TCSPC system efficiency is maintained at about 90%. Without the FIFO, such an efficiency level could only be achieved with a 3 GHz output frequency, which demonstrates the great impact of including the FIFO in the TCSPC system.

Conclusion

The random nature of photon arrivals and applications involving high counting rates require a specialized TCSPC system to process the resulting data and improve the SNR. This requires the optimization of the photon detection process through the reduction of noise effects and of low sensitivity, and the optimization of the system's architecture such that photon events are not lost during the dead time following a previous photon arrival. In this chapter, we have discussed these two issues and presented solutions using mathematical models to assess the gain of such schemes. A low SNR can be the result of low signal levels or high noise levels. In the case of an SPAD, a low signal level results from low light sensitivity, while a high noise level results from a high DCR. Thus, increasing the detector's SNR can be achieved by limiting the negative effect of these two cases. We presented a TCSPC macropixel architecture in which the SNR can be increased by deactivating dark pixels and/or hot pixels.

Figure 8. Parallelization scheme of the TCSPC system with the embedded FIFO, as presented in [20].
A dark pixel is a pixel with an abnormally low sensitivity, and a hot pixel is a pixel with a high noise level compared with the other pixels. The dark pixel elimination scheme requires a calibration phase to determine the activity level of each pixel and identify the low-sensitivity pixels that must be deactivated; this calibration phase should be repeated whenever the measurement conditions change, and leads to an SNR gain of up to a factor of 1.5. The hot pixel elimination scheme, on the other hand, requires a one-time calibration to determine the DCR of each pixel; deactivating the identified hot pixels then allows an SNR improvement of up to 20 dB. The processing of detected photons can be optimized by means of a parallelized TCSPC architecture that makes use of an embedded FIFO to limit the counting loss due to the dead time that follows each photon detection. Using a queuing model, we demonstrated the impact of this approach and quantified the efficiency improvement as a function of the FIFO length, the counting rate and the readout rate. The proposed TCSPC architecture is capable of achieving a 90% efficiency with a counting rate of 25 MHz at a readout rate of 33 MHz; without the embedded FIFO, such an efficiency would require a 3 GHz readout frequency.
Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera

The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation, and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.

INTRODUCTION

Brain source reconstruction of EEG scalp potentials has benefited from the increasing use of advanced head modeling techniques. In addition, combining the use of MRIs with precise positioning and coregistration of the recording electrodes has increased source reconstruction performance. The localization of deep brain sources may especially benefit from accurate electrode determination because it affects the solution of the inverse problem, particularly when the signal-to-noise ratio (SNR) is low (Wang and Gotman, 2001; Koessler et al., 2008). Several methods exist for registering sensor positions, including manual measurement and approaches based on electromagnetic digitization, infrared, MRI, ultrasound, and photogrammetry (Le et al., 1998; Koessler et al., 2007; Zhang et al., 2014). However, final electrode determination accuracy varies widely across these methods. As Adjamian et al. (2004) demonstrated, fiducial-based coregistration, relying solely on anatomical landmarks, produces considerable displacements in the final alignment of electrodes. Beltrachini et al. (2011) contended that deviations of electrode positions of less than approximately 5 mm result in negligible dipole source localization error. Another aspect to consider in quantifying source reconstruction performance is the resulting source SNR under realistic conditions of low sensor SNR. With increasing agreement between the true source configuration and the head model (which is heavily influenced by sensor coregistration accuracy), source SNR greatly improves, effectively lowering the detection threshold for weak sources (Dalal et al., 2014). Laboratory protocols must also take practical considerations into account.
Factors important in designing EEG lab protocols often include preparation time, spatial and practical demands, as well as equipment cost and operational complexity (Russell et al., 2005; Koessler et al., 2011; Qian and Sheng, 2011; Reis and Lochmann, 2015). Routinely scanning volunteers with MRI with an EEG cap in place, despite its high accuracy, is not practical for many research labs, depending on laboratory proximity, scanner availability, and potential scanning costs. Common electromagnetic digitizers, in turn, may not have sufficient accuracy for optimal performance (Dalal et al., 2014; Vema Krishna Murthy et al., 2014). Photogrammetry using ordinary consumer-grade digital cameras can provide a cost-effective and accurate solution, and has already been used for a variety of applications in fields such as geomorphology and archaeology. For example, it has been used to create height maps of landscapes (Javernick et al., 2014), digitize cultural artifacts and monuments (McCarthy, 2014), and create cinematic effects like "bullet time," originally featured in The Matrix (1999). Existing applications related to neurophysiology include the localization of intracranial EEG electrode arrays in neurosurgery patients (Dalal et al., 2008). However, to the best of our knowledge, there are no low-cost, easy-to-use solutions employing photogrammetry-based scalp EEG electrode localization in practice. Qian and Sheng (2011) reported a proof-of-concept using a single SLR camera to determine EEG electrode positions by installing two planar mirrors forming an angle of 51.4°. A limitation of this approach is its high dependence on the precise mirror configuration and the relative displacement of the measured head, and its use with human participants has not yet been reported. Using a similar procedure but employing a camera swiveling over a head model, Baysal and Şengül (2010) demonstrated rapid and accurate localization of sensor positions. However, electrode positions were again simulated using colored circles on an even scalp surface; the actual detection of electrodes on a real subject wearing an EEG cap had not been previously demonstrated. In 2005, Russell et al. proposed a photogrammetry-based technique featuring 11 cameras mounted on a dome-shaped structure that is able to simultaneously capture images from different view angles around the participant's head. Although this system provides highly accurate EEG sensor positions, the proprietary software limits its use to a specific kind of EEG cap, and the procedure requires significant time to complete, since the experimenter must manually select each electrode on the respective images. Here, we present a rapid, accurate, and low-cost alternative method to register EEG electrode positions, using a single digital SLR camera and computer vision techniques implemented in our open-source toolbox, janus3D. Our method is based on photogrammetry, which has been demonstrated to provide highly accurate results with low-cost digital cameras (Baysal and Şengül, 2010; Qian and Sheng, 2011). Based on 2D DSLR camera images, 3D head models of subjects wearing an EEG cap are generated employing structure-from-motion (SfM) photogrammetry software. Electrode positions of a replica head model are determined using the photogrammetry-based approach and a common electromagnetic digitizer. Finally, electrode position accuracy and coregistration accuracy are analyzed and compared. Additionally, we introduce janus3D, a new MATLAB-based open-source toolbox.
This software was implemented as a GUI to allow the determination of highly accurate EEG sensor positions from the individual photogrammetry-based 3D head models. Furthermore, it includes coregistration algorithms to align the models with their corresponding individual MRIs, as well as automatic template-based electrode labeling.

METHODS AND MATERIALS

To evaluate the accuracy of our novel approach, we applied the method to a 3D-printed full-scale replica head model of an adult subject wearing a 68-electrode EEG cap (Sands Research Inc., El Paso, TX, USA), as described in Dalal et al. (2014). The 3D-printed replica head was created after digitizing the subject's head with a high-resolution 3D laser scanner employing fringe projection (FaceSCAN3D, 3D-Shape GmbH, Erlangen, Germany). This device has a measurement uncertainty of 0.1 mm. The obtained mesh was 3D-printed, and the replica head was scanned a second time to generate a mesh without the imperfections caused by the printing process (e.g., an offset due to the thickness of the 3D printing filament used). On this mesh, two researchers independently determined the electrode positions in 3D software. The two sets of electrode positions were averaged and used as "ground truth" in the following analyses. A more detailed description of how the ground-truth electrodes were obtained can be found in Dalal et al. (2014). Our approach uses a single DSLR camera to capture the 2D images that are necessary for the photogrammetry-based 3D reconstruction. Given that the replica head was printed in a uniform off-white color while the reconstruction relies on color-difference information, it was necessary to color the replica "cap." The fabric of the replica EEG cap was therefore colored similarly to a real cap, also serving to provide the contrast crucial for the later texture-based automatic electrode detection. Fifty-six high-quality photographs of the replica head were captured using a 24-megapixel DSLR camera (Sony Alpha 65, Sony Corporation, Minato, Tokyo, Japan) equipped with a Sony DT 3.5-5.6/18-55 mm SAM II lens (35 mm focal length) mounted on a tripod. The exposure index was kept below ISO 800 to manage image noise. The aperture was fixed at f/18 to avoid focal blur and maintain consistent optical properties across the photos. Motion blur was reduced by firing the camera with a wired remote release. The replica head was placed in front of a 6' × 9' chroma-key green screen backdrop fixed to the laboratory wall. Photos were taken by positioning the camera at 4 different height levels, at which the camera described angles of approximately 0° to 45° relative to the horizontal plane, in steps of about 15°. At each height level, the replica was rotated around its vertical axis in steps of about 20°-30° before taking a new photo. If necessary, the position of the tripod was slightly adjusted to fit the model into the camera's field of view. A schematic depiction of this procedure can be found in Figure 1A. The reconstruction of the 3D mesh was performed using the commercial photogrammetry software PhotoScan (Agisoft LLC, St. Petersburg, Russia, 2016). In general, any photogrammetry-based 3D reconstruction software can be used; PhotoScan was chosen because of its convenient usability and fast reconstruction performance. It is able to compute 3D models based on the initial information provided by the photographs and basic intrinsic features like focal length values, which are stored in the Exif metadata.
Although prior camera calibration is recommended by the developer, it had little impact on the final results when the number of pictures was sufficient (around 35 or more) and was therefore omitted from our final protocol. The implemented algorithm searches for salient structures across all photographs and identifies matching points that are used to determine the camera position of each shot relative to the others. To prevent faulty reconstructions and to reduce processing time and the amount of extraneous feature information, we masked irrelevant features in all photos (i.e., all information outside the object of interest) beforehand. For this purpose, we automatically created a binary image mask for each picture, using an appropriate chroma-key threshold. The threshold was selected automatically, but can be adjusted to increase contrast, which in the present case was not necessary. After importing all images into PhotoScan, they were coupled with their corresponding masks. Figure 1B depicts an example of a picture-mask pair for a human subject. First, the algorithm creates a matching point cloud (MPC) and computes the corresponding set of camera positions based on that information. Afterwards, the MPC is densified by extracting additional points from the corresponding high-resolution images in relation to each camera position. On the basis of this dense point cloud (DPC), PhotoScan generates a 3D polygonal mesh representing the object's surface. Following this procedure, we obtained a dense mesh of the replica's surface consisting of 1,717,422 faces and 859,513 vertices. Texture information was obtained by generic mapping after the geometry was computed. The final textured model was exported in Wavefront Object format (.obj) together with an associated texture image file (Figure 2A). To evaluate the quality of the obtained replica mesh, we applied an iterative closest point (ICP) algorithm (Besl and McKay, 1992) to the reconstructed 3D model generated by PhotoScan and the ground-truth 3D model obtained from the second FaceSCAN3D scan. Before applying the ICP algorithm, the reconstructed 3D model was scaled using the same procedure as explained below for the MRI coregistration. After initial registration, the ICP algorithm attempts to minimize the sum of the squared distances from each point of the source point cloud to the closest point of the reference point cloud by a combination of translation and rotation, yielding a minimal-distance solution. We evaluated the accuracy for each vertex of the reconstructed model by localizing the closest vertex in the ground-truth model and separately computing the offset in each orthogonal direction (L1-norm). A schematic depiction of this evaluation can be found in Figure 3A. Further, we studied the influence of the number and resolution of the pictures on the accuracy of the model reconstructed with PhotoScan (Figure 3B). For this purpose, we repeated the reconstruction procedure using downsampled sets of pictures in which each factor was manipulated independently. To downsample the image count (Dimg), we removed pictures in steps of 4, trying to keep a homogeneous coverage of the replica head. This resulted in 12 downsampled sets ranging from 56 images (full coverage) to 12 images, all at 24 megapixels. The image resolution (Dres) was downsampled using the full set of 56 photographs; the resolution was reduced in software in steps of 4 megapixels, ranging from 24 to 8 megapixels.
Additionally, we included in our analysis common image resolutions such as 4K, 1080p, and 720p, corresponding to 7.2, 1.75, and 0.78 megapixels, respectively. This procedure resulted in 8 different sets of downsampled images. For later comparison, the downsampling rate was normalized by dividing the amount of pixels (image resolution × image count) by the highest value (24 megapixels × 56 pictures) and subtracting this value from 1. Hence, both 56 pictures at 12-megapixel resolution and 28 pictures at 24 megapixels correspond to a 50% downsampling rate. Downsampling rates are indicated as values ranging from 0 (full information) to 1 (no information). Table 1 lists all downsampling steps, including their respective downsampling rates. For each set of pictures, we recorded the model resolution and the processing duration. All downsampled models, except those generated with image counts of 12 and 16 pictures, were compared with the original one, and the average mesh deviation was obtained. Given that the reconstructed models showed slightly different mesh extents, mismatching parts across meshes were removed to facilitate the computation of the average distance between the respective meshes after ICP-based fine registration. Such extensions can occur at the outer boundary of the mesh, because 3D models generated with photogrammetry cannot be obtained in isolation; for example, each set of images may contain different information from objects surrounding the object of interest. Given that the surrounding objects are not sampled completely (e.g., the surface on which the object of interest is placed), the reconstructed raw models may differ slightly at the borders that are in contact with other surfaces. The removed parts ensured that all final meshes covered the same area of the original object (i.e., the full head/face including electrodes); thus, the removal did not influence the process itself, but made the meshes comparable. Note that our purpose was to measure errors in the reconstruction of the scanned object, not of the overall scene. Figure 4C depicts the original full-resolution model, surrounded by the respective part that was removed depending on Dimg. Electrode position accuracy was obtained from the highest-resolution model by first coregistering the 3D model to the individual MRI and then acquiring the respective electrode positions. Note that accurate coregistration is critical for acquiring accurate EEG electrode positions. First, the 3D model was reoriented and rescaled from PhotoScan's arbitrary coordinate system into MRI space. For this, it was crucial to correctly select features that are shared by the reconstructed mesh and the rendered MRI.

FIGURE 2 | (A) Half-textured 3D mesh of the replica head generated from 56 photographs. The white surface shows the untextured mesh and the dark red surface represents the added texture. (B) Coregistration of the photogrammetry-based 3D reconstruction to structural MRI. The checkerboard surface was generated from the scalp surface of the segmented MRI and the glassy surface from the photogrammetry-based 3D reconstruction; both models were coregistered using janus3D. (C) Example of a mesh obtained from a human subject wearing an ANT Waveguard 128 EEG cap. (D) Example of a mesh obtained from a human subject with an electrode attached to the nasion, as commonly used for MEG head position measurement. The meshes were generated from 43 (C) or 55 (D) photographs, applying the same reconstruction procedure.
It has been proposed that face-to-face matching works best when selections include parts from below the nose up to the upper facial regions, excluding the cheeks, as illustrated in previous studies (e.g., Kober et al., 2003; Koessler et al., 2011). Accordingly, the parts around the nasal bone (i.e., forehead, cheekbones and eyebrows) turned out to be optimal, owing to their high rigidity. We used the point clouds of the selected facial segments to compute a scaling factor between the 3D model and the MRI. For this purpose, we divided the mean L2-norm (i.e., the Euclidean distance) between each vertex point $v_i^{MRI}$ of the MRI segment and its centroid $C^{MRI}$ by the mean Euclidean distance between each point $v_j^{model}$ of the 3D-model segment and its centroid $C^{model}$, as indicated by the following equation:

$$s = \frac{\frac{1}{N}\sum_{i=1}^{N}\left\| v_i^{MRI} - C^{MRI} \right\|_2}{\frac{1}{M}\sum_{j=1}^{M}\left\| v_j^{model} - C^{model} \right\|_2}$$

Each centroid was defined as the mean coordinate of all points within the segment, calculated for each dimension separately. All points of the 3D model were then multiplied by this scaling factor. Afterwards, both segments were coregistered by applying an ICP algorithm, yielding a rigid body transformation matrix that maps the facial selection of the model onto the MRI. This transformation matrix was applied to the whole 3D model. An example of this step is shown in Figure 2B.

Electrode shapes were identified on the textured 3D model hereafter, by recognizing circular structures on the mesh surface from various view angles. This was achieved by adding a binarized version of the model's texture to the mesh. The binary texture was created by thresholding the model's texture to maximize the contrast between electrodes and cap. From 10 different perspectives, a 2D Hough transform (Yuen et al., 1990; Atherton and Kerbyson, 1999) for circular shape detection was performed, as implemented in the function "imfindcircles" from MATLAB's Image Processing Toolbox. Using multiple view angles compensates for electrodes that appear ellipsoidal at the occluding boundaries of the head. The detected points were back-projected into 3D space, yielding the final electrode positions. Five slightly displaced electrodes were manually corrected on a 3D representation of the textured mesh. In our experience, the number of electrodes that need manual adjustment is around 5% of all electrodes (depending on the contrast between electrodes and the surrounding texture). However, since electrode positions can be manually selected on a textured representation of the mesh, electrode selection can be done precisely thanks to instant visual feedback.

In addition, electrodes were labeled automatically, based on a majority vote. For this purpose, seven independent sets of template electrodes were used. These were coregistered using two automatically detected landmark electrodes (Fpz and Oz) followed by an ICP affine registration. For each electrode to be labeled, the label of the closest electrode in each template set was proposed, yielding seven proposed labels, of which the one receiving the most "votes" was chosen; this leads to an inaccuracy of around 5%. However, for the automatic labeling algorithm to work properly, it is crucial that the electrodes to be labeled and the respective sets of template electrodes agree in the number and relative position of electrodes.
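The voting step itself is simple once the templates are aligned. The following Python sketch illustrates the idea under the assumption that each of the seven template sets has already been coregistered to the measured electrodes (landmark alignment plus ICP); names and array shapes are illustrative only.

```python
# Minimal sketch of majority-vote electrode labeling.
import numpy as np
from collections import Counter

def vote_labels(electrodes, template_sets):
    """electrodes: (n, 3) array of measured positions;
    template_sets: list of (labels, (n, 3) positions) tuples, pre-aligned."""
    final = []
    for pos in electrodes:
        proposals = []
        for labels, tmpl_pos in template_sets:
            d = np.linalg.norm(tmpl_pos - pos, axis=1)   # distance to every template electrode
            proposals.append(labels[int(np.argmin(d))])  # label of the closest one
        final.append(Counter(proposals).most_common(1)[0][0])  # majority vote across templates
    return final
```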
This automatic labeling procedure was implemented and performed in janus3D. Figure 3C depicts the full workflow used to coregister and obtain EEG electrode positions from individual MRIs and 3D models.

FIGURE 4 | (B) Top left: MPC density relative to the full-resolution model; top right: likewise for DPC density; bottom left: overall time consumption relative to the full-resolution model; bottom right: average deviation from the full-resolution model in mm. (C) Example of the area that had to be removed to make the meshes cover a comparable area for each image-count downsampling step. (D) Additional loss of vertex points required to make the meshes cover a comparable area for each downsampling step (Dimg); the dashed area indicates the uninterpretable result of a negative loss of information as a function of downsampling. At 50% downsampling, 7.8% of spatial information was lost.

We estimated the error made in determining the EEG electrode positions and compared it to the performance of an electromagnetic digitizer, the ANT Neuro Xensor (ANT Neuro, Enschede, Netherlands). Using the stylus pen, two experienced experimenters registered the electrode positions three times directly on the replica. One electrode (TP10) was poorly reproduced on the replica head and was therefore removed from further analyses. The remaining 67 electrode positions were coregistered to the individual MRI using NUTMEG (Dalal et al., 2011), based on common fiducial points (i.e., nasion and pre-auricular points). For each method, the Euclidean distances between the electrode positions and the ground-truth positions were determined (L2-norm). The two methods were compared by applying Wilcoxon's signed-rank tests to the respective deviations. Figure 3D gives an overview of the steps used to compare the accuracy of both methods.

The coregistration error was distinguished from the method-specific localization error. Note that determining the electrode positions with the electromagnetic digitizer and with the photogrammetry-based method requires different coregistration approaches, based either on the fiducial points or on matching of the facial surface, respectively. To compare the accuracy of both coregistration approaches, we repeated the coregistration on spatially shifted versions of the 3D model and the previously determined electrode positions, together with the fiducial points. The spatial shifts were achieved by applying random linear transformations, including rotations between 1° and 360° and translations between 1 and 100 mm in each orthogonal direction. Additionally, a random scaling factor between 1 and 5 was applied to the 3D model to transpose it into an arbitrary coordinate system, simulating the PhotoScan reconstruction. Next, we calculated the Euclidean distances between the original electrode positions and the electrode positions after coregistering the modified versions. Finally, we compared both sets of Euclidean distances using Wilcoxon's signed-rank tests to estimate the coregistration error of the two methods (see also Figure 3E).
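This perturbation test can be sketched in a few lines of Python. The `coregister` routine below stands in for the surface-matching or fiducial-based method and is not shown; the random transform parameters mirror the ranges just described.

```python
# Sketch of the coregistration robustness test with random similarity transforms.
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.stats import wilcoxon

rng = np.random.default_rng()

def random_similarity(points):
    """Apply a random rotation (1-360 deg), translation (1-100 mm) and scale (1-5)."""
    R = Rotation.from_euler("xyz", rng.uniform(1, 360, 3), degrees=True).as_matrix()
    t = rng.uniform(1, 100, 3)
    s = rng.uniform(1, 5)
    return s * points @ R.T + t

# shifted = random_similarity(electrodes)
# recovered_surface = coregister(shifted, reference, method="surface")   # hypothetical routine
# recovered_fiducial = coregister(shifted, reference, method="fiducial") # hypothetical routine
# err_s = np.linalg.norm(recovered_surface - electrodes, axis=1)
# err_f = np.linalg.norm(recovered_fiducial - electrodes, axis=1)
# stat, p = wilcoxon(err_s, err_f)   # paired comparison of the two error sets
```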
Pure electrode position accuracy was evaluated by ICP-aligning each set of electrodes to the ground-truth electrode set and tabulating the residual error (i.e., Euclidean distance) for each electrode position. Subsequently, the performance of the coregistration methods was evaluated against each other by applying Wilcoxon's signed-rank test to these residual errors.

Electrode determination and MRI coregistration, as described, were implemented and performed in janus3D. Furthermore, this toolbox includes image-processing functions to facilitate the creation of binary masks from the photos captured by the DSLR camera. janus3D allows importing 3D models in Wavefront OBJ format (Wavefront Technologies, Toronto, Canada) and MRIs in NIfTI file format. The software automatically generates a 3D mesh derived from the MRI's scalp surface by calling the FieldTrip functions "ft_read_mri," "ft_volumesegment," and "ft_prepare_mesh" with the method set to "projectmesh" (Oostenveld et al., 2011). A graphical user interface (GUI) allows the visualization and manipulation of the rendered MRI and the reconstructed 3D model. After a manual pre-orientation into MRI space, similar facial sections can be selected in both meshes, which are then coregistered with an ICP algorithm. If necessary, manual corrections can take place afterwards, as the software provides functions for translation, rotation and scaling. Electrode determination and labeling are facilitated by convenient GUI functions. The resulting electrode positions are provided as raw model positions and projected orthogonally onto the MRI's surface. For automatic labeling of arbitrary EEG cap layouts, janus3D includes an easy-to-use template builder. Figure 5 illustrates the workflow of the whole process, with example screenshots for each step. janus3D requires MATLAB 2015a including the Image Processing and Computer Vision System Toolboxes and FieldTrip (Oostenveld et al., 2011). It is compatible with all platforms running MATLAB and is available as a standalone application for Mac OSX and Linux. janus3D is available at https://janus3d.github.io/janus3D_toolbox/ under the MIT license.

RESULTS

We evaluated the accuracy of the 3D reconstruction by computing the minimal distance between each vertex point of the reconstructed model obtained from PhotoScan and the ground-truth model obtained from FaceSCAN3D. The average distance across all vertex points was 0.90 mm (median: 0.52 mm; SD: 1.00 mm), and 95% of all vertex points showed a deviation smaller than or equal to 2.95 mm. Figure 4A depicts this difference for each vertex point, represented on the surface of the second FaceSCAN3D scan. The influence of image count and image resolution is depicted in Figure 4B: (top left) MPC density, (top right) the respective face count of the mesh, (bottom left) the overall processing time, and (bottom right) the average deviation relative to the model with the highest image count and resolution. The MPC density reduction was more pronounced for Dimg than for Dres, whereas the DPC density reduction was more pronounced for Dres. Although 50% downsampling meant the same total number of pixels contributing to the reconstruction in both cases, the MPC was 2.6 times denser in the Dres condition than in Dimg (Figure 4B, top left panel), whereas the final mesh resolution, expressed by the face count, was 1.6 times higher for Dimg than for Dres (Figure 4B, top right panel).
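The per-vertex accuracy measure behind these numbers reduces to a nearest-neighbor query between two vertex sets. A minimal Python sketch, assuming both meshes are already ICP-aligned and expressed in mm, could look as follows.

```python
# Per-vertex deviation between a reconstructed and a ground-truth mesh.
import numpy as np
from scipy.spatial import cKDTree

def mesh_deviation(recon_vertices, truth_vertices):
    """Both inputs are (n, 3) float arrays of vertex coordinates."""
    tree = cKDTree(truth_vertices)
    dist, _ = tree.query(recon_vertices)      # Euclidean distance to closest truth vertex
    return {
        "mean": float(np.mean(dist)),
        "median": float(np.median(dist)),
        "sd": float(np.std(dist)),
        "p95": float(np.percentile(dist, 95)),  # 95% of vertices deviate by at most this
    }
```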
The overall reduction in processing time was similar for Dimg and Dres. Mesh accuracy diminished with increasing Dimg downsampling, although at a low rate, reaching a maximal deviation of 0.22 mm. Dres showed markedly lower accuracy in the last two downsampling steps: at 1080p and 720p resolution, the deviation was 0.46 and 0.62 mm, respectively. When the resolution was kept at or above 4K (equivalent to 7.2 megapixels), mesh accuracy was only slightly affected, with deviations of at most 0.23 mm. A detailed overview of error values in relation to the actual downsampling rates can be found in Table 1.

The mean (SD) difference between electrode positions determined using the photogrammetry-based approach and the ground truth was 1.3 mm (0.6 mm). Electrode positions obtained using the ANT Xensor electromagnetic digitizer showed a mean difference of 7.8 mm across the 3 measurements (mean [SD]: 7.6 mm [2.2 mm], 8.0 mm [2.5 mm], 7.8 mm [2.1 mm]). Indeed, Wilcoxon's signed-rank test revealed that electrode positions determined with the photogrammetry-based approach had significantly smaller errors than those measured with the electromagnetic digitizer (p < 10⁻⁴ for all 3 measurements). Figure 6A depicts the deviation of each single electrode from the ground-truth electrodes for the photogrammetry-based approach (top) and for the first measurement with the electromagnetic digitizer (bottom). We also evaluated the accuracy of the coregistration methods used in each approach on spatially shifted versions of the electrode positions. The mean (SD) deviation of the new electrode positions from the original ones was 0.78 mm (0.24 mm) after coregistration based on facial surface matching, and 6.14 mm (0.65 mm) after fiducial-based coregistration. Wilcoxon's signed-rank test revealed that electrode position errors due to coregistration were significantly smaller for surface matching than for the fiducial-based method (p < 10⁻⁴).

DISCUSSION

Compared to the ground-truth model, the photogrammetry-based 3D reconstruction deviates by 0.52 mm (median) across all vertex points. This error is partially setup-dependent, because both the number and the resolution of the pictures used for generating the model can influence the reconstruction performance. Although a resolution of 7.2 megapixels yields negligibly small deviations of 0.23 mm compared to the full-resolution model, the deviation increases up to 0.62 mm at 1080p (1.75 megapixels) and 720p (0.78 megapixels). Although the increased error at the lowest resolutions is relatively small, the final 3D meshes appear slightly smeared, due to the considerably lower resolution of the models. Downsampling the image count had negligible effects on the model's error. Nevertheless, the matching point algorithm was affected by image count: the reduction in detected matching points observed at the highest downsampling rates (16 and 12 pictures) strongly impaired the reconstruction, making it impossible to obtain complete models. These results suggest that a complete model reconstruction requires at least 20 different camera perspectives. Furthermore, Figure 4D depicts the relative loss of information. The number of vertices that additionally had to be removed from the highest-resolution model to make all models cover the same area increased noticeably when fewer than 40 images were used for the reconstruction.
This means that the loss of information was higher than the loss expected from downsampling itself. It is therefore advisable to acquire more than 32 images to keep the additional information loss below 5%. Independently of this, the final mesh resolution increases with the resolution of the camera used.

Differences in electrode localization performance can be assumed to lie in the same range as the error of the whole model after downsampling. Since the vertex points of each electrode are drawn from the same set of vertex points used for comparing model reconstruction performance, only a systematic bias specific to electrode vertices could have influenced the final electrode positions. Therefore, electrode position accuracy as a function of downsampling was not tested separately, as it was assumed to be directly linked to the overall mesh reconstruction error.

In the present study, only DSLR cameras were tested, so no conclusions can be drawn on how other types of cameras (e.g., compact cameras) would perform. Lens aberrations and inconsistencies could affect reconstruction quality, but prior camera calibration may compensate for these effects and allow the use of lower-end cameras. In our study, we did not use prior camera calibration because, in our experience, this step mainly improves the 3D reconstruction under weak light conditions or when too few images were captured. Agisoft also recommends prior camera calibration if images from different cameras are merged into a single set.

Electrode localization accuracy benefits from the relatively small 3D reconstruction error associated with the photogrammetry-based approach, and it outperforms common electromagnetic digitizers (see also Figure 6). As ANT states on their webpage, the technical inaccuracy of this electromagnetic digitizer is less than 2 mm. This is still a relatively high inaccuracy compared to the median deviation of the photogrammetry-based approach found here (0.52 mm), which may even be an overestimation. Remondino et al. (2014) compared different 3D reconstruction approaches for different kinds of objects; for static head models reconstructed using Agisoft PhotoScan, they observed a measurement inaccuracy of 0.1 mm, which is even smaller than what we found. The high technical accuracy of the photogrammetry-based approach is reflected in the accuracy of the electrode positions. Whereas a standard electromagnetic digitizer had a mean error of 7.8 mm, the photogrammetry-based approach deviated by only 1.3 mm. These findings are in line with previous work (e.g., Baysal and Şengül, 2010; Dalal et al., 2014). The small errors observed across electrodes (p < 10⁻⁴) suggest that our approach may significantly enhance the accuracy of EEG source reconstruction (e.g., Khosla et al., 1999; Michel et al., 2004; Dalal et al., 2014). Our analyses also show that an important part of the accuracy gain is due to smaller MRI coregistration errors. Whereas electromagnetic digitizers commonly use a fiducial-based coregistration (mean error 6.1 mm), our photogrammetric approach is based on coregistration by facial surface matching (mean error 0.8 mm), which is significantly more accurate (p < 10⁻⁴). Fiducial-based coregistration relies on only a few points that are manually defined on the subject and on the MRI volume, whereas coregistration based on facial surface matching can use several thousand points that are matched iteratively by an ICP algorithm.
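For readers unfamiliar with ICP, a bare-bones point-to-point variant (rigid, no scaling) is sketched below in Python. Real implementations add outlier rejection, convergence checks and robust weighting; this is only meant to make the iterative matching concrete.

```python
# Minimal point-to-point ICP: nearest-neighbor matching plus a Kabsch update.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, reference, iters=50):
    """source, reference: (n, 3) and (m, 3) point clouds; returns aligned source."""
    src = source.copy()
    tree = cKDTree(reference)
    for _ in range(iters):
        _, idx = tree.query(src)              # closest reference point per source point
        tgt = reference[idx]
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - mu_s).T @ (tgt - mu_t)     # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                        # Kabsch rotation minimizing squared distances
        if np.linalg.det(R) < 0:              # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t       # apply the rigid update
    return src
```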
Nevertheless, the facial sections selected from the rendered MRI and from the 3D model should include the same facial region; otherwise the iterative alignment can fail. Taking this into account, janus3D was designed to make this step as easy and reliable as possible. It features a facial selection based on the boundary of the first face selection, which can be performed either on the MRI or on the reconstructed 3D model; the boundary shape is then used as an overlay template for the corresponding second selection. Electrode detection also benefits from the automated algorithms implemented in janus3D. The software automatically determines electrode positions using texture-based shape detection, only occasionally requiring manual correction. Even then, this procedure is faster and more reliable than single-electrode selection with an electromagnetic digitizer, because the user determines electrode positions on a static mesh; direct visual feedback allows the user to detect and instantly correct inaccurate selections.

Nevertheless, some limitations need to be considered. As the photogrammetry-based approach relies on proper image quality, a well-illuminated environment is necessary when acquiring the photos. To avoid image noise, the camera's ISO value should be kept below 800 and the aperture should be f/8 or lower. In our experience, standard ceiling lights in a typical laboratory do not provide sufficient light, depending on the camera. The models depicted in Figures 2C,D, however, were acquired using standard ceiling lights, which explains the somewhat rough appearance of the facial features in those reconstructions. Setting up additional lighting is not only beneficial, but necessary in most indoor environments. Multiple lights or diffusers should be installed to avoid shadows that may "travel" across the head as it rotates. Due to the nature of human skin, reflections should also be avoided, as they similarly impair the reconstruction results. The replica model we used was less reflective than human skin; for that reason, more than 20 pictures are likely to be required when scanning a human subject. In our experience, sufficient reconstruction results are obtained with more than 35 pictures. Further testing also revealed that using 3 cameras close to each other overcomes most of these imperfections: shadows and reflections are captured simultaneously from different view angles and therefore compensate for each other. Another benefit of this setup is that only 20 rotational steps are necessary if the cameras are aligned such that two cameras face the front from two opposing perspectives and one camera faces the top of the head. An array of cameras would be an alternative implementation that would acquire all viewpoints simultaneously, avoiding the need for rotation, and would likely further improve the measurement accuracy: any facial movements or movements due to the subject's rotation would be eliminated, shadow information and reflections would serve as a feature instead of a possible source of inaccuracy, and image acquisition would be reduced to just a few seconds. Our results imply that more than 20 cameras would be needed, with a corresponding increase in equipment costs. Finally, 3D-model-based MRI coregistration could similarly improve MEG coil coregistration, as Vema Krishna Murthy et al. (2014) showed by employing a Microsoft Kinect camera.
Source reconstruction performance tested on a phantom head increased by 137% using Kinect 3D coregistration compared to a Polhemus electromagnetic digitizer. Since the Kinect camera yielded an average coregistration error of 2.2 mm, we would expect improvements of MEG source reconstruction performance on a similar scale using our approach. To achieve this, MEG reference coils would need to be referred to facial landmarks such as those used for registering the subject's head to the MEG's coordinate system. A possible solution could be the use of visible markers on the subject's face that could later be located on the textured mesh.

CONCLUSION

Single-DSLR-camera photogrammetry serves as a rapid method for accurate EEG electrode detection. It is also a cost-effective alternative to common methods such as electromagnetic digitizers and outperforms them in measurement and MRI coregistration accuracy. Finally, reconstructed 3D models of subjects wearing an EEG cap, created with a common DSLR camera and photogrammetry software, may ultimately improve beamformer solutions when conducting source analysis (Dalal et al., 2014).
GFCache: A Greedy Failure Cache Considering Failure Recency and Failure Frequency for an Erasure-Coded Storage System

In the big data era, data unavailability, whether temporary or permanent, has become a daily occurrence. Unlike permanent data failures, which are fixed through background jobs, temporarily unavailable data is recovered on the fly to serve the ongoing read request. However, the newly revived data is discarded after serving the request, on the assumption that data experiencing temporary failures could come back alive later. Such disposal of failure data prevents the sharing of failure information among clients and leads to many unnecessary data recovery processes (e.g., caused by recurring unavailability of the same data or by multiple data failures in one stripe), thereby straining system performance. To this end, this paper proposes GFCache, which caches corrupted data for the dual purposes of sharing failure information and eliminating unnecessary data recovery processes. GFCache employs a greedy, opportunistic caching approach that promotes not only the failed data but also sequential, failure-likely data in the same stripe. Additionally, GFCache includes FARC (Failure ARC), a cache replacement algorithm featuring a balanced consideration of failure recency and failure frequency to accommodate data corruption with a good hit ratio. The data stored in GFCache also supports fast reads for normal data access. Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any specific coding schemes and parameters. Evaluations show that GFCache achieves a good hit ratio with our caching algorithm and significantly boosts system performance by reducing unnecessary recoveries of the vulnerable data held in the cache.

Introduction

In recent years, unstoppable data explosion [Wu, Wu, Liu et al. (2018); Sun, Cai, Li et al. (2018)] generated by wireless sensors [Yu, Liu, Liu et al. (2018)] and terminals [Guo, Liu, Cai et al. (2018)] keeps driving up the demand for larger space in various big data storage systems. Due to the ever-growing data volume and the ensuing concern over space overhead, erasure coding, with its capability to provide higher levels of reliability at a much lower storage cost, is gaining popularity [Wang, Pei, Ma et al. (2017)]. For instance, Facebook clusters employ an RS(10, 4) code to save money [Rashmi, Shah, Gu et al. (2015); Rashmi, Chowdhury, Kosaian et al. (2016)], while Microsoft invented and deploys its own LRC code in Azure [Huang, Simitci, Xu et al. (2012)]. Conversely, countless commercial components of storage systems are inherently unreliable and susceptible to failures [Zhang, Cai, Liu et al. (2018)]. Moreover, as systems aggressively scale up to accommodate the influx of data [Liu, Zhang, Xiong et al. (2018)], data corruptions, either temporary or permanent, become a normal daily occurrence. In an erasure-coded storage system, a reconstruction operation is invoked to recover failed data blocks with the help of parity blocks. Unlike a permanent data failure, which is fixed through a background job, temporarily unavailable data is recovered on the fly in order to serve the ongoing read request. This is because, while permanent failures are under system surveillance through monitoring mechanisms such as heartbeats, a temporary data failure is unknown until the data is accessed.
In a big data storage system like Hadoop, an I/O exception occurs upon accessing unavailable data. If the exception persists after several repeated attempts, the situation becomes a degraded read. Although a data recovery process is triggered immediately to fulfill a degraded read request, system performance still suffers, owing to the disproportionate amounts of I/O and network bandwidth [Cai, Wang, Zheng et al. (2013)] consumed by each recovery process. For example, given a (6, 4) RS code and a block size of 16 MB, reconstructing a corrupted data block in a stripe requires a 16 MB block to be read and downloaded from each of six other healthy nodes. In general, given a (k, m) MDS erasure code, k times of overhead is incurred to reconstruct one block. However, the newly revived data is either discarded immediately or not tracked after serving the request. Such disposal is somewhat reasonable under the assumption that data experiencing a temporary failure could later come back alive. On the other hand, such a design overlooks the importance of keeping recovered data. Due to the uncertainty of causal factors, such as hardware glitches, neither which data will become unavailable nor when the unavailability occurs conforms to any particular distribution. This makes failure patterns difficult to identify and follow amid the various failure statistics. In other words, repeated temporary data unavailability is likely to occur for reasons such as persisting system hot spots, recurring software upgrades and so forth. Therefore, the existing disposal of recovered data inevitably leads to repeated recoveries due to recurring unavailability, thereby straining system resources and performance. Furthermore, statistics [Subedi, Huang, Liu et al. (2016)] indicate that multiple-failure scenarios do occur, and multiple blocks of a stripe can become unavailable simultaneously or incrementally. Given the current practice of one reconstruction operation per degraded read, multiple data corruptions on a stripe inevitably require multiple data recovery processes, even though all the failed data could be produced by a single recovery process. Such redundant recoveries on a single stripe waste system resources [Li, Cai and Xu (2018)] and degrade performance. Additionally, big data storage systems like Azure often support access by multiple clients. Without a central store of recovered data to enable the sharing of failure information among clients, the repeated and redundant data reconstruction operations experienced by one client would unnecessarily recur among different clients. Therefore, we argue that buffering and sharing failure information is instrumental in avoiding unnecessary data reconstruction and will improve system performance. To this end, this paper considers a typical distributed setting of an erasure-coded storage system and proposes GFCache, which caches corrupted data to serve those purposes. GFCache employs a greedy caching approach of opportunism that promotes not only the failed data, but also sequential, failure-likely data in the same stripe. Additionally, GFCache includes FARC (Failure ARC), a cache replacement algorithm featuring a balanced consideration of failure recency and frequency to accommodate data corruption with a good hit ratio. The data stored in GFCache also supports fast reads for normal data access.
Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any coding schemes and parameters. Evaluations show that GFCache achieves a good hit ratio with our caching algorithm and significantly boosts system performance by reducing unnecessary recoveries of vulnerable data held in the cache. For instance, compared to the current system without a failure cache, GFCache reduces the average latency of a request to 12.19% of its original value. Also, GFCache achieves a 24.62% hit ratio, compared with 22.74% for CoARC [Subedi, Huang, Liu et al. (2016)], under workload h3. The rest of the paper is organized as follows: Section II presents the background information; Section III illustrates the design of GFCache; Section IV evaluates it with experiments; Section V reviews the related work; and Section VI concludes the paper.

Erasure coding

In comparison to replication, erasure coding essentially employs mathematical computation to produce redundancy for data protection [Plank, Simmerman and Schuman (2008)]. In general, a stripe consisting of k + m partitions is used as the smallest independent unit that preserves the capability to reconstruct itself. This capability is established through an encoding process, in which m parity partitions are produced from k data partitions through a matrix multiplication. In practice, a generator matrix G is used to form a mathematical relationship between the original data partitions D and the generated parity partitions P, as expressed in

$$G \cdot D = \begin{pmatrix} D \\ P \end{pmatrix},$$

and every one of the k data partitions is thereby protected. Furthermore, if every matrix formed by any k rows of the generator matrix G is invertible, any combination of no more than m partition failures can be restored through a similar inverse matrix multiplication with the k surviving partitions. This desirable property is called the MDS (maximum distance separable) property [Plank (2013)], and the process of regenerating corrupted data partitions is called data recovery. So far, the most widely used MDS code is the RS code. In comparison, non-MDS codes feature non-uniform numbers of participants in the encoding and decoding processes. For example, in LRC codes [Huang, Simitci, Xu et al. (2012); Sathiamoorthy, Asteris, Papailiopoulos et al. (2013)], fewer data blocks are needed to generate local parity blocks than global parity blocks.

Failure scenarios

Data can be corrupted for various reasons, ranging from software glitches and hardware wear-out to possible human mistakes. Although a large school of studies focuses on gathering failure statistics, the time and place of a data corruption are nearly impossible to predict. Therefore, the process of restoring corrupted data before actual data loss occurs is of great importance. To prioritize data recovery in different failure scenarios, a clear classification divides failures into permanent and temporary ones, according to the duration of the event. Permanent failures refer to permanently lost data, as with a node breakdown or a malfunctioning disk. Since a permanent failure often involves a large amount of data and may cause heavy damage, it is always under system surveillance so that alarms are raised in a timely manner. For example, in Hadoop, each node of the cluster reports its health to the metadata server through a periodic heartbeat mechanism.
Once a permanent failure is confirmed, repair efforts are scheduled at less busy hours to revive the data on a replacement node, because such repair work requires a great deal of cooperation from other nodes and takes a long time (e.g., days) to complete [Rashmi, Shah, Gu et al. (2015)]. Conversely, temporary failures denote a brief unavailability of data caused by nondestructive factors, such as a persisting system hot spot or a software upgrade. In essence, a temporary failure is transient and the data may later revive by itself. Instead of causing the damaging impact of data loss, a temporary failure merely slows down the current access request. Because a temporary failure cannot be detected until the data is accessed, it needs to be dealt with right away to finish serving the ongoing request. In systems like Hadoop, a healthy node is randomly chosen to initiate the corresponding recovery by reading data from other surviving nodes on the fly. A large body of studies gathering failure statistics shows that temporary failures are more common and account for the majority of failures in distributed storage systems. Moreover, among all levels of failures, single failures account for the vast majority (99.75%), yet multiple failures are not impossible [Pinheiro (2007); Ma (2015)].

Design

This section details the design of the proposed GFCache for an erasure-coded storage system.

Architecture

Although big data storage systems feature good scalability with a distributed storage cluster, clients still need to contact the centralized Metadata Server for metadata queries. For example, in Hadoop, an access request from the client is sent to the master NameNode before the client contacts the corresponding DataNode(s) for actual data access. This paper argues that providing a failure cache installed in the Metadata Server will not greatly alter the traditional flow of data access, but can significantly improve performance by avoiding unnecessary data recovery processes. Fig. 1 demonstrates the proposed architecture, with GFCache installed on top of a distributed storage system. In the system, two Metadata Servers manage the metadata and supervise a cluster of DataNodes. GFCache is implemented on top of an NVM storage device, such as an inexpensive, small-sized solid-state drive (SSD). A failure cache management module controls both the promotion and the eviction of data. GFCache can be directly plugged into the Metadata Server, such as the NameNode in Hadoop, and shared by all clients; it can also be installed as a standalone node, independent of the Metadata Server. Upon receiving queries from a client, the Metadata Server can check GFCache for fast access; otherwise, the client continues to contact the corresponding Data Servers to fetch the requested data. In terms of data recovery, recently failed data is stored in GFCache after being recovered by a designated Data Server, as a background job with lower priority. In return, before performing a data recovery, the designated node (a Data Server) can check GFCache to gather the participating data needed for reconstruction. As there is already a module for erasure coding that maintains information about erasure-coded data (e.g., the RaidNode in Hadoop), the cache management module can communicate and coordinate with it for failure data eviction, such as the placement of the evicted data and the update of the stripe metadata.
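Since every degraded read ultimately falls back on the matrix-based reconstruction described in the background section, the mechanics can be illustrated with a small numeric example. The sketch below works in ordinary real arithmetic for readability; production erasure codes perform the same algebra over a finite field such as GF(2^8) to avoid rounding.

```python
# MDS encode/recover illustration with a systematic Vandermonde generator.
import numpy as np

k, m = 4, 2
D = np.array([3.0, 1.0, 4.0, 1.0])                 # k data "blocks"

# Identity on top (systematic part) plus Vandermonde parity rows; with
# distinct positive nodes, any k rows of G form an invertible matrix.
nodes = np.array([1.0, 2.0])
V = np.vander(nodes, k, increasing=True)           # m x k parity rows
G = np.vstack([np.eye(k), V])                      # (k + m) x k generator

stripe = G @ D                                     # k data blocks + m parity blocks

# Lose any m blocks (here blocks 1 and 3) and decode from k survivors.
surviving = [0, 2, 4, 5]
recovered = np.linalg.solve(G[surviving], stripe[surviving])
assert np.allclose(recovered, D)                   # original data restored
```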
Greedy caching

Storing failed data in a cache enables the sharing of failure information, thereby avoiding repeated regeneration of the same data. However, simply caching the currently unavailable data does not reduce the redundant recovery processes caused by multiple unavailable blocks in one stripe. This is because, unlike node failures, which can be monitored via the heartbeat mechanism, a temporary data failure is unknown until an attempt is made to access it. Each failed block incurs one degraded read, which must recover the data before it can be promoted to the failure cache, as shown in subplot (a) of Fig. 2. In this figure, the unavailable data blocks B and C in stripe 1 each experience a degraded read and are cached after their recovery.

Therefore, aside from recurrences of data unavailability, GFCache also strives to diminish the redundant recovery of multiple data blocks in a stripe. Given the uncertain status of data before access, GFCache employs greedy caching: in essence, a means of opportunistically pre-fetching additional data in pursuit of a better hit ratio. This greedy practice is based on the key insight that the whole information of a stripe can be recovered by a single reconstruction process, given any combination of no more than m failures under a (k, m) MDS erasure code. In other words, the data produced by many recovery processes triggered by multiple failures in one stripe can actually be produced by one single recovery, leading to substantial savings. Subplot (b) of Fig. 2 illustrates our greedy caching upon each degraded-read recovery: upon the recovery of data block B, all blocks of stripe 1 can be produced. GFCache greedily caches block B and block C, resulting in a failure cache hit when the unavailability of data block C is encountered upon its access. In this way, the redundant recovery of block C is avoided.

Fundamentally, greedy caching is opportunistic, and how, and how much, greed is applied actually matters. In plain words, over-greed may waste cache space and evict more important failure data, while under-greed does not serve the purpose of reducing redundant recoveries. To this end, GFCache adopts three adjustments concerning which data to pre-fetch, how much data to pre-fetch with greed, and how to manage such data in the cache. Firstly, GFCache caches the sequential data following the currently vulnerable block of the same stripe, e.g., the data block C behind data block B, on the assumption that access locality may lead to failure locality. Secondly, GFCache maintains a greedy window of size m to limit the use of cache space, m being the upper bound of the failure tolerance of a (k, m) MDS erasure code; the window size can be adjusted when a non-MDS erasure code, such as an LRC code, is used. Last but not least, except for the actually corrupted data (e.g., block B), data promoted to GFCache with greed (e.g., block C) is assumed to have an unconfirmed possibility of failure in the near future. Therefore, GFCache treats such data as failure-likely and keeps it closest to eviction, so that it does not occupy cache space for long if the prediction does not result in a hit. Pseudo code of greedy caching is included in the replacement algorithm below.
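A minimal sketch of the promotion step on a degraded read is given below. The `reconstruct_stripe` routine and the cache's two insertion modes are hypothetical stand-ins for the paper's components, used only to make the greedy window concrete.

```python
# Sketch of greedy promotion on a degraded read under a (k, m) MDS code.
def on_degraded_read(cache, stripe_id, failed_idx, k, m):
    blocks = reconstruct_stripe(stripe_id)     # one recovery yields the whole stripe (hypothetical)
    # Confirmed-corrupt block: managed by the FARC replacement policy.
    cache.insert_failed(stripe_id, failed_idx, blocks[failed_idx])
    # Greedy window: up to m sequential blocks after the failed one are
    # promoted as "failure-likely" and parked at the eviction end.
    for i in range(failed_idx + 1, min(failed_idx + 1 + m, k)):
        cache.insert_likely(stripe_id, i, blocks[i])
    return blocks[failed_idx]
```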
Failure caching replacement algorithm

Essentially, the significance of a cache depends on how data is managed within the device. At the core of caching, various innovative and powerful cache replacement algorithms have been proposed for incoming normal workload accesses. GFCache differs in that it caches newly recovered data that has recently undergone temporary unavailability. In other words, instead of interacting with normal accesses to healthy data, GFCache only acts when failures happen, and it serves normal accesses as a static read-only store, without dynamic adjustments. Since data failure statistics are quite random and do not follow a particular distribution, GFCache adopts a comprehensive failure caching replacement algorithm that considers both failure recency and failure frequency to aim for a higher hit ratio. In other words, this paper assumes that recently failed data is likely to fail again in the near future, and that data which fails often is prone to fail again. By keeping more recently and more frequently failed data longer in GFCache, more time is allowed for such temporarily unavailable data to revive. The general idea behind this combined consideration can be expressed as

$$C = W \times R + (1 - W) \times F,$$

where R stands for failure recency and F for failure frequency. F is incremented by one when a block is promoted into the cache due to actual corruption. For failure-likely data cached by greed, R is set to zero and F is not incremented, since its corruption has not yet been confirmed. The data with the smallest C is evicted when the cache is full. Since the relative weights of failure recency and failure frequency change dynamically, an adaptive update of W is vital to maintaining a good hit ratio. Algorithm 3.1 provides the details of our caching algorithm with dynamic tuning, which draws inspiration from the ARC algorithm [Megiddo and Modha (2003)]. In Algorithm 3.1, corrupted data and data cached by greed are treated separately. If data is cached due to its own corruption, the Failure ARC (FARC) algorithm is used for adjustment. In comparison, data cached by greed is, on a miss, placed directly at the position of GFCache designated for earliest eviction. For any evicted data, if its original copy has come back alive, it can be discarded; otherwise, it is written back to its original node or a designated node, after which the metadata of the corresponding stripe is updated accordingly. Note that if the original residing node of the evicted data happens to undergo a permanent node failure, the data can be written directly to the replacement node, saving some reconstruction resources.
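The eviction score can be sketched as follows. The adaptive tuning of W from Algorithm 3.1 is replaced here by a fixed weight for brevity, and the entry layout and recency decay are illustrative assumptions rather than the paper's exact design.

```python
# Minimal sketch of eviction by the combined score C = W*R + (1 - W)*F.
import time

class Entry:
    def __init__(self, block_id, confirmed):
        self.block_id = block_id
        self.freq = 1 if confirmed else 0                    # F: stays 0 for greedy promotions
        self.last_fail = time.time() if confirmed else 0.0   # recency anchor (0 => evict first)

def score(entry, w=0.5, horizon=3600.0):
    # R decays linearly from 1 (just failed) to 0 over `horizon` seconds (assumed decay).
    r = max(0.0, 1.0 - (time.time() - entry.last_fail) / horizon)
    return w * r + (1.0 - w) * entry.freq

def evict_one(entries):
    victim = min(entries, key=score)                         # smallest C leaves the cache
    entries.remove(victim)
    return victim
```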
Experiments

This section uses real-world traces to compare GFCache with other approaches: (1) no failure cache (No Cache), which is common in current big data storage systems, as shown in Fig. 1; (2) FARC (Failure ARC), which represents a straightforward adoption of the classic ARC [Megiddo and Modha (2003)]; and (3) CoARC from related work [Subedi, Huang, Liu et al. (2016)]. Note that GFCache differs from FARC by our proposed greedy caching; as opposed to CoARC, GFCache features greedy caching and the consideration of both failure recency and failure frequency.

Environments and workloads

Simulator. This paper adopts a trace-driven simulation method for evaluation. Our simulator models a distributed storage system with a cluster of storage nodes and is based on PFSsim [Liu, Figueiredo, Xu et al. (2013)], which is widely used in various research works [Liu, Cope, Carns et al. (2012); Li, Dong, Xiao et al. (2012a, 2012b); Li, Xiao, Ke et al. (2014)]. Our simulator runs on node 19 of the computing cluster of the VISA lab at ASU. The node has two Intel Xeon E5-2630 2.40 GHz processors and 62 GiB of RAM, with two 1 TB Seagate 7200 RPM disks (model ST1000NM0033-9ZM). The operating system is Ubuntu 14.04.1 with Linux kernel 3.16.0-30. By default, all simulations emulate a cluster of 12 storage nodes with an RS(8, 4) code.

Datasets. The real-world traces we adopt come from Chen et al. [Chen, Luo and Zhang (2011)] and are widely used in academic studies and prototype implementations. In detail, the CAFTL traces cover three representative categories, ranging from typical office workloads (Desktop) and big data queries (Hadoop) to transaction processing on PostgreSQL (Transaction). More detail on the collection of the CAFTL traces can be found in Chen et al. [Chen, Luo and Zhang (2011)].

Failure creation. Since no failure traces exist for the CAFTL workloads, randomization is used to generate data corruptions that emulate degraded reads. We set the total failure rate to around 1% of the whole working data set; in detail, we corrupt a random amount of data in a random stripe to cause unavailability. Results are averaged over 20 runs, in which the total failure rate varies slightly but conforms to the expected normal distribution.

Metrics. This paper adopts the latency and the hit ratio of the failure cache as the metrics for comparing the different approaches. Note that latency is averaged across all requests and then normalized by that of GFCache to rule out differences in units.

Effectiveness of a failure cache

Fig. 3 shows the significant difference in the average request latency between having and not having a failure cache. With a failure cache, the average latency of a request is reduced drastically: for instance, an 88.56% latency reduction is seen under workload d1, and the gap grows to almost 94% under workload h1. Although the performance gap varies, system performance is significantly improved as a whole. The reason behind the boost is that, with failed data cached, fast data access becomes possible through cache hits in GFCache in the following two situations. (1) Normal data access is accelerated by a shortcut that checks GFCache during the communication with the Metadata Servers in the first place; a degraded read can thus be bypassed by a cache hit. (2) During a degraded read, helper data participating in the recovery can be fetched quickly from GFCache instead of from the corresponding node. In other words, the fact that buffering corrupted data in a cache makes a substantial contribution to system performance justifies the installation of a failure cache, considering the decreasing cost of storage devices.

Effectiveness of greedy caching

Although it is straightforward to add a failure cache to an existing system, such a cache fails to reduce the redundant data reconstruction operations occurring within a stripe without our proposed greedy caching technique. To isolate the difference made by greedy caching, we compare GFCache against the baseline failure cache FARC (Failure ARC), which does not use greedy caching. In general, a shorter latency is experienced by a request under GFCache throughout all the workloads.
The improvement ranges from around 13% to 58%, depending on the workload; for example, GFCache outperforms FARC by 13.06% under workload h1. This is because, with greedy caching, failure-likely data is aggressively cached in GFCache. If this opportunistic gamble produces a cache hit in the near future, performance is boosted without suffering the redundant repairs of an otherwise degraded read; if data cached by greed is a miss, little overhead is incurred, owing to its early eviction from GFCache.

Comparison with CoARC

The work most closely related to GFCache is CoARC [Subedi, Huang, Liu et al. (2016)], which features an LRF (least-recently-failed) failure caching algorithm and an aggressive recovery of all other temporarily unavailable blocks in the same stripe. Fig. 5 and Fig. 6 compare GFCache with CoARC in latency and hit ratio, respectively. In Fig. 5, latency is close under some workloads (e.g., h6), whereas GFCache contributes a larger latency reduction than CoARC in other cases: GFCache shows a 5.74% smaller latency than CoARC under d1, and CoARC is 8.46% slower than GFCache under h1. In Fig. 6, unlike for a normal cache, the hit ratio of a failure cache is generally low (no more than 30%) regardless of the specific replacement algorithm. This is largely due to two reasons. Firstly, as an input source, data failures are far less frequent than normal data accesses. Secondly, the failure pattern of corrupted data is hard to capture, even with caching algorithms that prove effective in a normal cache. The gap between GFCache and CoARC is not very big in general; for example, CoARC and GFCache exhibit similar hit ratios of 28.14% and 28.92%, respectively, under h6. However, GFCache surpasses CoARC in other cases: under d1, GFCache achieves a hit ratio of 16.59% as opposed to 14.51% for CoARC. We attribute these gaps to two factors. One is that GFCache considers both failure recency and failure frequency to manage the cached data, while CoARC's LRF only considers failure recency, leading to higher hit ratios for GFCache in some cases. The other is that CoARC's aggressive approach of recovering all the failed data in a stripe must wait for the identification of the last failed block to complete, leading to idle wait time; in contrast, the greedy caching adopted in GFCache causes no idle wait, although it does incur cache misses due to its speculative opportunism.

Related works

This paper studies the data recovery of an erasure-coded storage system with a failure cache; we therefore review related work in the following order.

Data recovery of erasure coding. Existing work focuses on facilitating each individual recovery process, including searching for a more efficient recovery sequence with fewer data reads [Khan, Burns, Plank et al. (2012)] and proposing optimizations in different system and network settings [Fu, Shu and Luo (2014); Shen, Shu, Lee et al. (2017)]. All these works focus on facilitating each recovery process for either single or multiple failures. In comparison, this paper treats the data recovery process as a black box and differentiates itself by reducing repeated and redundant recoveries of failed data through buffering. This paper is therefore orthogonal and complementary to the above work.
Normal caching. Caching is one of the oldest, most fundamental and most widely used techniques in modern computing, employed nearly everywhere in the computational stack. Although various caching policies [Mattson, Gecsei, Slutz et al. (1970); Megiddo (2003)] have been proposed with different trade-offs, the common purpose of such caches is to accommodate incoming normal data accesses, rather than failed data. This paper therefore contrasts with conventional caches by buffering a completely different data source: temporarily unavailable data.

Failure caching. Regarding failure caching in the setting of an erasure-coded storage system, very little research pays attention to the problem of recurring data recoveries. Subedi et al. [Subedi, Huang, Liu et al. (2016)] treat each recovery process as a black box and first propose CoARC, which essentially features a least-recently-failed (LRF) cache that buffers newly recovered data in order to eliminate repeated recoveries of the same data. This paper follows the same track as Subedi et al. [Subedi, Huang, Liu et al. (2016)], but distinguishes itself in two aspects: (1) GFCache employs a greedy caching policy to buffer all the data in a stripe upon its first recovery, while CoARC waits to confirm all unavailable blocks in the failed stripe before starting recovery; (2) GFCache features a more sophisticated eviction policy considering both failure recency and failure frequency, while CoARC employs a simple LRU algorithm on failed data. Besides, GFCache is self-adaptive and scan-resistant, while CoARC is not. Therefore, GFCache is able to achieve a higher hit ratio in general.

Conclusion

This paper proposes GFCache to address repeated and redundant data recoveries with the classic caching idea of buffering failed data. GFCache features greedy caching upon each data recovery process and designs an innovative, self-adaptive cache replacement algorithm with a combined consideration of failure recency and failure frequency. Last but not least, the cached data in GFCache provides fast read access to normal workloads. Evaluations show that GFCache achieves a good hit ratio and manages to significantly boost system performance.
ARMA MODELS FOR MORTALITY FORECAST

Abstract. In the last several decades, many countries have been paying close attention to mortality forecasting because of high longevity risk. The purpose of this paper is to analyze the mortality characteristics of the Baltic countries and make predictions using ARMA models. The research showed that the mortality rate distribution is almost the same in Lithuania, Latvia and Estonia, and all of them exhibit longevity trends. This means that men and women, children and adults have the same mortality structure in all Baltic countries and live longer than before.

Introduction

Time series analysis is one of the most popular methods for examining data of different types with varying characteristics. We will use an autoregressive moving average (ARMA) model, as it helps to represent the structure of the mortality rate data in the Baltic countries. The main objectives of this research are to analyze the mortality rate data and its structure, and to compare mortality distributions by sex and age. We will answer the question whether the mortality rate distribution is the same in all Baltic countries and forecast future mortality rates where ARMA models apply. This information and the results obtained could be used by the social insurance systems of the Baltic countries, insurance companies and pension funds, as longevity trends can cause many problems for their businesses. Many attempts have been made to understand and explain these changes in different countries; some of them are discussed by Tan K.S., Blake D. and MacMinn R. (see [11]). The application of AR-ARCH models for mortality analysis is discussed by Giacometti R., Bertocchi M., Rachev S.T. and Fabozzi F.J. (see [4]). An in-depth longevity analysis can reveal changes in recent mortality rates and help make accurate forecasts.

The paper is organized in the following way. In Section 2, we describe the main tools used for the mortality data analysis. Section 3 deals with the mortality forecast for the populations of Lithuania, Latvia and Estonia. In Section 4, we compare the empirical data with the forecast results. Finally, in Section 5, we present calculations of the remaining lifetime of population members under the condition that mortality rates vary according to ARMA processes.

Autoregressive models

Let X = (X_t, t ∈ T), T ⊂ R, be a stochastic process. Then X_t with E X_t² < ∞, t ∈ T, is said to be (weakly) stationary if, for all t₁, t₂ ∈ T and for every h > 0 such that t₁ + h, t₂ + h ∈ T, the two following equalities are satisfied:

$$\mathbb{E}X_{t_1} = \mathbb{E}X_{t_2}, \qquad \operatorname{cov}\left(X_{t_1+h}, X_{t_2+h}\right) = \operatorname{cov}\left(X_{t_1}, X_{t_2}\right).$$

To analyze stationary time series, different models are used. One of the most popular is a discrete-time stationary process called the autoregressive moving average (ARMA) process. The ARMA(p, q) model is a generalization of the AR(p) and MA(q) models. The process X_t, t ∈ Z, is an autoregressive process AR(p) of order p if X_t is stationary and

$$X_t = \mu + \alpha_1 X_{t-1} + \cdots + \alpha_p X_{t-p} + W_t,$$

where µ and α₁, …, α_p are parameters. Here and everywhere, W_t ∼ WN(0, σ²) is a stochastic process called white noise, with the following characteristics:

$$\mathbb{E}W_t = 0, \qquad \mathbb{E}W_t^2 = \sigma^2, \qquad \mathbb{E}W_t W_s = 0 \ \text{for} \ t \neq s.$$

This means that the AR(p) model approximates the process value at moment t as a linear dependence on the previous values. The process X_t, t ∈ Z, is called the moving average process MA(q) of order q if

$$X_t = \nu + W_t + \beta_1 W_{t-1} + \cdots + \beta_q W_{t-q},$$

where ν and β₁, …, β_q are parameters and W_t ∼ WN(0, σ²).
The process X_t, t ∈ Z, is called an ARMA(p, q) process with mean a if X_t is stationary and

$$X_t - a = \alpha_1 (X_{t-1} - a) + \cdots + \alpha_p (X_{t-p} - a) + W_t + \beta_1 W_{t-1} + \cdots + \beta_q W_{t-q}.$$

ARMA models are widely used for almost all types of data, as they not only capture the regression structure in the data but also add a stochastic component to the model.

Tests for stationarity

As we are going to use ARMA models, we should check whether our data is suitable for this kind of analysis, i.e., whether the time series is stationary. To make such an analysis, we use two different tests. The first one is the augmented Dickey-Fuller (ADF) test, which examines the existence of a unit root. The null hypothesis H₀ states that there is a unit root, so the time series is not stationary. To find the answer, we use the test statistic γ/SE(γ) (for details, see [2]) and compare the result with the relevant critical value of the Dickey-Fuller test. If the test statistic is less than the critical value, the null hypothesis is rejected. Sometimes the ADF test states that the analyzed data has a unit root; however, this is not the final answer to our question, because the ADF test provides only an approximate p-value and may be inaccurate. The shortest way to verify the results is to test the data with a different test. We will do this with the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test. The KPSS test is very practical, as it is run automatically when modeling an ARMA process with the function auto.arima in the R software. When using the KPSS test, it is important not to misinterpret the results: the null hypothesis H₀ of the KPSS test is the opposite of that of the ADF test, stating that the process is stationary. For more information about the theoretical basis and practical usage of these tests, see [2] and [6]. So, we will use both the ADF and KPSS tests to check whether the analyzed data is stationary. The confidence level selected for our tests is 95%. We require that at least one of the test results states that the analyzed data is stationary.

ARMA model selection. AIC criterion

The main purpose of the analysis is to select the ARMA(p, q) model that best approximates the empirical data. To achieve this, we use the Akaike information criterion (AIC), which for a Gaussian ARMA(p, q) model fitted to n observations can be written, up to an additive constant, as

$$\mathrm{AIC} = n \ln \hat{\sigma}_W^2 + 2(p + q + 1),$$

where σ̂²_W is the estimate of the white noise variance σ²_W, calculated by the maximum likelihood method. The optimized ARMA(p, q) model is selected according to the smallest AIC value. Using the function auto.arima in the R software, we do not need to check the criterion manually, as it selects the best model for the analyzed data automatically. As a result, we obtain ARMA(p, q) models in which complexity and accuracy are best balanced.

Empirical data analysis

Historical mortality data of the three Baltic countries, Lithuania, Latvia and Estonia, is analyzed in this research. All the required mortality tables have been taken from the Human Mortality Database, University of California [12]. The Baltic countries' data covers the years 1959 to 2013. The data for 1959-1979 is ignored because of a high probability of inaccuracy; the data for 1980-2008 is used for analysis and predictions, and the empirical data for 2009-2013 is then compared with the predictions made. The research is done for people aged 0-90. It should be noted that the modeling is done for the traditional variable m_{x,t}, called the mortality rate, where µ_{x,t} denotes the empirical force of mortality.
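The test-and-select procedure just described can be sketched in Python as well; the paper itself works in R with auto.arima, but statsmodels offers the same ingredients. Here `series` stands for one mortality-rate series m_{x,t}, 1980-2008, and the order bounds are illustrative.

```python
# Sketch of stationarity testing plus AIC-based ARMA order selection.
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.tsa.arima.model import ARIMA

def is_stationary(series, alpha=0.05):
    adf_p = adfuller(series)[1]               # H0: unit root (non-stationary)
    kpss_p = kpss(series, nlags="auto")[1]    # H0: stationary
    return adf_p < alpha or kpss_p > alpha    # accept if at least one test agrees

def best_arma(series, max_p=3, max_q=3):
    fits = (ARIMA(series, order=(p, 0, q)).fit()
            for p in range(max_p + 1) for q in range(max_q + 1))
    return min(fits, key=lambda f: f.aic)     # smallest AIC wins
```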
Most graphs show the log mortality rate ln(m_{x,t}), as this form of the data represents differences in level better. Figure 1 presents historical log mortality rates for the total population. At first glance, the Baltic countries seem to have similar variance levels and trends. In the further analysis, all time series will be broken down by sex and year, i.e., we consider m^(M)_{x,t} for men and m^(W)_{x,t} for women.

From Figure 2, it can be seen that the Latvian and Estonian data do not show much fluctuation before the year 1995, but similar downward trends then appear, which confirm the existence of longevity. The Lithuanian mortality data, however, fluctuate over time without any clear trend. Moreover, if we look at Lithuanian men's and women's data separately (see Figure 3), we see absolutely different graphs according to sex. So, we will try to capture these changes and forecast them using ARMA models.

Forecast of m_{x,t} for Lithuanian data

We will perform a detailed analysis of the mortality rate series of 0-, 18-, 40- and 65-year-old people. All other results will be summarized and presented in graphs and tables.

Let us analyze the newborn girls' data series m^(W)_{0,t}, t = 1980, ..., 2008. In order to check whether the series is stationary, we use the KPSS test results. The KPSS test p-value is p > 0.1, so our analyzed data are considered to be stationary. As the stationarity requirement is fulfilled, we can apply an ARMA model. The suggested model is AR(1). Therefore, the one-step prediction is

m̂^(W)_{0,t} = 0.0095 + 0.8886 m^(W)_{0,t-1}.

We also looked through the 18-year-old girls' series m^(W)_{18,t}, t = 1980, ..., 2008. Again, the KPSS test suggests a p-value p > 0.1, so we consider the analyzed data to be stationary. As a result, the model AR(1) is selected. Hence,

m̂^(W)_{18,t} = 0.0006 - 0.3268 m^(W)_{18,t-1}.

The same analysis was done for 40-year-old women, and based on the ADF test result (p = 0.0196), the AR(1) model was selected. Therefore,

m̂^(W)_{40,t} = 0.0021 + 0.3761 m^(W)_{40,t-1}.

The series of 65-year-old women's data is more complex, as no stationary models were found. So, we analyze the time series of differences Δm^(W)_{65,t} instead.

The stationarity of the 65-year-old men's data is stated only by the ADF test, with p = 0.0279. We decided that this is enough to find the data approximation. In this case, the model selected is AR(1). So, according to this selection,

m̂^(M)_{65,t} = 0.0353 + 0.639 m^(M)_{65,t-1}.

The same analysis was done for people of different ages. Using the suggested ARMA models for all the data, forecasts were made for the years 2009-2013. The results of ln(m̂_{x,t}) broken down by sex can be seen in Figure 4.
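For concreteness, each AR(1) one-step prediction above is a simple affine map of the previous year's rate; a small R sketch using the newborn-girls coefficients reported above (the input value is hypothetical):

one_step_ar1 <- function(m_prev, intercept = 0.0095, phi = 0.8886) {
  intercept + phi * m_prev  # m-hat_t = c + phi * m_{t-1}
}
one_step_ar1(0.010)  # hypothetical last observed rate m_{0,2008}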
Forecast of m_{x,t} for Latvian data

Again, we will analyze the 0-, 18-, 40- and 65-year-old women's and men's data time series. We start with the women's data. Let us take the data on newborn girls, m^(W)_{0,t}, t = 1980, ..., 2008. This time series is considered to be stationary by the KPSS test (p > 0.1). The selected model is AR(1). Hence, we can suppose that

m̂^(W)_{0,t} = 0.0107 + 0.8206 m^(W)_{0,t-1}.

As the series m^(W)_{18,t}, t = 1980, ..., 2008 is considered to be non-stationary by both tests, we study the time series of the data differences Δm^(W)_{18,t}.

The stationarity of the 40-year-old men's data is proven by both the ADF (p = 0.027) and KPSS (p > 0.1) tests, and the AR(2) model is selected; we have that

m̂^(M)_{40,t} = 0.0076 + 1.0242 m^(M)_{40,t-1} + α_2 m^(M)_{40,t-2}.

The AR(1) model is suggested for the 65-year-old men's data, and stationarity is confirmed by the KPSS test (p = 0.0914). Due to the obtained result, we have

m̂^(M)_{65,t} = 0.0404 + 0.7064 m^(M)_{65,t-1}.

After the same analysis is done for all the ages and appropriate ARMA models are chosen, the 5-year forecasts for the women's and men's data are presented in Figure 5.

Forecast of m_{x,t} for Estonian data

We repeat the same analysis for the Estonian mortality data and present some of the results in more detail. Let us take the time series of newborn girls. As m^(W)_{0,t}, t = 1980, ..., 2008 is not stationary itself, we look through its differences. The time series Δm^(W)_{0,t}, t = 1981, ..., 2008 is considered to be stationary by both the ADF (p < 0.01) and KPSS (p > 0.01) tests. So, after processing the data, we select the MA(1) model, and we have that

Δm̂^(W)_{0,t} = 0.0004.

Then we analyze the 18-year-old women's series of differences. The p-value of the KPSS test for the data Δm^(W)_{18,t} is p > 0.1, and the ADF test suggests p = 0.0194. So, the time series of differences is stationary, the best model for the data is MA(1), and, therefore, Δm̂^(W)_{18,t} = 0.00094. The AR(1) model is selected for the 40-year-old women's data. The stationarity is checked by the ADF test (p = 0.0173). Therefore, the following formula should be used for the one-step prediction:

m̂^(W)_{40,t} = 0.002 + 0.4067 m^(W)_{40,t-1}.

The 65-year-old women's time series is considered to be stationary by both tests: the p-value of the ADF test is 0.015, and the p-value of the KPSS test is p > 0.1. As a result, an ARMA model is selected for this series.

The stationarity of the 65-year-old men's data is proven by the KPSS test (p = 0.0894). As the stationarity condition is fulfilled, we can proceed with the approximation by some ARMA model. As a result, the AR(1) model is selected, and we can suppose that

m̂^(M)_{65,t} = 0.0386 + 0.323 m^(M)_{65,t-1}.

The forecasts for both the women's and men's data are presented in Figure 6. With reference to all the results and forecasts made for the Baltic countries, the following findings were made: (i) the infant mortality rate is relatively high; (ii) the mortality rate distribution for women and men in all Baltic countries is almost the same; (iii) a considerable difference is seen in women's and men's mortality rates at middle ages: the women's data surface is convex upward, the men's downward.
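Where a level series fails the stationarity tests, the analysis above switches to first differences before fitting; a minimal R sketch of that route, on simulated stand-in data of our own, might look as follows:

library(forecast)
set.seed(3)
m <- ts(cumsum(rnorm(29, sd = 0.001)) + 0.01, start = 1980)  # random-walk-like level series
dm <- diff(m)                          # first differences, as for the Estonian newborns
fit <- Arima(dm, order = c(0, 0, 1))   # MA(1) with a mean term
coef(fit)["intercept"]                 # the fitted mean of the differences
forecast(fit, h = 5)                   # 5-step-ahead forecasts of the differences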
Comparison of forecasts

As stated before, the comparison of children's and adults' data is made separately. So, we use the empirical data for the years 2009-2013 and compare it with the forecasts made by the ARMA models. We calculate the mean square error by the formula

MSE = (1/n) Σ_t (m_{x,t} - m̂_{x,t})^2,

where the sum runs over the forecast years; in our case, these are the years 2009-2013.

For a better understanding of the results, let us look at Figures 9 and 10, where a comparison of the empirical mortality rate and the forecasts is presented for the years 2009 and 2013. According to the information that can be seen from the graphs, we can make some observations: (i) The forecasts for the year 2009 have a high level of accuracy, while the forecasts for the year 2013 are not so precise in comparison with the empirical mortality data. This is a typical situation: as information ages and forecasts depend on previous values, the results grow worse as the number of forecast steps increases. (ii) During the analysis, it was seen that the women's and men's results have the same structure of √MSE. As time increases, we see larger errors. Still, the situation with the children's data is even more complex: the √MSE calculated from the children's mortality data is much higher than for adults. Moreover, the accuracy of the forecast fluctuates strongly depending on time. (iii) ARMA models predict the adult mortality rate more accurately than the children's.

Now we discuss the forecasts for the year 2016. From Figure 11, we can see that the mortality rates in all Baltic countries are nearly identical for all cuts. Again, boys' and men's mortality rates are much higher than girls' and women's, respectively. The most significant difference is noticed in the adult data forecasts.

Unfortunately, after all the analysis, it was found that no long-term forecasts could be made: they converge to the average too fast because of the small ARMA model order. So, the forecasts for the years 2016 and 2026 show almost no differences. In order to produce longer-term forecasts with a higher level of accuracy, we must use ARMA models of greater order or choose other time series models.

Average time of life remaining

In this chapter, we use the forecasts for the year 2016 to calculate the average time of life remaining for individuals of different sex and age, i.e., ê^(W)_{x,2016} and ê^(M)_{x,2016}. As

_k p_{x,t} = p_{x,t} p_{x+1,t+1} p_{x+2,t+2} ... p_{x+k-1,t+k-1}

for all possible x, k, t, and p_{x,t} = 1 - q_{x,t} ≈ 1 - m_{x,t}, we have

ê_{x,2016} = Σ_{k ≥ 1} (_k p_{x,2016}).

All the results are shown in Table 2 and Figure 12. This is one more confirmation that the mortality distributions in the Baltic countries are very similar and that the women's and men's mortality rates differ. In addition, these results for the Lithuanian population were compared with the data presented in the research [5]. It was noticed that the forecasts for the Lithuanian data are very similar. Even more, Lithuanian men live almost 1 year longer in the year 2016 than 4 years earlier. This is one more confirmation that the average number of years remaining is still growing for the Lithuanian population.
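Both the accuracy measure and the remaining-lifetime calculation reduce to a few lines of R; the sketch below uses a hypothetical Gompertz-like forecast curve m2016 for ages 0-90 in place of the paper's actual forecasts:

rmse <- function(actual, forecast) sqrt(mean((actual - forecast)^2))  # the √MSE above
# usage: rmse(m_empirical, m_forecast) on vectors for the years 2009-2013

m2016 <- 0.0001 * exp(0.085 * (0:90))  # hypothetical forecast mortality curve, ages 0..90

# Curtate life expectancy e_x = sum over k of the k-year survival probabilities,
# with p_x ≈ 1 - m_x as in the text (the table is truncated at age 90, as is the data).
life_expectancy <- function(m, x) {
  p <- 1 - m                           # one-year survival probabilities by age 0..90
  sum(cumprod(p[(x + 1):length(p)]))   # _k p_x for k = 1, 2, ...
}
life_expectancy(m2016, x = 65)         # ê_{65,2016}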
Concluding remarks

The mortality rates of all Baltic countries were compared, and forecasts were presented. After the data analysis, it can be stated that the mortality rate distributions of the three Baltic countries are almost identical. The Latvian population lives a little shorter, and the Estonian population a little longer, in comparison with the Lithuanian population. The men's death rate is higher than the women's death rate; children's mortality is lower than infant mortality, but it rises as children grow up. In general, the children's data fluctuate strongly depending on age, while the adult mortality rates show a visible upward trend.

During the research, some problems were encountered. The time series for each age category was not long: it had only 29 observations, which is not much for finding a stationary ARMA model that fits the data best. As the data had a high level of fluctuation, especially in recent years, the accuracy of the forecasts dropped. As a result, we had quite large mean square errors, especially in the children's data analysis. For the same reason, long-term predictions do not provide any additional information reliable enough for firm statements.

In order to achieve a high level of forecast accuracy, different ARMA model modifications should be used, or the mortality rate series may be approximated by other models. Nevertheless, the analysis was sufficient to establish the longevity trend in all Baltic countries and to see that the forecasts still represent it.

Figure 9. Comparison of the empirical log mortality rate and forecast for the year 2009.
Figure 10. Comparison of the empirical log mortality rate and forecast for the year 2013.
Table 2. Estimation of the average number of years remaining e_{x,2016} for the Baltic countries.
Figure 12. Estimation of the average number of years remaining ê_{x,2016}.
4,467.4
2016-11-15T00:00:00.000
[ "Economics", "Medicine" ]
Based on the thesis that baryons, including protons and neutrons, are Yang-Mills magnetic monopoles, which the author has previously developed and which has been confirmed by over half a dozen empirically-accurate predictions, we develop a GUT that is rooted in the SU(4) subgroups for the proton/electron and neutron/neutrino which were used as the basis for those predictions. The SU(8) GUT group so developed leads, following three stages of symmetry breaking, to all known phenomenology, including a neutrino that behaves differently from the other fermions, lepto-quark separation, replication of fermions into exactly three generations, the Cabibbo mixing of those generations, weak interactions which are left-chiral, and all four of the gravitational, strong, weak, and electromagnetic interactions. The next steps based on this development will be to calculate the masses and energies associated with the vacuum terms of the Lagrangian, to see whether additional empirical confirmations can be achieved, especially for the proton and neutron and the fermion masses.

Introduction

In a recent paper [1], the author introduced the thesis that baryons, including protons and neutrons, are Yang-Mills magnetic monopoles. Based on this thesis, it was possible to predict that the electron rest mass is related to the masses of the up and down quarks according to a relationship which emerges following a Gaussian integration over three space dimensions. Subsequent calculations showed that the best known values of the up and down masses in turn lead to a binding energy of 7.667 MeV for the proton and 9.691 MeV for the neutron, yielding an average binding energy of 8.679 MeV per nucleon ((12.6) through (12.8) of [1]), very much in accord with what is empirically observed, and to binding energies for 56Fe which were predicted to be extremely close to what is observed for that nuclide. Noting also that the deuteron binding energy is extremely close to what is known from the best available data to be the mass of the up quark, we further hypothesized that these might be one and the same, which could be explained if the energies released during nuclear fusion are based on some form of "resonant cavity" analysis in which the discrete energies observed to be released are based on the masses of the quarks contained within the nucleons and nuclides. This led to a prediction that 56Fe has a latent available binding energy of 493.028394 MeV ((12.14) of [1]), which we then contrasted with the empirical binding energy of 492.253892 MeV. This small difference was understood as indicating that 99.8429093% of the available binding energy predicted by this model of nucleons as Yang-Mills magnetic monopoles goes into binding together the 56Fe nucleus, and that the remaining 0.1570907% goes into confining the quarks within the nucleons. This in turn led us, by the conclusion of [1], to a deepened understanding of how quark confinement is intimately related to nuclear binding, fission and fusion, and the peak in per-nucleon binding energies at 56Fe, and perhaps to an understanding of the so-called First EMC effect (see [1], pp. 62 and 66).
A second paper [2] extended this analysis, and showed that based on this same "resonant cavity" analysis, the binding energies of the remaining 1s nuclides, namely 3H, 3He and 4He, could likewise be predicted (in (7.2) of [2]) to better than 1 part per million. In Section 10 of [2], we explained why this should be regarded as an exact relationship, and therefore modified our earlier hypothesis that the deuteron binding energy is exactly equal to the up quark mass into one in which these energies are very close, to just over 8 parts in ten million, but not exactly the same. In Section 9 of [2] we used these results to predict solar fusion energies solely from the up and down quark masses, and found the results to also be in very tight accord with the observed data.

The lesson taken from [1,2] together is that empirical evidence strongly supports the thesis that Yang-Mills magnetic monopoles are in fact baryons, on the basis of seven independent predictions which closely match the experimental data, specifically: 1) the electron mass in relation to the up and down masses; 2) the 56Fe binding energy specifically, and the per-nucleon binding energies on the order of 8.68 MeV in general; 3) the proton minus neutron mass difference; and 4) through 7) the four distinct nuclide binding energies predicted for 4) 2H, 5) 3H, 6) 3He and 7) 4He. The study of solar fusion in Section 9 of [2] does not contain anything independent of predictions 1) through 7), but rather applies several of these predictions in combination, and underscores that a "resonant cavity" analysis of nucleons and nuclides does consistently lead to empirically-accurate binding energies, as evidenced by all of predictions 3) through 7) above.

While the theoretical foundation for all of these successful predictions was laid throughout [1], it was the field strength tensors for the proton and neutron, (11.3) and (11.4) of [1], which, when used to calculate the energy according to (11.7) of [1], formed the specific basis for the calculations that led to all of these predictions. These field strength tensors, in turn, emerged as stable magnetic monopoles following the specification of the SU(4)_P "protium" and SU(4)_N "neutrium" gauge groups in Section 7 of [1], followed by breaking the symmetry of these groups using the baryon minus lepton number generator B - L ((8.1) of [1]). So we take the thesis presented in Sections 7 and 8 of [1], that the protons and neutrons emerge following the B - L breaking of the SU(4)_P and SU(4)_N groups, to be supported by the compelling evidence of predictions 1) through 7), and so regard SU(4)_P and SU(4)_N as subgroups that do describe the real physical universe, not just some arbitrary groups that may or may not appear in the natural world. In short, we take the accurate empirical predictions 1) through 7) above as direct evidence of the physical reality of SU(4)_P and SU(4)_N.

Based on all of the foregoing, we shall in this paper take SU(4)_P and SU(4)_N as physically-validated, reliable building blocks for developing a "Grand Unified Theory" (GUT) based on the empirically-confirmed thesis that baryons, including protons and neutrons, are Yang-Mills magnetic monopoles.
Unification and Grand Unification in Physical Science

At least since the time when Isaac Newton hypothesized that the terrestrial "force" which caused an apple to fall from a tree was the same as the celestial "force" which guided the movements of the planets, unification has been a central objective of physical science. The preeminent scientist, entrepreneur and statesman Benjamin Franklin catapulted to fame when he realized that the terrestrial sparks he was creating in his laboratory were of a unified piece with the lightning from the heavens, and applied that understanding in a very practical way to develop lightning rods, which cured an epidemic of mid-18th-century electrocutions throughout Europe brought about by the superstition of sending church bellringers to steeples, at the highest place in town, to clang large metallic bells to ward off the anger of the gods every time a lightning storm approached. James Clerk Maxwell in 1873 elaborated what to that date was, and perhaps even to today's date is, the preeminent physical unification, and at least the very paradigm of unification, as he pulled together the disparate threads of Gauss, Faraday and Ampere into a unifying set of equations for electricity and magnetism. This was deepened a generation later with Einstein and Minkowski's Lorentz-invariant unification of space and time. In these and similar endeavors, the underlying theme has always been the same: to take what appear on their surface to be disparate natural phenomena, and acquire a deeper understanding which shows them to be governed by a single, common principle. The success of past unifications leaves today's generation of physicists with the firm conviction that further unifications can still be achieved, and that one day in the future, all of the laws of nature can and will be deduced from one common vantage point. After all, what is natural science other than an endeavor to explain what is observed through our direct senses and the clever instrumentation that extends our senses, by relating those observations to mathematically precise laws of nature which apply consistently, uniformly and replicably, without exception, in the broadest possible range of circumstances?

So-called "Grand Unified Theories," or GUTs, are part and parcel of this esteemed tradition, and are based specifically on the advent of Yang-Mills gauge theories and the realization that these Yang-Mills theories have a remarkable capacity to explain what is observed in nature, as evidenced through their already-successful application to weak, strong and electroweak interactions. The Georgi-Glashow SU(5) model [3], which was reviewed at some length in Section 8 of [1], was one of the first "GUTs" and is perhaps the best known. The basic idea of Georgi-Glashow and any other GUT is to be able to represent all of the fermions which are observed in nature, and all of their interactions, using a single, simple gauge group with a symmetry which is then broken in one or more stages to arrive at the particle and interaction phenomenology observed in a laboratory setting. The fermions are the up and down quarks, the electron and neutrino leptons, and ideally their higher-generational carbon copies distinguished from the first generation solely by larger mass. The generators of the gauge group represent "interactions," of which there are understood to be four: gravitational, strong, weak and electromagnetic. The eigenvalues of the diagonalized generators of the gauge group, which are linearly related to discrete natural numbers
such as 2/3 and -1/3 and ±1/2 and -1 and 0, represent the "charges" of these fermions with respect to these interactions. A particular fermion may be associated with a particular eigenstate (eigenvector) of a representation of the GUT gauge group if all of its eigenvalues for all of the generators match up with what are known to be the charges of that fermion with respect to all of these interactions. So, for example, an electron is by definition the fermion eigenstate for which the lepton number eigenvalue L = 1, the baryon number eigenvalue B = 0, and the electric charge eigenvalue Q = -1. And the transitions/decays of a fermion from one eigenstate into another, or its interactions in a given eigenstate, lead to the mediating vector bosons of the theory. The trick in any GUT is to characterize all of the interrelated charges of all of the fermions in the "simplest" way possible, to understand the stages and ways in which the symmetry of the group is broken, starting at ultra-high energies and working down to energies which can be reached in a laboratory setting, and of course, to end up with something that accurately comports with all observed empirical data.

With this in mind, and as used in the discussion here, we distinguish "GUTs" from "unified field theories" more generally, as that subset of unified field theories which is specifically centered on understanding fermions and their interactions via their discrete charges using Yang-Mills gauge groups, and on making whatever observable predictions can be made based on such an understanding. So, for example, Kaluza-Klein theory, which to this day represents an exceedingly elegant classical unification of general relativity with Maxwell's electrodynamics using a fifth spacetime dimension that from today's vantage point is best understood as the "matter dimension" [4], is most certainly a form of "unified field theory" (and one which in the view of this author warrants more universal acceptance than it has at present, especially given that what we know of Yang-Mills gauge theory should permit both gravitation and electromagnetism in Kaluza-Klein form to be extended into non-Abelian domains). But Kaluza-Klein is not a GUT in the sense that GUTs are focused on the use of Yang-Mills gauge groups to represent fermions and their interactions, and Kaluza-Klein, at least absent a Yang-Mills extension, has nothing to say about fermions. While one may define the term "GUT" more expansively to also include so-called "supersymmetric" theories, the foregoing defines, by example, what we have in mind in this paper when referring to a "GUT", as opposed to a "unified field theory" without the GUT qualifier.
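As a toy illustration of this eigenvalue matching (our own sketch, using the standard electroweak convention Q = I_3 + Y/2 which the paper itself adopts later in (4.6)), the left-handed lepton doublet can be checked in a few lines of R:

I3 <- diag(c( 1/2, -1/2))  # weak isospin generator on a (nu, e) doublet
Y  <- diag(c(-1,   -1  ))  # weak hypercharge of the left-handed lepton doublet
Q  <- I3 + Y / 2           # electric charge generator
diag(Q)                    # eigenvalues 0 (neutrino) and -1 (electron), as required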
The Galilean foundation for all of modern science is that theory must be confirmed by observation, and that the goal, or at least an important by-product, of theory is to systematically explain observation. For physical theorists, the pursuit is about systematically comprehending nature and confirming that comprehension based on experimental data, or as Hawking and Einstein have more loftily put it, "reading the mind of God." Because GUTs necessarily theorize about the behavior of nature at ultra-high energies such as 10^15 GeV and even higher that are unlikely ever to be reached by human experimentation under any foreseeable circumstances (with the possible exception of what we can learn by peering back billions of years through astronomical telescopes), such GUT theories necessarily opine on physics that may forever be beyond the reach of direct experimental confirmation. So the only way to discern the primacy of one GUT over another is indirectly, by virtue of what it predicts about low-energy phenomenology that we can or may soon be able to observe. So as we consider how to construct the "puzzle" which is a GUT and decide what "pieces" to use in that puzzle, we want to start with puzzle pieces that are already solidly grounded in empirical observation.

Based on the seven independent predictions enumerated in the last section, which closely match the empirical nuclear binding and related data based on the thesis that baryons are Yang-Mills magnetic monopoles, the GUT that we develop here will start with the SU(4)_P and SU(4)_N gauge groups developed in Sections 7 and 8 of [1], knowing that these groups have now been validated by over half a dozen independent pieces of empirical data from nuclear and particle physics. Additionally, because we have shown in [1,2] how to connect these gauge groups to energy numbers which can be, and indeed have been, empirically confirmed, an important objective in developing a GUT on the basis of SU(4)_P and SU(4)_N is to lay the foundation for perhaps obtaining additional, similar, successful predictions of other known energies which have been crying out for theoretical understanding for decades, most particularly, and most importantly, the free proton and neutron masses, and the observed fermion masses.

If it should be possible on the basis of a particular GUT to make accurate predictions of the proton and neutron and/or fermion masses, then even absent the ability ever to directly observe the 10^15 GeV and higher energy phenomena which lead to these predictions, such predictions would certainly be solid evidence, albeit through indirect inference rather than direct observation, that such a GUT has also explained to us how nature behaves behind the veil of energies that we shall most certainly never get to directly observe (again, with the possible astronomical caveat).
In other words, because a GUT, by its very nature, seeks to reach into energy domains that will likely be forever beyond human reach, it must fulfill the Galilean project by accurately explaining all of the masses and energies that we do observe through the instrumentation that does rest within our grasp, while at the same time teaching us about physics at energies that we shall likely never have the capacity to see directly. It is the prediction of the energies and masses we do observe that gives us some measure of confidence that we are not being led astray by what the GUT tells us about the physics of unreachable energies. To use a different metaphor, GUTs seek to teach us about an entire iceberg, most of which we shall never be able to observe. So what the GUT teaches us about the tip of that iceberg which we can see must be solidly confirmed by empirical data every step of the way, for us to have some confidence in what it teaches us about the rest of the iceberg which will forever remain out of sight.

Based on the foregoing, the purpose of this paper is to develop a GUT rooted in the thesis that baryons are Yang-Mills magnetic monopoles and the seven successful predictions which have already emanated from that thesis in [1,2], and to lay the foundation for additional mass and energy predictions, including those of the free proton and neutron and the fermion masses.

Some Clues for Pursuing the Proton, Neutron and Fermion Masses

Before we can make predictions of the proton and neutron and fermion masses, we must construct a reliable, empirically-grounded GUT, and we must know how to break its symmetry. Why do we say this?

We have already shown in [2] how the nuclear binding energies in the 1s shell arise from using the field strength tensors (1.1) and (1.2) to calculate energies from the outer products of these tensors ((4.9) through (4.11) of [2]). But these binding energies are calculated using only the pure gauge field terms (3.1) of the Lagrangian developed in (3.12) of [2]. We have not yet even begun to develop the other terms of this Lagrangian at all, yet it is made very clear by the development in [1,2] that additional energy numbers can and will arise from the complete development of those terms. So, we must develop these additional terms, and we will look to them to perhaps lead us to the proton and neutron and fermion masses. But because all of these terms contain the vacuum φ, the actual numeric energy values we obtain from these φ-containing terms will depend upon the GUT gauge group we choose, and upon its vacua ⟨φ⟩ and how these vacua are used to break symmetry. (We use the plural "vacua" because we have in mind breaking symmetry in sequence using the Planck vacuum on the order of 10^19 GeV, the so-called GUT vacuum on the order of 10^15 GeV, and the Fermi vacuum v_F = 246.219651 GeV used to break the electroweak interactions down to the electromagnetic interactions.)

For example, given from (3.11) of [2] that the gauge-covariant derivatives contain the gauge fields themselves, we see that terms in (3.2) containing D_μφ will mix the gauge fields G_μ with the vacuum fields φ. So whereas the pure gauge terms (3.1) led to expressions such as (4.9) and (4.10) of [2], we should be alert to opportunities to develop mixed gauge field/vacuum terms in which one of these matrices is replaced by a vev, especially the Fermi vev v_F = 246.219651 GeV, so that we can develop an energy "toolbox" with expressions such as square roots of quark masses multiplied by the square root of the Fermi vev.

Why the Fermi vev? And why these square root expressions? Because numerical inspection of the square roots of the three main masses in (4.11) of [2],
used to calculate binding energies throughout [2], multiplied by the square root of the Fermi vev, shows that the resulting energies sit at the scale of the nucleon masses. (The author's subsequent paper in this same special issue of JMP starts with (3.8) to indeed successfully explain the free neutron and proton rest masses.)

So the proton and neutron masses, via the order-of-magnitude analysis above, straddle right down the middle of the Fermi vev and the masses of the quarks. One should therefore be on the lookout for ways to exploit this via the "mixed" gauge field/vacuum G_μφ terms in Lagrangian (3.2). And as noted at the end of Section 10 of [2], one should keep in mind that the relation for the free neutron minus proton mass difference now allows us to find the neutron and proton masses individually, so long as we can find some expression which involves the sum of these masses. So it may well be that our target should be M(n) + M(p), or some multiple thereof (perhaps the 4He alpha nucleus studied extensively in [2]?), rather than either of these masses individually.

For another example, we go all the way back to (2.1) of [1], Maxwell's charge equation

J^ν = ∂_μ F^μν + m^2 A^ν,

where in the final term we have hand-added a "Proca mass." Based on (3.3), we can readily specify an analogous non-Abelian field equation. Then, if we pursue the same course of development as in [1] from start to finish, when we finally reach the counterpart of (11.19) of [1] and collapse the propagators so that interactions occur essentially at a point, we will end up with a Lagrangian term of the schematic fermion-bilinear form ψ̄(...)ψ. But this is the form of a fermion mass term in a Lagrangian, with the mass of the fermion specifically identified with the energy terms sandwiched between ψ̄ and ψ. Concurrently, the vev v should also enter into this when we break symmetry with a generator G by setting ⟨φ⟩ = vG.

So this is a possible prescription, using the φ terms in (3.2), for revealing a fermion rest mass out of a Lagrangian while preserving gauge symmetry and thus maintaining renormalizability! But because the specifics of all of this center around the vacua ⟨φ⟩, it becomes essential to have the right GUT gauge group, and to know how to break its symmetry in the appropriate sequence. As noted above, to do this, we begin to develop a GUT gauge group using the SU(4)_P and SU(4)_N gauge groups developed in Sections 7 and 8 of [1], knowing that these groups have now been validated by over half a dozen independent pieces of empirical evidence from nuclear and particle physics. We build upon these empirically-validated puzzle pieces in the hope that this run of positive empirical predictions will continue with the masses and energies predicted by the terms in (3.2) which include the vacua ⟨φ⟩.
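The order-of-magnitude point can be checked with one line of arithmetic; in the R sketch below the up-quark mass is an illustrative PDG-scale value of our own choosing, not the (4.11) value from [2]:

v_F <- 246.219651   # Fermi vev, GeV (value quoted in the text)
m_u <- 2.2e-3       # assumed up-quark mass, GeV (illustrative, not from [2])
sqrt(m_u * v_F)     # ~0.74 GeV, the same order as M(p) ~ 0.938 GeV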
An Unbroken SU(8) GUT

In Section 7 of [1], we demonstrated that at ultra-high GUT energies the proton is part of a larger gauge group which we dubbed the SU(4)_P "protium" group, which includes the proton and the electron, and that the neutron is similarly part of a larger gauge group we dubbed the SU(4)_N "neutrium" group, which includes the neutron and the neutrino. As we then showed in Section 8, and specifically (8.1) of [1], these two groups are broken by a vacuum oriented along the B - L generator. A GUT takes the known phenomenological groups and fermions and tries to find larger simple groups G which embed all of these and their associated fermions. The SU(5) model of Georgi-Glashow [3], reviewed at some length in Section 8 of [1], is a paradigmatic example. Here, we shall start with SU(4)_P and SU(4)_N, which we know lead to accurate binding energy predictions, and seek to construct a larger simple gauge group which includes these two groups, and which also encompasses the usual phenomenological gauge group SU(3)_C × SU(2)_W × U(1)_Y.

The group we shall choose? SU(8). This is a larger group than SU(5), but as we shall see, it brings with it numerous benefits, including: 1) the ability to accommodate a non-zero neutrino mass and thus right-handed chiral neutrinos, which are omitted from SU(5); 2) the ability to accommodate all flavors and colors of fermion, as well as protons and neutrons, all in the fundamental group representation (SU(5) splits the fermions into a fundamental 5 and a non-fundamental 10 representation while omitting the right-chiral neutrino); 3) the ability to accommodate different left- and right-handed chiral projections with respect to weak hypercharge Y and weak isospin I_3, for all fermions; 4) a solution, at long last, to the mystery of fermion replication into exactly three generations; and 5) interaction generators that may well be associated with gravitation, based on the manner in which the elusive neutrino stands alone with respect to all other fermions by having an exceedingly tiny mass that is orders of magnitude smaller than the masses of the other fermions, and based on the ability to finally understand the origins of fermion generation replication.

We construct this SU(8) group as follows. SU(8) of course contains seven diagonalized 8 × 8 generator matrices, so rather than take up visual space with seven 8 × 8 matrices in which all but the diagonal elements are zero, let us construct this group using the tables below, which convey the same information more compactly in an easier-to-follow form.

First, as just noted, the electric charge generator is one linear combination of the diagonal generators for SU(4)_P, while it is a different linear combination for SU(4)_N; tabulating the fermions against these generators, we end up with Table 1 below.

In Table 1, the remaining generators Q, Y_R and Y_L are all linear combinations of the first three generators, and so provide no additional degrees of freedom, while I_3R = 0 can be trivially obtained from any other generator using the coefficient 0. We shall wish, in the course of our analysis, to maintain a focus on the independent degrees of freedom.

Table 1. Fermions and generators of SU(4)_N and SU(4)_P.
(The columns of Table 1 list the linearly independent degrees of freedom and the linear combinations formed from them.) What makes the upper neutrium quadruplet not unified with the lower protium quadruplet is the fact, as mentioned above, that although all the other generators have the same form (i.e., are invariant) between the upper and lower quadruplets, as denoted by the "dittos", the electric charge generators are defined by different linear combinations. So electric charge Q is not an invariant as between these two quadruplets. It is worth noting that for all of these fermions Y_L = B - L, so Y_L is not itself a generator which is linearly independent from λ_15.

The one generator that we do not see explicitly represented in the above, of course, is the weak isospin generator I_3, which is ordinarily used in combination with Y_L (which happens in all cases to be equal to B - L) to specify the electric charge, as is ordinarily done in electroweak theory. Then, having Q in hand, and given I_3R = 0, the right-chiral assignments follow as well.

So we now take Table 1 above, introduce all seven of the SU(8) diagonalized generators with the usual trace normalization, and specify suitable linear combinations of these. Then, we review not only how this accommodates the fermions and generators in Table 1 above, but also the new interaction generators that are introduced and their possible physical significance. Aesthetically, it is very simple and natural for the eight fundamental flavors and colors of fermion per generation, the neutrino, the electron, and three colors each of up and down quark, to each be made a member of the fundamental representation of SU(8). And, because one does have eight fermions in nature (per generation), a natural question is: why not use SU(8)? Sometimes, what appears to be the simplest approach really is the simplest approach, and leads to the best results, and we don't have to try to unnaturally "squish" eight fermions into a smaller group like SU(5) and then lose the right-chiral neutrino and split the representations.

In this regard, the question we shall explore largely throughout the rest of this paper, which is one of the reasons why one might not use SU(8), is whether SU(8) is simply too large and can or ought to be made smaller. (We shall answer this question, "no"!) By "too large," we refer not to aesthetics, but to superfluity: does this group introduce any extra, superfluous particles or interactions which simply do not appear anywhere in the natural world? Put concisely, the underlying question is this: is SU(8) sufficient, and is everything in SU(8) necessary? Does it yield everything, and not one iota more? (We shall answer these questions, "yes"!)
Specifically, in going from two disjoint SU(4) groups in Table 1 to one unified SU(8) group in Table 2, we have gone from three independent diagonal generators λ_3, λ_8, λ_15 to seven. Out of the four new generators, we have left three, λ_63, λ_48, λ_35, in their "native" form without alteration, pending further exploration of these generators below. The fourth new generator, λ_24, we do not show explicitly. Rather, we use the degree of freedom provided by λ_24 to introduce the left-chiral weak isospin generator I_3L, which we define in (4.1) as a linear combination of six of the seven "native" generators. One can readily check, as in Table 2, that this reproduces the required weak isospin assignments.

So we use (4.1) and (4.2) above to account for the two linearly-independent degrees of freedom in λ_24 and λ_15. It is easy to check, as in Table 2, that the eigenvalues come out as required. Similarly, we cannot use the native generators directly for the remaining assignments, and so define a further combination in (4.3); as required from Table 2, a check finds that its eigenvalues also match. Finally, and similarly, we need to define a λ_3 combination according to (4.4), as in Table 2. The foregoing, (4.1) through (4.4), account for four of the seven linearly-independent degrees of freedom in SU(8). We have yet to explore the three native-form generators λ_63, λ_48, λ_35.

From here, we define several other generators which are linear combinations of (4.1) through (4.4). First, via (4.2), we define the left-chiral hypercharge Y_L in (4.5), which happens to be exactly equal to B - L in (4.2) and so is not linearly independent. But Y_L is chiral, i.e., it applies only to left-chiral projections. Next, we use (4.5) and (4.1) to define the electric charge generator in the usual manner, via (4.6), Q = I_3L + Y_L/2. One can check, as required by Table 2, that the expected electric charges emerge. In the third expression we make use of I_3R = 0, to show by way of contrast that Volovok's Equation (12.8) in [5] also leads, via a different route, to the exact same electric charge assignments.

Next, we formally specify in (4.7) that the right-chiral weak isospin generator is to be zero for all the fermions, so that only left-chiral particles will interact weakly. At the same time, we insist that the electric charge generator is to be defined chirally symmetric for all fermions. This chiral insistence, together with (4.6) and (4.7), finally leads to (4.8) for the right-chiral hypercharge Y_R.

So at this point, all of the known quantum numbers of the fermions are fully specified, including the left- and right-chiral projections for Y and I_3. The fermions all reside in the fundamental representation of SU(8), and the proton and neutron are represented as well in the way that we have ordered the fundamental representation. And, while all of the foregoing certainly accounts for the observed fermions and their quantum numbers, we still have three extra linearly-independent degrees of freedom, which we can and do choose to associate with the generators λ_63, λ_48, λ_35 that we have left in their native state.

Now we return to the critical question: with these three apparently superfluous degrees of freedom, does SU(8) provide too much freedom? Does SU(8) provide more than what is necessary? Might we find some way, in the spirit of Georgi-Glashow SU(5), to "squish" these fermions into a smaller group and take away some of this apparently-superfluous freedom? The answer is, no! And the reason is that this extra freedom is not superfluous, but is actually fully accounted for in the known particle phenomenology, and particularly, in the odd quirks of the neutrino and in the replication of fermion generations. Let us see how.
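For readers who want to see the "native" diagonal generators concretely, the sketch below constructs the standard Cartan-type diagonal generators of SU(8) in R, normalized so that Tr(T^2) = 1/2; we assume this convention here because it reproduces the λ_63 eigenvalues 1/(4√7) and -7/(4√7) quoted in the next passage (the ordering of the diagonal entries is conventional):

diag_gen <- function(n) {
  # n entries of 1, one entry of -n, zero-padded to 8 entries, scaled so Tr(T^2) = 1/2
  d <- c(rep(1, n), -n, rep(0, 7 - n))
  sqrt(1 / (2 * n * (n + 1))) * diag(d)
}
Ts <- lapply(1:7, diag_gen)
sapply(Ts, function(Tm) sum(diag(Tm)))         # tracelessness: all 0
sapply(Ts, function(Tm) sum(diag(Tm %*% Tm)))  # normalization: all 0.5
diag(diag_gen(7))  # entries 1/(4*sqrt(7)) and -7/(4*sqrt(7)), the lambda_63 values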
First, the neutrino. One of the very perplexing features of the neutrino is that it has almost no mass, and is maddeningly elusive. While the electron and the quarks do have different masses from one another, the neutrinos are in a league of their own, by orders of magnitude. The neutrino mass is almost zero, which means that it travels at very close to the speed of light. Because of the equivalence of gravitational and inertial mass, the fact that the mass of the neutrino is so very different from that of all the other fermions means that, in some rough manner of speaking, it is gravitating differently as well. For example, the mass of the electron's neutrino is less than 2 eV [6], while the electron itself has a mass of about 511 keV, which is over 250,000 times as large. This is a far more extreme disparity than the relationships between the quark masses and the electron mass based on the quark masses arrived at in (10.3) and (10.4) of [2]. This appears to make the neutrino qualitatively different from all the other fermions, and we need to pinpoint the origins of this difference.

Now consider λ_63 in Table 2, and the fact that its eigenvalue is 1/(4√7) for all of the up and down quarks and the electron, but -7/(4√7), a completely different value, for the neutrino. Not only is the magnitude different by 7 to 1, but even more importantly, the sign is different. Indeed, that is why we chose to place the neutrino as the very top member of the SU(8) fermion octuplet. That means that the neutrino will interact completely differently under the interaction associated with λ_63, whatever that interaction may be, from any other fermion. But if there is any interaction under which the neutrino behaves differently from all the other fermions, it is the gravitational interaction, because the most pronounced way in which the neutrino differs from the other fermions is via its ghostly mass and thus its ghostly way of gravitating. Further, we know on general principles that for any Yang-Mills gauge group which unifies gravitation with the other three interactions, there will have to be at least one degree of freedom given to the gravitational interaction. The only question is where and how this appears. So, we now make a preliminary association of the λ_63 generator with a degree of freedom for a gravitational interaction, and we do so in a way that bakes in, for the neutrino, an entirely different way of gravitating, and thus of displaying its mass, than any other fermion.
So, now we have accounted, at least in a general way (which we shall seek to deepen in the upcoming discussion), for all four of the known interactions, but we still have two more degrees of freedom unaccounted for, namely, those provided by λ_48, λ_35. What are we to make of these? This brings us again to the question: does this not give us too much freedom? And again, the answer is, no! We still have to account for the replication of fermions into three generations, which is another oddity of the material world almost as mysterious as the oddities of the neutrino just discussed. Let's ask the question directly: even if λ_63 is related to gravitation and can explain why the neutrino behaves so differently from all the other fermions, do λ_48, λ_35 provide too much freedom? Given that we are seeking an explanation of the three fermion generations, and given that those two extra generators provide precisely the freedom needed to allow each particle to exist in one of three additional horizontal generational states, then perhaps these are not superfluous after all, but are instead the source of the generations. In that case, SU(8) becomes a perfect fit: large enough to accommodate all that is observed, including the idiosyncratic behavior of the neutrino and the replication of fermion generations, and not one bit larger, so as to contain nothing superfluous that is not observed. So in Table 3 below, we use a schematic symbol as a visual shorthand for the generational structure of Figure 1, in the form of Table 3 as shown below.

Now, in Table 3, SU(8) has nothing superfluous: all eight fermions are represented with both left- and right-chiral states, and each can exist in one of the three horizontal generation eigenstates e, μ, τ. We see that there are now four vertical interactions: 1) the strong QCD interaction with three color states and two generator degrees of freedom λ_3, λ_8; 2) the weak isospin interaction represented by I_3L, to which the electromagnetic interaction of (4.6) is linearly related; 3) the hypercharge interaction Y_L = B - L; and 4) λ_63, providing a degree of freedom for a gravitational interaction, under which all fermions except the neutrino interact in one way, and under which the neutrino acts in a very different way, in a league by itself. This is the unbroken GUT group that seems best situated to fully accommodate not only all the known fermions and interactions and their key phenomenological properties, but also the Yang-Mills magnetic monopoles which we now know are baryons, and which are very naturally grouped in this way of representing SU(8).

Spontaneous Symmetry Breaking of SU(8) at the Planck and GUT Energy Scales, and the Emergence of Fermion Generations and Fermion Mass Degrees of Freedom

In Section 8 of [1], we reviewed spontaneous symmetry breaking in the Georgi-Glashow SU(5) model, to provide a backdrop for breaking the protium group and the neutrium group via the B - L generator.
This of course led to stable protons and neutrons, and later to the several accurate empirical binding energy predictions already noted. Here, we review a similar symmetry breaking based on the SU(8) group developed in the previous section. Specifically, we review three symmetry breaking operations: a first symmetry breaking operation using the contemplated "gravitational" generator λ_63 at or near the Planck scale; a second symmetry breaking operation using the Y_L = B - L generator at an ultra-high GUT energy, perhaps in the 10^15 GeV vicinity; and a third break of the electroweak symmetry at the Fermi scale using the electric charge generator Q. It is this third symmetry breaking that we hope to use to accurately predict the proton and neutron masses, as discussed in Section 3 and highlighted in (3.6) to (3.8). But to set the context, let us start with the first two high-energy symmetry breaking operations.

If λ_63 is indeed a gravitational generator, then its mass scale will be at or near (within an order of magnitude of) the Planck mass, which at roughly 10^19 GeV is nineteen orders of magnitude larger than the proton mass. It is theorized that at this energy there is a violent sea of vacuum perturbations, and two of the best references for reviewing this understanding are [7,8]. We shall examine all of this more closely in the next section as well.

Without yet going through all the details in this pass, if we employ the Lagrangian (3.2) and specify a Planck vacuum φ^i, i = 1, ..., 63, we may break symmetry at or near v_P ≈ M_P using the λ_63 generator, as in (5.1). (Again, we are not concerned here with the exact relationship, which is why we use ≈ rather than =, but rather with an order-of-magnitude examination of the qualitative features of this symmetry breaking.) This would immediately set the neutrino, which is the top member of the elementary fermion octuplet, on a course to behave differently from all the other particles. If λ_63 is indeed a gravitational degree of freedom, a notion we began to entertain in the last section, then it makes sense to regard the degree of freedom that λ_63 provides as a freedom associated with the rest mass of the fermion, i.e., as a vertical mass degree of freedom. So with symmetry breaking of the neutrino from all the other fermions at the Planck scale, right below the Planck scale all of the fermions except the neutrino would have one mass, and the neutrino would have a different mass. Most notably, the neutrino would have an oppositely-signed generator eigenvalue from all of the other seven fermions, which we shall revisit in the next section. Thus, the neutrino can be expected, right from the start, to behave very uniquely as regards its mass, and as regards how it gravitates. This could be a root cause of the observed relationships between the quark masses, the electron mass and the neutrino mass; we expect this to be more than just "screening adjustments" as we go from high to low energies. We expect this to be "baked in" to the underlying structure of the GUT gauge group right from the start.
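To make the shape of this Planck-scale vacuum concrete, the following is our own sketch of the structure of (5.1), not a reproduction of it, assuming the λ_63 eigenvalues quoted in the previous section and a neutrino-first ordering of the octuplet:

\langle\phi\rangle \approx v_P\,\lambda_{63}
  = \frac{v_P}{4\sqrt{7}}\,\mathrm{diag}\!\left(-7,\,1,\,1,\,1,\,1,\,1,\,1,\,1\right)

Under this assumption, the neutrino entry alone is both oppositely signed and seven times larger in magnitude, which is exactly the asymmetry the text exploits.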
Moving on, we now venture down to the vicinity of a second symmetry breaking scale, circa 10^15 GeV, where we break the symmetry with Y_L = B - L. Again, we are simply, for the moment, talking about orders of magnitude for this energy scale. In fact, we have already discussed B - L symmetry breaking at some length in Section 8 of [1]. But in that earlier discussion, we regarded SU(4)_P and SU(4)_N as disjoint groups, each breaking down via a homotopy group with stable magnetic monopoles, essentially based on the disjoint analysis conducted in Section 8 of [1]. It is also worth noting, as reviewed in Section 8 of [1], that Georgi and Glashow also break symmetry using the Y generator. So here, we are doing the exact same thing as Georgi and Glashow insofar as using a Y generator to break the GUT symmetry circa 10^15 GeV; we are merely using a different group, SU(8) versus SU(5), with all the fermions in the fundamental representation as shown in Table 2. Now let's proceed.

The group is now SU(8). Exactly as in (8.1) of [1], the vacuum we use is oriented along B - L. Here, however, because of the SU(8) group, the resulting breaking pattern differs from that of Section 8 of [1]: we no longer have two disjoint SU(4) breakings, but a single break of SU(8). One may then employ a procedure such as is outlined in (11.5) and (11.6) of [1] to obtain gauge boson masses in the usual way, and these will have masses on the order of v_GUT. But our interest here is in what happens at lower energies, after this symmetry has been broken, because that brings us into energy ranges which are experimentally observable.

First, by breaking symmetry via Y_L = B - L, whose generator eigenvalues differ as between quarks and leptons, we "fracture" the eight fermions in Tables 2 and 3 into a six-member quark sextuplet and a two-member lepton doublet. A u, u, d triplet is a proton and a u, d, d triplet is a neutron, so this sextuplet may also be viewed as containing the proton and neutron, each carrying an SU(2) factor; and of course the two members of the lepton doublet also carry an SU(2) factor, albeit for a different value of Y_L = B - L than that of the quarks/baryons. This is the well-known "isospin redundancy" that exists between quarks/baryons and leptons, and leads some to consider "preon" models such as that discussed in Section 12 of [5]. Overall, with the detailed interrelationships just noted, we reproduce the phenomenological product group SU(3)_C × SU(2)_W × U(1)_Y following symmetry breaking at v_GUT; all that we have just described should be readily apparent from Tables 2 and 3. But a bonus that we obtain here, which is not obtained in Georgi-Glashow SU(5), is the fermion generation replication. This is how: in SU(5), which is broken using Y, there are four degrees of freedom based on the linearly-independent generators T_3, T_8, T_15, T_24. After symmetry breaking there are still four degrees of freedom; they are merely reshuffled into λ_3, λ_8, I_3 and Y. None of these degrees of freedom disappear after symmetry breaking; they simply sit across one another in several "irregular" linear combinations.
Here, however, in going from SU(8) down to the broken product group, two "vertical" degrees of freedom "disappear", because SU(8) has seven diagonalized generators while SU(6) has only five, and the separate B and L numbers are all part of a single degree of freedom represented by Y_L = B - L. But this reduction by two in the degrees of freedom cannot vanish into thin air; it must show up in some other way. That is, following symmetry breaking using Y_L = B - L, there are two free-floating degrees of freedom from λ_48, λ_35 that have become decoupled from the remaining five vertical degrees of freedom. But, as shown in Figure 1, these free-floating degrees of freedom have precisely the properties needed to create a new horizontal freedom with exactly three states. So we label these three states e, μ, τ as in Figure 1, we associate this with the fermion generation replication, and we therefore make a carbon copy of each fermion in triplicate, using the conventional symbols u, c, t and d, s, b for the quarks, together with e, μ, τ and their neutrinos for the leptons. The fermions across generations are distinguished only by their mass values, and so, apparently, it is the free-floating generators λ_48, λ_35 which provide the horizontal fermion mass degrees of freedom enabling each fermion of a given type to take on one of three mass values. Thus we may formulate Table 4 below.

Studying Table 4 and the above comments about the generational mass freedom, we can now better develop our understanding of the so-called gravitational degree of freedom λ_63, which we discussed a short while ago in relation to (5.1). Whereas λ_48, λ_35 provide freedom for the fermions of any given type to take on one of three mass values, we also need a degree of freedom for each of the four basic fermion "prototypes" ν, e, u, d to have different masses within a single generation, as is also clearly observed. This, in fact, is the role of λ_63. Given that the λ_63 eigenvalue of the neutrino is oppositely signed, while that of all the other fermions is 1/(2√28) with 1/7 the magnitude, the fact that all fermions but the neutrino share the same λ_63 eigenvalue tells us that at the Planck scale all of the e, u, d have the same mass, and that the differences among these masses that we detect at observable energies stem from the differences introduced by the other vertical generators.

Finally, what this tells us is that in order to ascertain an answer to the question "why do the fermions have the masses they have?", the theoretical answer is this: follow the λ_63, λ_48, λ_35 generators; understand how λ_48, λ_35 separate out and start to act horizontally at v_P and v_GUT; and understand how the masses evolve as one moves downward in energy from there toward the masses we do observe in the laboratory. In this regard, if λ_63 is used to break symmetry at or near the Planck scale as in (5.1), then we immediately see a break with the neutrino fractured from all the other fermions. So, we already lose one vertical generator, which we take to be λ_48, which decouples and becomes horizontal. Thus, below the Planck scale but above the GUT scale, we would expect to see two fermion generations. Then, as we pass downward through the GUT scale and break the lepto-quark symmetry as in (5.2), we drop down further, and now two of the generators have decoupled from vertical to horizontal, giving rise to a third generation. It would therefore make sense to believe that the observed substantial variation from first- to second-generation masses, and then again from second to third generation, has its origin in this sequential breaking of symmetry that starts with one generation at the Planck scale, turns into two
generations between the Planck scale and the GUT lepto-quark scale, and turns into three generations below the GUT scale. At each scale, as one "cools down," the masses become "frozen," in a manner of thinking. And it would seem to make sense, given their relatively larger masses, that the high-mass fermions of the third generation emerge near the GUT scale, and that the $\nu_e, e, u, d$ which predominate and are the ground states at observable energies are the last generation to emerge, below the GUT scale. What happens at each symmetry-breaking stage is that the one (or two) generations which exist before symmetry breaking "spin off" a portion of their mass to make two (or three) fermions when the generators decouple. That is, for example, what is "one electron" above the Planck scale has to become "two electrons" below the Planck scale, and these then have to further turn into three electrons below the GUT scale, at the same time that the generators are decoupled. One final point before concluding this section pertains to chiral symmetry. Because the left-chiral generator $Y_L = B-L$ applies to all fermions, at the same time that we break symmetry at the GUT energy using (5.2) and (5.3), we have also forced a breaking of chiral symmetry. That is, the weak interactions start to become chirally non-symmetric at the GUT scale, as part and parcel of the $Y_L = B-L$ breaking. Baryon and meson physics, by contrast, is endemically, organically non-chiral, with the Dirac algebra being the mainspring: via what may be thought of as Dirac's "quinternian" progression $\gamma^5$, any time one has what looks like a "vector" object from one viewpoint, one can use $\gamma^5$ to create an "axial" object from another "dual" viewpoint, and "vector" and "axial" turn out to have a duality relationship that is integral to the Dirac algebra, all using "duality" based on the work of Reinich [9] later elaborated by Wheeler [10] which uses the Levi-Civita formalism (see [11] at pages 87-89). So given the degree to which baryon physics is fundamentally non-chiral courtesy of a Dirac algebra for which $\gamma^5$ is as integral to fermion physics as $\epsilon_{ijk}$ is to spatial rotations, it makes perfect sense that as soon as protons and neutrons are crystallized into being as stable magnetic monopoles by $Y_L = B-L$ symmetry breaking, we also bring about the non-chiral nature of the weak and weak hypercharge interactions.

The Geometrodynamic Planck Vacuum, and What Makes the Neutrino Different (or, Let's Finally Catch that Mischievous Neutrino)

With all that we have learned, we now turn to the Planck vacuum; see [7,8], where this is developed in detail. It is also well-understood that energy fluctuations of this magnitude on such a small scale do have the effect of topologically creating microscopic black holes, also called wormholes, with a Schwarzschild radius at or near the Planck length. Let us now take a closer look at exactly what is believed to occur at this scale. Again, along the lines discussed in Section 2, it is unlikely that humans will ever be able to directly observe physics at the Planck length, but the development of such physics in the context of a GUT may lead us to low-energy mass and energy predictions which, if they accord with empirical data, could then give us some confidence that the GUT which leads to such accord is also describing the Planck-length physics "behind the veil" with some semblance of accuracy.
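Before reviewing Wheeler's algebra in detail, it is useful to fix the numerical scales involved. These are standard textbook values, not results of this paper:

\[
M_P = \sqrt{\frac{\hbar c}{G}} \approx 2.18 \times 10^{-8}\ \mathrm{kg} \approx 1.22 \times 10^{19}\ \mathrm{GeV}/c^2, \qquad
\bar{\lambda}_P = \frac{\hbar}{M_P c} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.62 \times 10^{-35}\ \mathrm{m},
\]
\[
E_P = M_P c^2 \approx 1.96 \times 10^9\ \mathrm{J}, \qquad
-\frac{G M_P^2}{\bar{\lambda}_P} = -E_P, \qquad
r_S = \frac{2 G M_P}{c^2} = 2\bar{\lambda}_P \approx 3.2 \times 10^{-35}\ \mathrm{m}.
\]

The exact cancellation $-G M_P^2 / \bar{\lambda}_P = -E_P$ is the algebraic heart of the "vacuum on average" picture reviewed next.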
When Wheeler talks in his seminal work [8] about the geometrodynamic Planck vacuum, the vacuum he envisions is constructed from a series of simple algebraic calculations with which it is important to be familiar. So let us review those here. First, Newton's law of gravitation $F = Gm_1m_2/r^2$ has a numerator $Gm_1m_2$ which, for suitable masses, has the same dimensions as the natural constant $\hbar c$. So the Planck mass $M_P$ is defined as the unique, natural mass unit formed out of the Newtonian numerator from $G$, $\hbar$ and $c$, namely $M_P = \sqrt{\hbar c / G}$. For the analogous Fermi case one has $\sqrt{2}\,G_F v_F^2 = (\hbar c)^3$, with the $\sqrt{2}$ having historical origins based on how $G_F$ was first defined before electroweak interactions were well-understood; comparing "apples to apples," the correspondence is between $G$ and $G_F$. The reduced Compton wavelength of a Planck mass (6.1) is easily calculated to be $\bar{\lambda}_P = \hbar / (M_P c) = \sqrt{\hbar G / c^3}$, which is the Planck length. The Newtonian gravitational energy of two Planck masses separated by this length is $-G M_P^2 / \bar{\lambda}_P = -\sqrt{\hbar c^5 / G}$. But this is simply the negative of the Planck energy! So as Wheeler first surmised, a collection of Planck mass fluctuations (on average) separated by the Planck length (on average) averages out to be a vacuum, because the negative gravitational energy precisely cancels the positive Planck energies which are posited in the first place, on average. Nonetheless, in very localized regions on the order of $\bar{\lambda}_P$, there are very violent fluctuations of very high energy occurring. This is the so-called "geometrodynamic vacuum." It is also important to note that the Schwarzschild "black hole" radius for a (non-rotating) Planck mass may be calculated to be $r_S = 2GM_P/c^2 = 2\bar{\lambda}_P$. Because the black hole radius is twice as large as the Planck length, this means that all of these fluctuations are occurring out of sight, behind a black hole horizon. On top of this, Hawking [12] teaches, seventeen years after Wheeler's initial elaboration of the geometrodynamic vacuum and based on general relativistic gravitational theory, that black holes emit a blackbody radiation spectrum. So if we recognize that the Planck vacuum is a vacuum in which the masses on average are Planck masses separated on average by the Planck length, and then like any good student of statistics we ask the natural follow-up question "what is the actual statistical distribution of these energies about the average?", Hawking provides a clear answer: because these fluctuations are occurring behind an event horizon, the distribution is observed externally to the event horizon as a thermodynamic, blackbody spectrum. It would also make sense, therefore, to consider the prospect that when we observe blackbody radiation in the natural world, we are in fact observing a gravitational phenomenon from the Planck vacuum screened through over twenty orders of magnitude, which would render the blackbody spectrum that kicked off the quantum revolution in 1901 [13] a consequence of gravitational theory. So much for disunion between gravitational theory and quantum theory! But returning to GUTs, the Wheeler vacuum also teaches us something about the generator $\lambda_{63}$, which we are associating on a preliminary basis with gravitation, and it is this: One may look at the Planck vacuum in one of two entirely equivalent ways. First, one can say that there are a tremendous number of fluctuations with positive energy on average, separated by $\bar{\lambda}_P$ on average, thus giving rise to an equal amount of negative gravitational energies on average, thus resulting in a vacuum on average, which has a gravitational blackbody distribution of energy when viewed from outside the event horizon, and which is redshifted as our observational perch recedes to that from which Planck first characterized this distribution. Second, one can start with negative energy
fluctuations, separate them by $\bar{\lambda}_P$, and they will gravitate to produce positive energy fluctuations. Each way of looking at this is equally valid. It is a "chicken and the egg" question. One can develop an equally sensible description of the exact same physics no matter where one starts: positive Planck masses producing negative gravitational energies, or negative Planck masses producing positive gravitational energies. It does not matter. These are two alternative descriptions of exactly the same thing. Now, let's talk about specific fermions, such as the $(\nu, e; u_R, u_G, u_B; d_R, d_G, d_B)$ of our SU(8) GUT group. How do these actually take root in the vacuum? How are they "conceived" and "born"? Through the lens of 1957, referring to electromagnetic charge $Q$, Wheeler says in [8] that "classical charge appears as the flux of lines of force trapped in a multiply connected metric ... trapped by the topology of the space." In other words, charge gets "trapped" in the black hole wormholes. Updating this with all that we have learned in the intervening half century, especially about Yang-Mills gauge theories and how charges such as the electric charge arise from the generators of Yang-Mills theory, we might say that these Planck-mass fluctuations "trap" the Yang-Mills internal symmetries (which include the electric charge), and that this is how particles are "born." Or, in the parlance we introduce here, the physical fermions $\nu, e, u, d$ arise when a Planck-scale fluctuation is "fertilized" by the Yang-Mills generators of internal symmetry. So a neutrino $\nu$ is conceived when a fluctuation with Planck mass magnitude is fertilized by the generator eigenvalues in Table 2 corresponding to the neutrino. The same holds true for the up quark (in three colors), the down quark (in three colors) and the electron. Then, as Wheeler points out, the particles we observe from twenty orders of magnitude lower have had all but the most minuscule portion of their original $\sim M_P$ masses cancelled/averaged out by the positive and negative energy fluctuations of the vacuum, leaving behind only a small mass residue which results from the trapping of the field lines, i.e., from the fertilization. Those are the particles and masses we observe. But if the Planck vacuum raises a chicken and the egg question, the next question is this: how does nature decide whether the egg comes first or the chicken comes first? Does nature fertilize the positive energy fluctuations into observed particles, or the negative energy ones? Or, might she fertilize both? And what would a fertilized positive energy fluctuation look like, versus a fertilized negative energy fluctuation? And, fundamentally, how is this precisely-balanced positive versus negative energy symmetry in the Planck vacuum broken, in favor of the very minuscule (relative to the Planck vacuum) preponderance of positive energy over negative energy that we observe in the material universe?
Now our generator $\lambda_{63}$ provides the critical clue: If this is a gravitational generator as we have begun to surmise, and if this generator is actually used to break symmetry at or near the Planck energy as in (5.1), and given that this is the energy at which gravitation is dominant as is clear from (6.1) through (6.4), then this generator will have a great deal to do with how the Planck vacuum first gets fertilized to produce what we observe. So the gravitational charge of the neutrino being of opposite sign from the gravitational charges of all the other fermions suggests that perhaps neutrinos are fertilized negative energy Planck vacuum fluctuations, and the up and down quarks and the electron are all fertilized positive energy Planck vacuum fluctuations. Not only would this neatly resolve the chicken and egg problem, but it would explain many other things as well, especially about the ever-elusive neutrino. First, this would truly place neutrinos in a class by themselves. They would be born of negative energy Planck scale fluctuations, brought about via the gravitational interactions of positive energy Planck scale fluctuations. Other fermions are rooted in "Planck matter"; neutrinos are rooted in "Planck gravitation." Second, above the Planck energy, behind the event horizon, we would expect there to be a complete symmetry among all of the octuplet members $(\nu, e; u_R, u_G, u_B; d_R, d_G, d_B)$. Any one fermion can readily decay into any other, and all would exist in equal numbers as part of an octuplet set. Thus, any time there is a neutrino, there are also seven other fermions to go along with that neutrino. Then, after we break the symmetry and the neutrino hooks up with negative energy fluctuations and the other seven fermions hook up with positive energy fluctuations, we would have a seven-to-one ratio of fermions which are rooted in positive energy fluctuations over fermions rooted in negative energy fluctuations. So as we reached lower and lower energies, there would be a net dominance of positive energy-rooted fermions over negative energy-rooted fermions. As such, this could help to explain how the positive versus negative energy symmetry of the Planck vacuum becomes broken. This is especially so given the fact that at low energies the neutrino masses become so very much smaller than all the other fermion masses.
Third, while we conventionally hold to the view that all matter gravitates the same way as all other matter, this would tell us that this conventional wisdom holds true for all matter except the neutrino. Below the Planck scale, the neutrino would fundamentally be a fermion rooted in negative energy fluctuations, while all of the other fermions would be rooted in positive energy fluctuations. This could certainly provide some degree of confidence that as we start to trace the development of the fermions from the Planck scale down to the laboratory scale, we are proceeding along sensible lines. Further, if the neutrino gravitates differently from every other fermion (which we shall explore even further in the next section), then its elusive, idiosyncratic behaviors may be much better understood. From a technology viewpoint, this also suggests that if one ever hopes to develop technologies to "shield" gravitation or overcome gravitational attraction other than by the brute force of rocket propulsion, the neutrino would be central to that undertaking. Harvesting and controlling the elusive neutrino, however, would be the core technology challenge. And, since neutrinos do exist throughout the universe, as elusive as they may be, this would also mean that cosmological theories based on the supposition that all matter gravitates in relation to all other matter in exactly the same way would have to be modified to recognize that the neutrino defies this supposition. As a consequence of the foregoing, let us now choose a negative gravitational charge for the neutrino to go with the negative energy fluctuations, as a matter of convention. Then, let us introduce the hypothesis, which needs to be borne out through detailed calculation of its consequences, that the neutrinos are in fact conceived at or near the Planck scale when negative energy gravitational fluctuations in the Planck vacuum become fertilized with the negative gravitational charge of the neutrino. And in this regard, choosing the convention of a negative gravitational charge for the neutrino to go with the negative Planck energy fluctuations, we now explicitly define a gravitational interaction generator $G = -\lambda_{63}$. We may find occasion to adjust the coefficient $\frac{1}{2\sqrt{28}}$ as we calculate from this point forward, but this sign reversal, and the identification of $\lambda_{63}$ with a gravitational generator $G$, makes clear 1) that the neutrino is understood to gravitate differently than all the other fermions, as we shall further examine in a moment, and 2) that the neutrino is rooted in negative energy Planck fluctuations while all the other fermions are rooted in positive fluctuations. Or, as Wheeler might say, the neutrino lines of force are trapped in negative energy topological wormholes, and the quark and electron lines of force are trapped in positive energy topological wormholes.
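As a concrete sketch of this definition, and assuming an ordering of the octuplet as $(\nu, e, u_R, u_G, u_B, d_R, d_G, d_B)$ with the neutrino in the first slot (this ordering is our assumption, since Table 2 is not reproduced here), one consistent realization is:

\[
G = -\lambda_{63} = \frac{1}{2\sqrt{28}}\,\mathrm{diag}(-7, 1, 1, 1, 1, 1, 1, 1),
\]

so that $G(\nu) = -\frac{7}{2\sqrt{28}}$ while $G(f) = +\frac{1}{2\sqrt{28}}$ for the seven other fermions, with $\mathrm{Tr}\,G = 0$ and $\mathrm{Tr}\,G^2 = \frac{1}{2}$ preserved. Whatever the ordering convention, the essential content is this traceless seven-to-one split of the gravitational charge.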
Spontaneous Symmetry Breaking, Fermion and Generator Fractures, and Intergenerational Cabibbo Mixing of Left-Chiral Hypercharge Doublets

As we now return to spontaneous symmetry breaking, it will be important to develop an understanding of what we shall call "fermion fractures" and "generator fractures." While the fermion fracturing we are about to describe may already be implicitly understood as a feature of spontaneous symmetry breaking, it is important to make this understanding explicit, as this will play a crucial role in understanding generation replication, and especially the Cabibbo mixing which for leptons leads to so-called neutrino oscillations (which have been largely responsible for demonstrating that the neutrino does have some tiny mass, contrary to what may have been believed two or three decades ago). When a gauge group has not been broken at all, and assuming that fermions have been assigned to the fundamental representation of that gauge group, then any one fermion is completely free to decay into any other fermion. SU(3) QCD provides a good example of this. As we can see from Table 1, or as will be understood in any event, there are three color eigenstates R, G, B. The symmetry is not broken, so any of these eigenstates may freely decay into any other one of these eigenstates, even though their quantum numbers are different. For example, all three color states R, G, B have the same baryon number, yet they freely transition among themselves, which is central to QCD interactions. Similarly, as just discussed, above the Planck scale any fermion may transition into any other fermion. Once a symmetry is broken, however, some fermions become "fractured" from some other fermions, and they are forbidden from decaying into one another except under very limited conditions. It is these limited conditions which are of central interest in the discussion following. Let us first break the symmetry of SU(8) at the Planck scale using (5.1), which we recast in light of (6.5) in terms of the gravitational generator $G$. What then happens? Of course, similarly to what was discussed in Section 8 of [1], the vacuum commutes with the unbroken generators, $[\langle\phi\rangle, \lambda_a] = 0$ for $a = 1, \ldots, 48$. It also commutes with $G$ itself, that is, $[\langle\phi\rangle, G] = 0$. But our real interest here is to look at the fermions themselves.
The neutrino, with $G = -\frac{7}{2\sqrt{28}}$, becomes fractured from all the other fermions with $G = +\frac{1}{2\sqrt{28}}$, and can no longer decay into any of these other states via the generator $G$ that was used to break the symmetry. It would be as if the red quarks in QCD were suddenly forbidden from decaying into green or blue quarks (but of course they can do so, because the QCD symmetry is never broken). If $G$ is a gravitational generator, then the neutrino can no longer undergo a gravitational decay through $G$ into any other fermion. What does that mean? The neutrino will no longer gravitate with any other fermion except for another neutrino! But, and this is critical, it may still undergo other types of decay through the generators of other interactions. Let's elaborate: If the neutrino is to decay into any other fermion after the symmetry is broken via (7.1), it must decay into a fermion via an interaction governed by an interaction generator other than $\lambda_{63}$ gravitation, such that the fermion has the same charge value under that other interaction generator as that of the neutrino. Referring to Table 2, the neutrino and the electron share the same $Y_L = B-L = -1$, and so form a doublet under $Y_L = B-L$. This latter ability for the neutrino and the electron to decay into one another, as like-charge members of a $Y_L = B-L = -1$ doublet, lasts until the electroweak symmetry is finally broken, at much lower (Fermi vev) energies, into the electromagnetic interaction. Now let's look at the remaining seven fermions. Even after the symmetry breaking (7.1), these fermions are completely free to decay into one another via the gravitational generator $G$, because they are all like-valued $G = +\frac{1}{2\sqrt{28}}$ eigenstates of $G$. They all continue to gravitate with one another, while the neutrino steps aside and stops gravitating with them. Indeed, starting at the Planck scale, and until one drops down to GUT energies on the order of $10^{15}$ GeV, these seven other fermions remain part of an SU(7) septuplet. Since all of these fermions are united by the common characteristic that they are born through the fertilization of positive (+) energy vacuum fluctuations, we shall refer to this group as $SU(7)_+$. Thus, between the Planck scale and the GUT scale, the gauge group is $SU(7)_+$, and the topologically-stable SU(7) magnetic monopoles contain all the fermions of a $^2$H atom. At the GUT scale this is broken down to SU(6), and the decoupling generators do not disappear entirely, but become horizontal as already discussed, in a manner we shall momentarily develop further. As to the remaining five linearly-independent vertical generators in Table 4, the electrons and the quarks still remain a gravitational septuplet and so can still interact gravitationally with one another (while the neutrino does not)! Following the rule that after symmetry breaking the only decays which are permitted are decays under a given generator for which the decaying fermions have a like charge, the remaining decay options are as among members of the quark sextuplet of fermions with $B-L = \frac{1}{3}$, and between the members of the lepton doublet with $Y_L = B-L = -1$; the latter consist of weak decays between the neutrino and the electron. Now, however, most importantly, the quarks have become fully fractured from the leptons, and there is no more decay permitted between quarks and leptons. This is because, referring to Table 4, there is not a single vertical generator other than $\lambda_{63}$ for which any quark shares the same charge as any lepton, so hereafter, the only way for a quark to interact with a lepton is gravitationally. And the neutrino, the odd man out, does not interact gravitationally with any other fermions besides another neutrino,
because its gravitational charge is different from that of all the other fermions, and that gravitational generator was used to break the Planck symmetry. Further, as was developed in detail in Section 8 of [1], the breaking of $B-L$ also creates stable magnetic monopoles which manifest as protons and neutrons forming $(p, n)$ doublets with $B = 1$. So this is also the symmetry break at which protons and neutrons are born. And, with $Y_L = B-L$, as noted at the end of Section 5, the weak interaction becomes chirally non-symmetric, to go along with the chiral non-symmetry of baryon interactions as discussed in Section 5 of [1]. So the $B-L$ symmetry breaking is responsible for several interrelated phenomena: it brings about the three generations observed at low energy, it brings about protons and neutrons, it forecloses lepto-quark decays, and because $Y_L = B-L$, it brings about the broken chiral symmetry of the weak interactions. Now, at some level, everything discussed so far in this section about fermion fracturing due to symmetry breaking restates what is likely obvious, because it is known that one of the very basic consequences of symmetry breaking is that it forecloses certain decays which are permitted to occur in the higher state of symmetry before the symmetry is broken. From a thermodynamic view, it "freezes out" certain transitions below a certain critical temperature (recognizing too that some symmetries are not broken but are actually restored on the opposite end of the scale, near absolute zero, where electrons are superconducted freely without any apparent friction from the protons and neutrons from which they separate at GUT energies, which suggests that superconductivity may well be a phenomenon in which the SU(7) symmetry between electrons and quarks is restored so electrons can flow through, rather than around, protons and neutrons). But the reason for focusing on fermion fracturing in this way is because we will now venture into the not-obvious realm of generation replication and apply these observations to understand what happens there as well. If the rule is that after symmetry breaking fermions can only decay into other fermions with like charges under some interaction that was not used to break the symmetry, then what happens to the horizontal generators $\lambda_{35}$ and $\lambda_{48}$? These generators now do yield the SU(3) configuration shown in Figure 1, albeit with eight eigenstates, five of which are all zero-valued and trivial, and three of which are not. We can now label these three non-trivial eigenstates as $e$, $\mu$ and $\tau$, in (7.4) through (7.6), just as illustrated in Figure 1. However, these are now free-floating generators once the $Y_L = B-L$ symmetry is broken, so they no longer provide vertical symmetry quantum numbers for any of the fermions, as illustrated in Tables 3 and 4. Rather, they appear to provide a replication of each fermion into three generations. But if this is the case, then they should lead to other facets of generation replication as well, including Cabibbo-type mixing, and to the observation that the only way a particle from one generation can transform into a particle of another generation is via left-chiral weak interaction decays from one weak isospin state to a different weak isospin state, and not directly. As we shall now see, this is a consequence of the fermion and generator fracturing highlighted above and the "freezing" restrictions that come into play after symmetry breaking.
Because the generators $\lambda_{35}$ and $\lambda_{48}$ have become fractured from the other generators, and given what we know about the fermion generations from experimental observations, it appears that each of the $e, \mu, \tau$ eigenstates is fractured from one another, so that it is now forbidden for a direct transition to take place between any of the three states (7.4), (7.5), (7.6); i.e., no decays may take place any longer via the fractured horizontal generators. Any decays that do take place must occur via another generator for which the charges are the same as among the fermions involved in the decay. The fermion has to find a "loophole." This is exactly like the discussion we had at the beginning of this section about the neutrino in relation to the remaining fermions from which it becomes fractured at $v_P$, or the fracturing of the quarks from the leptons at $v_{GUT}$. In order to undergo decay into a different fermion, a fermion must find a different generator and a different fermion which has the same charge as the original fermion with respect to that different generator. So for horizontal symmetry transitions, it appears that we have to tighten the rules even further. Specifically, it appears that for a horizontal transition to be permitted, not just one, but all of the vertical degrees of freedom in Tables 3 and 4 must be the same as between the two fermions involved in the decay. Such a transition must occur either as a transition between the first and fifth, second and sixth, third and seventh, or fourth and eighth fermions in Table 3. These are the fermion doublets which share a common set of vertical quantum numbers, (7.7) through (7.10). So referring to these, cross-color transitions among the doublets (7.8), (7.9) and (7.10) are excluded because, although QCD is never broken, the QCD generator eigenvalues are different as among red, green and blue states. If any vertical generators, or any horizontal generators, are different as between two fermions, then based on what we observe, the apparent rule is that the horizontal transition is not permitted. So all that is permitted, the only "loophole" left for decay, are the intra-doublet transitions of (7.7) through (7.10), because these are the only transitions for which all of the generators listed are the same for both fermions. And here, because of the tightened rules when it comes to horizontal transitions based on fractured generators, even the right-chiral generator $Y_R$ is excluded, because this too is not the same as between the members of each of the above doublets. This is why we show $Y_L$ in the above but not $Y_R$. This means only the left-chiral states may participate in transitions among the $e, \mu, \tau$ states in (7.4) to (7.6). Observationally, we know that this is also a characteristic of left-chiral weak generational interactions.
These stronger rules for the horizontal generators may at first seem arbitrary, but they are not. They may be understood because for the horizontal generators, not only are some fermions fractured from other fermions, but the horizontal generators themselves are fractured from the vertical generators. It is the fracturing of both generators and fermions which leads to such stringency. So for a vertical generator that breaks symmetry but is not itself fractured from the other vertical generators, transitions are permitted so long as at least one other vertical generator provides the same charge as between the two transition states. But for a generator which has itself been fractured from the other generators, the rule is even more restrictive. Now, transitions are permitted only if all of the involved vertical generators provide the same charge as between the two transition states. Now, the astute reader may notice that the electric charge $Q$ and left-chiral weak isospin $I_{3L}$ are also not the same as between the two fermions in any of the doublets in (7.7) through (7.10) above: $I_{3L}$ differs as between the members of these doublets, as does $Q$. And so, the question might be asked, why are even these interactions permitted? After all, this changes the generators also, so by these rules, shouldn't this be forbidden also? But further reflection makes this answer clear: the electric charge does not emerge as a physically-preclusive generator until it is used to break the electroweak symmetry at much lower energies determined by the Fermi vacuum $v_F = 246.219651$ GeV. This is the same way in which $B-L$ is not a preclusive generator until it breaks symmetry at GUT energies. So indeed, once we break electroweak symmetry, no transitions are permitted between generations. But at the same time, neither will $\nu \leftrightarrow e$ nor $u \leftrightarrow d$ be permitted, and this is because weak interactions are no longer permitted either (in the historical sense that the weak interaction becomes "weak"). So what we learn from this is that the ability of fermions to change generations will wax and wane in lock step with the weak interaction itself and the breaking of electroweak symmetry, just as is observed! By imposing the more stringent rule that once the horizontal generators are fractured, no horizontal transitions are permitted among the (7.4) to (7.6) states unless all of the remaining vertical generators, chirally symmetric or not, are the same as between the fermions involved in the transition, we arrive at precisely the type of mixing that is observed in nature as among the three generations. This makes generation mixing part and parcel of weak interactions, while excluding the strong interactions and even the right-chiral states from participation in generational mixing.
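For reference, the Fermi vev quoted here follows from the standard relation to the Fermi coupling constant (a textbook identity, not a result of this paper):

\[
v_F = \left(\sqrt{2}\,G_F\right)^{-1/2}, \qquad \frac{G_F}{(\hbar c)^3} = 1.1663787 \times 10^{-5}\ \mathrm{GeV}^{-2} \;\Rightarrow\; v_F \approx 246.22\ \mathrm{GeV},
\]

in agreement with the value 246.219651 GeV used in the text.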
So, now we take the final, formal steps to mathematically represent all of these decay restrictions. Referring to Section 12.12 of [14], the two generators $\lambda_{35}$ and $\lambda_{48}$ introduce two degrees of freedom and so define three non-trivial horizontal eigenstates $e, \mu, \tau$ in (7.4) through (7.6) and Figure 1, representing eigenstates of SU(3), which states are precluded from direct transformation into one another according to the rules just outlined, because they belong to fractured generators. SU(3) can be used to form unitary matrices $U$ with $3 \times 3 = 9$ components. Because the only permitted transitions are (7.7) through (7.10), we can alter the phase of any of the $2 \times 3 = 6$ quark states, which we designate following Table 3, without altering the physics. Similarly for leptons. But one may omit an overall phase change which still leaves the physics invariant. This means that $U$ must be a function of $3 \times 3 = 9$ minus $2 \times 3 = 6$ plus 1 parameters, i.e., 4 parameters. But an orthogonal $3 \times 3$ matrix only has $C(3,2) = 3$ real parameters, which leaves one residual phase. So for the leptons $l$ we may choose to form this matrix in the representation of (7.11), and for the quarks $q$ we form the analogous (7.12). To implement the lepton mixing, we keep in mind from (7.7) that a transition involves both members of the $(\nu, e)$ doublet, but one of them can always be transformed into a pure state while the other is similarly transformed, without changing the physics. In other words, all that is observable is the relative transition as between $(\nu, e)$. So following the usual conventions, we use (7.11) to transform the lower members of the $(\nu, e)$ doublet; that is, we define the mixed lepton states of (7.13). Similarly, for the quarks of each color $C = R, G, B$, we define the mixed quark states of (7.14). Because $Y_R$ is not the same as between the members of each of the (7.7) through (7.10) doublets, right-chiral transitions are also precluded, and the only permitted transitions are for left-chiral states. So these will be projected with $\frac{1}{2}(1-\gamma^5)$. Further, because $\lambda_3$ and $\lambda_8$ are not the same except as between members of the four distinct doublets in (7.7) through (7.10), the only permitted transitions will be between one lepton and another lepton, and between a first quark of a given color and a second quark of the same color $C = R, G, B$. This keeps the strong QCD interaction out of generation-changing transitions (and also out of any CP violation), and makes this an exclusively weak, left-handed chiral phenomenon. So for leptons, the transition currents will be those of (7.15), and for quarks of each color $C = R, G, B$, those of (7.16). This is exactly what the phenomenology demonstrates! So, returning to the question posed at the very outset of the discussion following Table 2, not only does SU(8) not provide too much freedom, but upon careful consideration and development, it provides exactly the right amount of freedom to explain the precisely observed fermion phenomenology of three generations. Further, by applying the rule that fermions which are fractured from one another after symmetry breaking cannot decay into one another except by a vertical interaction other than the vertical interaction that was used to break symmetry, and that decay with regard to a fractured generator which thereafter becomes a free-floating horizontal degree of freedom is only permitted between fermion eigenstates for which all of the surviving vertical generators are the same, we can use SU(8) to explain everything that we know about the qualitative features of the interactions we observe, from generation replication to weak chiral non-symmetry to Cabibbo mixing to the fact that
this mixing occurs only via weak isospin decays between left-handed states. And in the process we have perhaps found that neutrinos do not gravitate with any fermions aside from other neutrinos, which is likely to be of tremendous consequence as this is better developed and understood, and especially if it can ever be exploited. Before concluding this section, let us now return to the first three generators $\lambda_{63}, \lambda_{48}, \lambda_{35}$ of SU(8). Based on the earlier review of how $\lambda_{63}$ breaks symmetry near the gravitational Planck scale and sets the neutrino on a trajectory to have a mass orders of magnitude smaller than that of any other fermion; given how $\lambda_{35}$ and $\lambda_{48}$ fracture from the other vertical generators and form the basis for two horizontal degrees of freedom that underlie three fermion generations, in which the fermions of a given type are distinguished from one another solely by mass and not by any other quantum numbers from a vertical degree of freedom; and given that mass and gravitation are inextricably linked such that gravitation is the "mass interaction," we now formally associate these three generators $\lambda_{63}, \lambda_{48}, \lambda_{35}$ with the gravitational interaction, at the elementary particle level, below the GUT energy. Using (7.4) to (7.6) and (6.5), we highlight this connection in Table 5. The horizontal degrees of freedom from $\lambda_{35}$ and $\lambda_{48}$, which enable the fermions in each generation to have distinct masses in relation to their counterparts in the other two generations, are shown horizontally, while the vertical degree of freedom $G$, enabling each fermion within a generation to have a distinct mass, is shown vertically. Of course, with $SU(3)_C$ remaining unbroken, different colors of the same flavor of quark within one generation have the same mass. As noted earlier, the vertical gravitational generator $G$ does not distinguish the $\nu, e, u, d$ masses from one another within a generation. So at high energies, as noted, the fermions (other than neutrinos) within a generation all have the same mass. It is only through the stages of symmetry breaking and the remaining generators $Y_L = B-L$, $I_{3L}$ and $Q$ that the mass spectrum within a generation separates. This may be thought of as mass/energy differences emanating from strong, weak, and electromagnetic interactions; i.e., one may regard quark masses to differ from electron masses because they are quarks not leptons, and up and down quark masses to differ because their weak isospins and electric charges are different. Gravitational generators provide the freedom for these differences to occur. As to interactions, after all symmetry breaking including electroweak symmetry breaking is completed, the seven diagonal generators of SU(8) are allocated as follows: three degrees of freedom go to gravitation in the form of $\lambda_{63}, \lambda_{48}, \lambda_{35}$; two degrees of freedom go to strong QCD interactions via $\lambda_3, \lambda_8$; one degree of freedom goes to left-chiral weak interactions via $I_{3L}$; and the final degree of freedom goes to electromagnetic interactions via $Q$. Seven linearly-independent degrees of freedom, and eight vertical fermion eigenstates, thus account perfectly, with nothing missing and nothing superfluous, for the observed phenomenology of the fermions and their interactions, including generation replication and Cabibbo mixing, left-chiral weak interactions, and the elusive and perhaps gravitationally-defiant behavior of the neutrino.
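The parameter counting used in the preceding section is the same bookkeeping that yields the standard CKM matrix, and it may help to spell it out (standard electroweak counting, not specific to SU(8)):

\[
\underbrace{9}_{\text{real parameters of a }3\times3\text{ unitary }U} - \underbrace{(2 \times 3 - 1)}_{\text{field rephasings, less one overall phase}} = 4 = \underbrace{3}_{\text{rotation angles, }C(3,2)} + \underbrace{1}_{\text{residual CP phase}}.
\]

By analogy with the standard charged weak current, the left-chiral transition currents of (7.15) and (7.16) would presumably take a form such as $J^\mu \sim \bar{\nu}_i\,\gamma^\mu\,\tfrac{1}{2}(1-\gamma^5)\,U_{ij}\,e_j$, with $U$ the $3 \times 3$ mixing matrix just counted; this is an inference from the phenomenology the text invokes, not a reproduction of the paper's own equations.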
Summary and Conclusion

We have in the foregoing focused on the breaking of symmetry at the Planck scale and the GUT scale, which, astronomical observation aside, is many orders of magnitude beyond what we may ever hope to observe directly. The final stage of symmetry breaking is electroweak symmetry breaking at the Fermi vev $v_F = 246.219651$ GeV. This is in the realm of observation, and the generator used to break this symmetry is the electric charge generator $Q$. This final symmetry break gives rise to the electromagnetic interaction which dominates atomic and chemical structure and much of what is most directly observed in the natural world beyond gravitational interactions. That is, beyond objects falling to earth and planets wandering the heavens along prescribed trajectories, electromagnetic phenomena in electromagnetic and chemical and atomic form are our first line of direct experience of the natural world. Our experience of nuclear phenomena, based on the protons and neutrons which come to life as stable magnetic monopoles at the GUT scale as has been reviewed here and in [1], comes to us through the laboratory instrumentation that we use to extend the range of our physical senses, and gives rise to the vast preponderance of the matter that populates and animates the universe. When we break the electroweak symmetry we make use of the electric charge generator (4.6) and, analogously to (5.1) through (5.3), employ the Fermi vacuum proportional to $v_F$ times $Q$, which specifically means that the vacuum lies along $Q = \mathrm{diag}\big(0, -1, \tfrac{2}{3}, \tfrac{2}{3}, \tfrac{2}{3}, -\tfrac{1}{3}, -\tfrac{1}{3}, -\tfrac{1}{3}\big)$. Picking off the coefficients from the generators in (4.6), for each non-zero component of the vacuum we then obtain the corresponding coefficients, and consequently an electroweak Clebsch-Gordan coefficient. This is how the electroweak symmetry is broken for the SU(8) group that we have developed throughout this paper. This final symmetry break fractures all fermions of different electric charges from one another, and so precludes their decay into one another. Referring to Table 4, weak isospin transitions between up and down quarks with differing charges $Q = +\frac{2}{3}e$ and $Q = -\frac{1}{3}e$ are now precluded, as are similar transitions between electrons and neutrinos with $Q = -e$ and $Q = 0$. This shuts down the weak interaction (in the historical view, renders it "weak"; in hindsight it is probably better called the "faint" interaction), and because weak isospin decays, as reviewed in the last section, are the only avenues permitted for generation-changing transitions, generational transitions also are turned off in lock step. The only transitions still permitted after electroweak symmetry breaking, given that $Q$ is a vertical symmetry generator and so not subject to the very stringent rules laid out in the last section for horizontal transitions, are the vertical, color-changing R, G, B transitions of QCD, which are still allowed to occur because the quarks involved in these interactions are part of a triplet in which $B-L = \frac{1}{3}$ is the same for each, and the QCD symmetry remains unbroken. That is, the only permitted decays once electroweak symmetry is broken are decays along the $B-L$ generator for particles of like $B-L$, with unbroken $SU(3)_C$. Now, following three stages of symmetry breaking, at the Planck scale, the GUT scale and the Fermi scale, all of the fermions have become fractured from one another, generation transitions cease, and the particles are frozen into the configurations of our everyday experience. The SU(8) symmetry with seven generator degrees of freedom that we started with in Table 2 still does exist, but it has become hidden and distorted behind
twenty orders of magnitude of vacuum screening and three stages of symmetry breaking that have fractured neutrinos from the other fermions and broken off their gravitational communication, broken the Planck symmetry between positive and negative energy fluctuations, fractured quarks from leptons, fractured two generators from the remaining five to provide horizontal generational replication, brought about Cabibbo-type mixing among these generations for left-handed chiral projections only, and finally, fractured the upper and lower members of the like-hypercharge $Y_L$ (weak isospin) doublets from one another, turned off the weak interactions, and frozen the particles in place, so that all we observe at the lowest energies are electromagnetic and strong interactions, as well as the bulk interaction of gravitating masses, which is eluded by the neutrino. This GUT, which is based on the hypothesis that baryons are Yang-Mills magnetic monopoles and is rooted in the $SU(4)_P$ and $SU(4)_N$ subgroups developed in Section 7 of [1], which yielded over half a dozen accurate predictions in [1,2] as reviewed in Section 1 here, leads systematically to all of the qualitative particle and interaction phenomenology which we are able to observe with our senses and the extension of our senses through experimental apparatus. But the confirmation of the particular GUT proposed here, versus other possible GUTs which reproduce similar phenomenology, needs to come through mass and energy predictions which continue the successful empirical matches developed in [1,2]. As discussed in Section 3, one would expect that these energy predictions should come about by developing the remaining terms in the Lagrangian density (3.2) which we have not yet developed, and then making the resulting energy calculations to be matched up with empirical data. Along the way, the development should proceed on a parallel course to that of Sections 2 through 11 of [1], making use of the non-Abelian Klein-Gordon Equation (3.10), representing the scalar sources $J$ accordingly, employing the same sort of spin sums and the same Gaussian ansatz modeling of fermions that was developed respectively in Sections 3 and 9 of [1], and keeping in mind the clues we have elaborated in (3.6) through (3.8) and (3.11) here, all while employing the GUT and symmetry breaking that has been elaborated here. It is clear from [1,2] that it will be possible via this approach to calculate and predict definitive mass and energy values, just as has been done previously in [1] and [2]. It will then be left to interpret those values as we did in Sections 11 and 12 of [1] and throughout [2], and to compare them with experimental data to try to ascertain the meaning of those calculations and predictions, so as to obtain sensible numerical matches to observed energy data. That is, we clearly will be able to calculate energies. The question will be whether the energies we are able to calculate will match and make sense in relation to the empirical data as well as they did in [1,2]. Success in this endeavor, if it should arrive, would validate that this particular GUT may indeed be the one that nature has selected to govern the phenomenology of the material universe, and would provide some confidence that the development elaborated here does reach "behind the veil" to explain how nature really does operate in energy domains likely to forever remain beyond the reach of our direct senses and the extension of our senses gained through experimental devices and methods.
[Displaced footnote and caption fragments belonging to Tables 1 through 5 and Figure 1; only the legible portions are retained:] These are at exactly the right order of magnitude to explain the free proton and neutron masses M(p) = 938.272046(21) MeV and M(n) = 939.565379(21) MeV, if and when we can put (3.6) through (3.8) and like expressions into the right context and obtain the right coefficients. One can envision that masses which are equal at the Planck scale might separate so that they differ from one another by factors of 4.35 to 1 or 9.60 to 1 at observable energies; a ratio on the order of 250,000, however, is another matter. $I = \frac{1}{2}$ is the weak isospin for each doublet. Each of the three quarks also enjoys color degrees of freedom R, G, B associated with $SU(3)_C$; see (4.3) and (4.4). The group arrived at following $B-L$ symmetry breaking is schematically represented by $u, c, t$ and $d, s, b$ for the quarks, $e, \mu, \tau$ for the electrons, and $\nu_e, \nu_\mu, \nu_\tau$ for the neutrinos; the vertical quantum numbers associated with each fermion type $u, c, t$; $d, s, b$; $e, \mu, \tau$; and $\nu_e, \nu_\mu, \nu_\tau$ are identical for each triplet. The neutrino is set on a different mass trajectory at the outset, at the Planck scale, because of its $\lambda_{63}$ gravitational charge; the lepto-quark symmetry is broken at the GUT scale, where $\lambda_{35}$ and $\lambda_{48}$ decouple from the other generators. The vev energy $v_F$ is similarly defined using the Fermi constant. The neutrino would start off in the Planck vacuum with a negative energy $\sim -M_P$ due to the fertilization of the negative energy gravitational fluctuation, while all the other fermions $f$ would start off with a positive energy $\sim +M_P$ from matter fluctuations; then, after screening of twenty orders of magnitude, the neutrino mass would end up very close to, and slightly larger than, zero, and the rest of the fermion masses would end up more substantially above zero, in accord with what is observed. Quarks and electrons are born at or near the Planck scale when positive energy gravitational fluctuations in the Planck vacuum become fertilized with the positive gravitational charge $G = +\frac{1}{2\sqrt{28}}$. The neutrino can still undergo a $\lambda_{35}$ decay into a $u_R$ quark, because each has $\lambda_{35} = 0$; it can still undergo an $I_{3L}$ decay into any up quark; and most importantly, the neutrino can still undergo a $Y_L = B-L$ decay into an electron, because both the neutrino and the electron have the same $Y_L = B-L = -1$, with the eigenvalues shown in Figure 1. Because the fractured generators $\lambda_{35}$ and $\lambda_{48}$ no longer differentiate an observable vertical symmetry, but still do provide two degrees of freedom as illustrated in Figure 1 in Section 5, we transform these two generators into horizontal form; no new calculation is required, as we simply use (4.3) and (4.4). In sum: one can have no direct transitions among $e, \mu, \tau$, since these states are all fractured from one another; one cannot have intergenerational transitions between $(\nu, e)$ and any of the quark doublets, because these have been fractured from one another by $B-L$ breaking; one cannot have intergenerational R, G, B color-changing transitions that alter the quantum numbers in (7.4) through (7.6); we cannot go directly from $e$ to $\mu$ or $\tau$, but must engage in a vertical transition between states whose horizontal quantum numbers do not change; and one can always apply (7.11) to both members of $(\nu, e)$. With the exception of the R, G, B transitions of QCD, no fermion may transform into any other different type of fermion. Table 3 illustrates this rule best, because the rule says that a horizontal transition of a first-generation $e$ fermion into a second-generation $\mu$ fermion or a third-generation $\tau$ fermion must be done via a generator other than the fractured ones.
22,253
2013-04-26T00:00:00.000
[ "Physics" ]
Transcriptomic profiling of microglia and astrocytes throughout aging Background Activation of microglia and astrocytes, a prominent hallmark of both aging and Alzheimer's disease (AD), has been suggested to contribute to aging and AD progression, but the underlying cellular and molecular mechanisms are largely unknown. Methods We performed RNA-seq analyses on microglia and astrocytes freshly isolated from wild-type and APP-PS1 (AD) mouse brains at five time points to elucidate their age-related gene-expression profiles. Results Our results showed that from 4 months onward, a set of age-related genes in microglia and astrocytes exhibited consistent upregulation or downregulation (termed "age-up"/"age-down" genes) relative to their expression at the young-adult stage (2 months). Moreover, most age-up genes were more highly expressed in AD mice at the same time points. Bioinformatic analyses revealed that the age-up genes in microglia were associated with the inflammatory response, whereas these genes in astrocytes included widely recognized AD risk genes, genes associated with synaptic transmission or elimination, and peptidase-inhibitor genes. Conclusions Overall, our RNA-seq data provide a valuable resource for future investigations into the roles of microglia and astrocytes in aging- and amyloid-β-induced AD pathologies. Background Aging and Alzheimer's disease (AD) produce widespread effects on the central nervous system (CNS) that are characterized by cognitive decline, vulnerability to physical illnesses, elevated oxidative stress, and chronic brain inflammation [1]. These biological and pathological processes are also associated with diminished blood-brain barrier (BBB) integrity, which leads to the accumulation in the brain of blood-derived proteins [2,3] and the infiltration of peripheral cells [4-9], and multiple lines of evidence indicate that the innate-immune functions of microglia and astrocytes are involved in these processes. Microglia, the resident macrophages in the CNS, are originally derived from primitive myeloid progenitors that are seeded in the brain during fetal development and expand drastically after birth to account for 5-12% of all the cells in the brain [10-13]. In the CNS, microglia play crucial roles in the maintenance of brain homeostasis by regulating synaptic plasticity, remodeling neuronal circuits, defending against infectious pathogens [13-15], and promoting tip-cell fusion to participate in angiogenesis [16]. In aging and in mice with AD, microglial activation in the brain acts as a double-edged sword: although activated microglia facilitate the phagocytosis and clearance of infectious agents or amyloid-β (Aβ) deposits, constant exposure to proinflammatory cytokines exerts detrimental effects on the brain [17,18]. As compared to microglia, astrocytes, which constitute another type of glial cells in the CNS, have been less studied in aging and AD pathogenesis. Historically, astrocytes have been considered supportive cells that either provide nutrients or serve as physical scaffolds for neurons. The perivascular endfeet of astrocytes ensheath 98% of brain parenchymal capillaries and thus contribute to BBB integrity and maintain osmotic homeostasis and gliovascular signaling [19,20]. Astrocytes have to date been documented to play essential roles in neurophysiology, such as in the release of gliotransmitters (glucose, ATP, and glutamate), communication with neurons, and modulation of synaptic structure [21].
Over the past few decades, considerable research effort has been devoted toward elucidating the functions of microglia and astrocytes in the brain under both physiological and pathological conditions. Moreover, in previous studies, RNA-sequencing (RNA-seq) analysis of microglia and astrocytes has been performed in geriatric and young mice to identify the transcriptomic alterations that occur during aging [22,23]; however, in these studies, the sequencing samples were collected either at limited time points or over large time intervals, and thus the genes that were identified as altered in aged mice could have been affected by unknown/unexpected insults that are not related to aging or specific diseases. Therefore, to clarify the effects of factors associated with late-age disorders of the CNS, we investigated aging-related genes in microglia and astrocytes isolated from the mouse brain at 5 time points. Here, we identify 2 age-related gene clusters whose expression increased with age as compared with the expression in mature-adult mice. Differential gene-expression analysis revealed that inflammatory-response genes constituted the most prominent class of consistently upregulated genes in microglia upon aging, whereas in astrocytes, synaptic-transmission- and peptidase-inhibitor-related genes were most markedly increased. Furthermore, most of the aging-related genes also showed notable differences in AD mice relative to their expression in wild-type (WT) mice. Our results thus provide a novel transcriptomic dataset for microglia and astrocytes throughout aging that could offer new insights into the body's early intrinsic mechanisms involved in sensing CNS damage and protecting the brain against neurodegeneration. Mice APPswe/PS1ΔE9 double-transgenic mice, obtained from the Model Animal Research Center of Nanjing University (Nanjing, China), originated from B6.Cg-Tg (APPswe/PS1ΔE9) 85Dbo/Mmjax mice (JAX#034832) of the Jackson Laboratory. C57BL/6J WT littermates were used as WT controls. Mice (n = 3/group) were bred under SPF conditions in IVC cages at 23°C and 50-60% humidity and with circadian-rhythm illumination. Pups aged 21-28 days were removed from their parental cages and genotyped using ear-biopsy samples; the DNA extracted from the biopsy samples was PCR-amplified using primers specific for APP and PS1 sequences. All procedures were approved by the Animal Use and Care Committee of Shenzhen Peking University-The Hong Kong University of Science and Technology Medical Center (SPHMC) (protocol number 2011-004). All mice used in the study were males. Efforts were made to minimize suffering and the number of animals used. Brain dissociation Microglia and astrocytes were isolated from WT and AD mice belonging to 5 age groups: 2-, 4-, 6-, 9-, and 12-month-old (2-12 months). Mice were transcardially perfused under deep anesthesia with 1× PBS, and then the brain was removed, dissected, and rinsed in HBSS. Next, after removing the meninges, the brain was cut into small pieces by using a sterile scalpel, and the samples were centrifuged at 300×g for 2 min at room temperature; the supernatant was aspirated carefully. Samples from a single brain were pooled as a single experimental group. Enzymatic cell dissociation was performed using an Adult Brain Dissociation Kit (130-107-677, Miltenyi Biotec), according to the manufacturer's instructions.
Briefly, tissue pieces (up to 500 mg of tissue per sample) were transferred into the C Tube containing 1950 μL of enzyme mix 1 (enzyme P and buffer Z), and then 30 μL of enzyme mix 2 (enzyme A and buffer Y) was added into the C Tube. The C Tube was tightly closed and attached upside down onto the sleeve of the gentleMACS Octo Dissociator with Heaters (130-096-427, Miltenyi Biotec), and the appropriate gentleMACS program was run. After brief centrifugation to collect samples at the tube bottom, the samples were filtered through a 70-μm strainer (130-098-462, Miltenyi Biotec), washed with D-PBS, and then centrifuged again. Percoll density gradient and myelin removal Single cells were resuspended in 40% Percoll and centrifuged at 800×g for 20 min at 15°C. After discarding the myelin-containing supernatant, the pellet was resuspended in cold MACS buffer (PBS containing 2 mM EDTA and 0.5% BSA, pH 7.2), and then myelin-removal beads (Myelin Removal Beads II, 130-96-733, Miltenyi Biotec) were used according to the manufacturer's protocol to prepare cells for staining with fluorescence-activated cell sorting (FACS) antibodies. Briefly, single-cell suspensions were incubated with the beads at 4°C for 15 min, and then the cells were loaded onto the LS column on the autoMACS Separator; the column was washed thrice with PB buffer, and the cells in the flow-through were used for antibody staining. RNA extraction, quantification, and qualification RNA was isolated from flow-cytometry-sorted cell populations by using an RNeasy Micro Kit (74004, Qiagen) according to the manufacturer's instructions, which included a step involving incubation with DNase. For whole-brain RNA purification, we generated samples pooled from 1 brain each. Purified RNA was quantified using a NanoDrop 2000 (Thermo Scientific) and Agilent Technologies Bioanalyzer 2100 RNA Pico chips (5067-1513, Agilent Technologies), according to the manufacturers' instructions; the RNA integrity number (RIN) in all cases was > 9. Preparation of Smart-seq2 RNA-seq libraries and sequencing For RNA sample preparations, 10 ng of RNA per sample was used as the input material. Libraries were generated using a SMART-Seq v4 Ultra Low Input RNA Kit (634892, Takara Bio USA, Mountain View, CA, USA), following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. Briefly, first-strand cDNA synthesis from total RNA was primed using the 3′ SMART-Seq CDS Primer II A, and the SMART-Seq v4 Oligonucleotide was used for template switching at the 5′ end of the transcript. PCR Primer II A was used to amplify cDNA, for 8 cycles, from the SMART sequences introduced by the 3′ SMART-Seq CDS Primer II A and the SMART-Seq v4 Oligonucleotide. LD-PCR-amplified cDNA was purified through immobilization on AMPure XP beads and then quantified using the Agilent Bioanalyzer 2100 system. To prepare cDNA libraries suitable for Illumina sequencing, approximately 200 pg of the cDNA was used with a Nextera XT DNA Library Preparation Kit (Illumina, Cat. Nos. FC-131-1024 and FC-131-1096, San Diego, CA, USA). Tagmented fragments were amplified for 12 cycles, and dual indexes were added to each well to uniquely label each library. Concentrations were assessed using a KAPA Library Quantification Kit (KK4844, KAPA Biosystems, USA), and samples were diluted to approximately 2 nM and pooled. Pooled libraries were sequenced on an Illumina NovaSeq platform, and 150-bp paired-end reads were generated.
RNAscope and image quantification
Mice were deeply anesthetized using pentobarbital, transcardially perfused with ice-cold PBS until the irrigation fluid was completely clear, and then perfused with ice-cold 4% paraformaldehyde (PFA) for 10 min. Brains were removed, fixed in 4% PFA at 4°C for 12 h, dehydrated using an ethanol dilution series, embedded in molds containing Tissue-Tek OCT, and frozen on dry ice. The OCT-embedded brain samples were cut into 16-μm coronal sections that were placed onto Fisherbrand Superfrost Plus microscope slides (Thermo Fisher Scientific; 12-550-15). RNAscope experiments were performed using a Manual Fluorescent Multiplex kit v2 (323100, ACDbio), following the manufacturer's recommendations. Briefly, slices were incubated with hydrogen peroxide, and then target retrieval was performed in a boiling bath beaker. Next, protease digestion was performed for 20 min at room temperature by using Protease III for fixed frozen tissues, provided in the kit, after which probe hybridization was conducted for 2 h at 40°C. A dual-probe set containing Mm-Itgam-c3 (311491) and Mm-Slc1a3-c3 (430781) served as the common probe in each set, and the companion probes were Mm-Cxcl10-c1 (408921) and Mm-Ptbp1-c1 (588721). Nuclei were visualized using 4′,6-diamidino-2-phenylindole (DAPI). For each mouse, 3 images per region (technical replicates) were used for the quantification, and 100, 50, and 50 cells were counted in the cortex, hippocampus, and cerebellum, respectively. Images were captured as Z-stacks by using a 20× objective (NA 0.8), and then maximum-intensity projections were obtained. Lipofuscin autofluorescence was imaged in the blank channel (488 nm) and subtracted from the red channel (594 nm) and far-red channel (647 nm) images. Microglia and astrocytes were identified as RNAscope puncta generated from the Itgam and Slc1a3 probes. Lastly, blind counting was performed to analyze the number of double-positive cells and the target-probe dots per cell, with each data point representing the mean ± SD of 3 brain slices for each probe set. The H-score was then calculated as follows:

H-score = Σ (score = 0 to 4) [score × percentage of microglia or astrocytes with that score]

The weighting formula used for the scores is shown in Additional file 1. All parameters were maintained constant between images to allow unbiased detection.

Quantitative RT-PCR validation of selected genes
Flow-cytometry-sorted microglia and astrocytes were used for RNA extraction (see preceding sections on FACS and RNA extraction). Quantitative RT-PCR was performed in triplicate in 96-well plates by using a qPCR machine (LC480, Roche) and SYBR Green I Master mixture (4887352001, Roche) for detection of amplification products. The following thermocycling protocol was used: initial denaturation at 95°C for 10 min, followed by 40 amplification cycles of 95°C for 15 s and 60°C for 1 min, and a final cycle at 25°C for 15 s. Relative quantification of mRNA expression was performed using the comparative cycle method to obtain the ratio of the gene of interest to Gapdh. Relative quantification of gene-expression levels was performed using the 2^(-ΔΔCt) method. All primers were designed using NCBI Primer-BLAST; we designed primers to be ~200 bp long. All primers are listed in Additional file 2.
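To make the 2^(-ΔΔCt) calculation above concrete, the following is a minimal Python sketch; the gene names and Ct values are hypothetical illustrations, not values from this study.

# Minimal sketch of the 2^(-ddCt) relative-quantification method
# described above. All Ct values here are made-up examples.

def delta_delta_ct(ct_gene_sample, ct_ref_sample, ct_gene_control, ct_ref_control):
    """Return relative expression (fold change) via the 2^(-ddCt) method."""
    d_ct_sample = ct_gene_sample - ct_ref_sample      # normalize to Gapdh in the aged sample
    d_ct_control = ct_gene_control - ct_ref_control   # normalize to Gapdh in the 2-month control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a gene in a 12-month sample vs. a 2-month control (hypothetical Ct values)
fold_change = delta_delta_ct(24.1, 18.0, 26.0, 18.2)
print(f"relative expression: {fold_change:.2f}")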
STEM (Short Time-series Expression Miner) analyses
The Short Time-series Expression Miner (STEM) is a Java program for clustering, comparing, and visualizing short time-series gene-expression data from microarray experiments (~8 time points or fewer). STEM allows researchers to identify significant temporal expression profiles and the genes associated with these profiles, and to compare the behavior of these genes across multiple conditions. The output gene-expression value is normalized to the first time point, usually by subtracting the gene-expression value at the first time point, allowing different genes to be visualized from the same starting point. STEM is available as a free download for academic and non-profit users at http://www.cs.cmu.edu/~jernst/stem.

Graphs and statistical analyses
All statistical analyses were performed using GraphPad Prism 8.00 (GraphPad Software, La Jolla, CA, USA). Most data were analyzed using one-way ANOVA followed by the Dunnett post hoc test for comparisons of > 3 samples, and two-sample unpaired t tests were used for comparing 2 samples; p < 0.05 was considered statistically significant.

Sequencing data quantification and data analysis
Quality control
Raw data (raw reads) in fastq format were first processed using in-house Perl scripts. In this step, clean data (clean reads) were obtained by removing reads containing adapter sequences, poly-N-containing reads, and low-quality reads from the raw data. Concurrently, the Q20, Q30, and GC content of the clean data were calculated. All downstream analyses were based on the high-quality clean data.

Read mapping to reference genome
Reference genome and gene-model annotation files were downloaded directly from the genome website. An index of the reference genome was built, and paired-end clean reads were aligned to the reference genome by using Hisat2 v2.0.5. We selected Hisat2 as the mapping tool because Hisat2 can generate a database of splice junctions based on the gene-model annotation file, and thus can yield superior mapping results as compared to other non-splice mapping tools.

Quantification of gene-expression level
featureCounts v1.5.0-p3 was used to determine the number of reads mapped to each gene, after which each gene's FPKM (the expected number of fragments per kilobase of transcript sequence per million base pairs sequenced) was calculated based on the length of the gene and the number of reads mapped to the gene. The FPKM calculation concurrently considers the effects of sequencing depth and gene length on the read counts, and is currently the most commonly used method for estimating gene-expression levels.

Differential expression analysis
Differential expression analysis involving 50 conditions/groups (3 biological replicates per condition) was performed using the DESeq2 R package (1.16.1). DESeq2 provides statistical routines for determining differential expression in digital gene-expression data by using a model based on the negative binomial distribution. The resulting p values were adjusted using the Benjamini and Hochberg approach to control the false-discovery rate. Genes identified using DESeq2 that featured an adjusted p value of < 0.05 were regarded as differentially expressed genes (DEGs).
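The FPKM definition above amounts to scaling each gene's read count by gene length and library size. A minimal sketch of that arithmetic follows; the counts and gene lengths are hypothetical placeholders.

# Minimal sketch of the FPKM calculation described above:
# FPKM = reads_mapped_to_gene * 1e9 / (gene_length_bp * total_mapped_reads).
# Counts and gene lengths below are made-up illustrations.

def fpkm(counts, gene_lengths_bp):
    total_mapped = sum(counts.values())  # library size (total mapped reads)
    return {
        gene: counts[gene] * 1e9 / (gene_lengths_bp[gene] * total_mapped)
        for gene in counts
    }

counts = {"Cxcl10": 1500, "Ptbp1": 900, "Gapdh": 120000}
lengths = {"Cxcl10": 1150, "Ptbp1": 3400, "Gapdh": 1300}
print(fpkm(counts, lengths))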
Gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses of DEGs
GO enrichment analysis of DEGs was implemented using the clusterProfiler R package, in which gene-length bias was corrected; GO terms featuring a corrected p value of < 0.05 were considered significantly enriched. KEGG is a database resource for understanding high-level functions and utilities of biological systems, such as the cell, organism, or ecosystem, from molecular-level information, particularly large-scale molecular datasets generated using genome sequencing and other high-throughput experimental technologies (http://www.genome.jp/kegg/). We used the clusterProfiler R package to test the statistical enrichment of DEGs in KEGG pathways.

Purification of microglia and astrocytes and RNA-seq profiling
To investigate whether microglial and astrocyte genes in mice are altered throughout aging, we performed RNA-seq at 5 time points encompassing the mature-adult stage (2 months), when developmental changes in gene expression have ceased, and the middle-age stages (4 months, 6 months, 9 months, 12 months), during which age-dependent pathology develops (Fig. 1a). Microglia were sorted based on CD45(low-to-intermediate)/CD11b expression and astrocytes were sorted based on ACSA2 expression after exclusion of doublets and Live/Dead analysis by using a BD Aria III. Single-staining and isotype-control antibodies were included as controls (Additional file 3 A-B). The percentages of microglia and astrocytes at the different time points are shown in Additional file 4. No significant difference was found in the percentages of microglia and astrocytes over time, indicating that aging and AD have little effect on cell composition. Next, we performed DESeq2 analysis on microglia and astrocytes, which revealed that both cell types showed a gradual increase in the number of age-associated DEGs (40 and 59 genes in 2-month AD microglia and astrocytes, respectively, as compared to 2-month WT) (Additional file 5A-B). Therefore, we used 2-month WT mice as our mature-adult control for the follow-up analysis, in which the DESeq2 R package was used to analyze polyA-selected mRNAs from microglia and astrocytes isolated from whole-brain samples, and we mapped > 85% of the reads in the case of all samples. The reproducibility between replicates was high (Additional file 6A), and the results of principal component analysis (PCA) showed a clear separation of expression between the different time points (Additional file 6B).

Microglial genes changed upon aging include cytokine-pathway genes
We first determined the number of DEGs (adjusted p < 0.05, |log2 fold-change| > 0.5) in the aging groups relative to the 2-month control. We identified numerous DEGs in aging WT mice in comparison with 2-month WT controls: 1109 genes in 12-month mice, 819 genes in 9-month mice, 5709 genes in 6-month mice, and 681 genes in 4-month mice (Fig. 2). Compared to 2-month WT controls, the top 15 genes exclusively upregulated in 4-, 6-, 9-, and 12-month microglia are shown in Fig. 2b. To annotate these genes in different biological pathways, we performed GO and KEGG analyses. As compared with the expression in 2-month microglia, we detected altered genes involved in "blood vessel morphogenesis" and "cell-matrix adhesion" in 4-month microglia (Fig. 3a); "oxidative phosphorylation" and "ATP metabolic process" in 6-month microglia (Fig. 3b); "response to cytokine" and "innate immune response" in 9-month microglia
(Fig. 3c); and "positive regulation of cellular component movement" and "chemotaxis" in 12-month microglia (Fig. 3d). Additional file 7 shows the complete datasets. In this study, we hypothesized that age-dependent pathogenic or protective genes could be expressed at consistently higher or lower levels throughout the different time points as compared with the expression in the 2-month control. Therefore, we constructed a Venn diagram of the genes consistently upregulated in aging microglia (4 months, 6 months, 9 months, and 12 months, relative to the 2-month control), and from this we identified 48 genes (termed "age-up" microglial genes) (Additional file 8 shows the gene list with fold changes and p values), which included a cassette of genes involved in the cytokine pathway. Cxcl10 was upregulated 3-, 1.9-, 3.5-, and 3-fold in 4-month, 6-month, 9-month, and 12-month microglia relative to the expression in 2-month microglia. This result was further confirmed using RNAscope in situ hybridization. Other age-up microglial genes involved in immunoregulatory and inflammatory processes included Ccl2/Ccl12, Egr2, Nr1d2, Il6, Zfp36 (anti-inflammatory signaling), Nfkbia (negative regulation of NF-κB transcription factor activity), H2-Q1 (MHC I protein-complex member), and Ccrl12. We next applied the aforementioned filtering criteria to identify genes that are downregulated in microglia. Fewer genes were downregulated than upregulated in microglia, and considerably more genes were differentially expressed in 6-month microglia (3381 genes) than in 4-month, 9-month, and 12-month microglia (271, 320, and 582 genes, respectively) (Fig. 2c). Compared to 2-month WT controls, the top 15 genes exclusively downregulated in 4-, 6-, 9-, and 12-month microglia are shown in Fig. 2d. We identified 41 "age-down" microglial genes (Additional file 9 shows the gene list with fold changes and p values), including well-known genes such as Man2b2, which encodes lysosomal acid α-D-mannosidase [24]; Cyfip1, which encodes a protein that functions in cytoskeletal remodeling to ensure proper dendritic-spine formation [25,26]; Wasf2, another cytoskeleton regulator [27]; the inflammation-driven cancer gene Ptbp1 [28]; and toll-like receptor genes (Tlr5, Tlr9). We also used STEM (Short Time-series Expression Miner) analysis [29] to cluster gene sets that showed dynamic changes over time (Fig. 2e, f and Additional file 10). The gene alterations included the upregulation of Fos [30] and Cd22 [31] and the downregulation of Csf1r (colony-stimulating factor 1 receptor gene) [32] and Cx3cr1 (C-X3-C motif chemokine receptor 1 gene) [33-35]. Previous studies showed that age-related microglial genes do not differ among brain regions [36], so the significantly changed microglial genes might not be brain-region specific.

The reactive-astrocyte markers Serpina3n and Osmr were expressed at higher levels in 12-month astrocytes than in 2-month astrocytes, whereas Il33 expression showed significant differences in all comparisons. Specifically, 193 genes were upregulated ("age-up" astrocyte genes) (Additional file 12 shows the gene list with fold changes and p values), including a well-known AD risk gene (Apoe) and a gene encoding a component of the complement cascade (C4b). Moreover, Snca (synuclein-α) and Sncg (synuclein-γ) were also upregulated throughout aging. We also found that the age-up genes included several peptidase-inhibitor genes. Two of these genes, Spock3 and Timp4, were upregulated 1.5-3.8-fold throughout aging.
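The "consistently changed" gene sets above (for both microglia and astrocytes) are defined by intersecting the per-time-point DEG lists, as in a Venn diagram. A minimal Python sketch of that step follows; the gene lists are hypothetical placeholders, not the study's actual lists.

# Minimal sketch of the Venn-style intersection used to define "age-up"
# genes: genes upregulated at every aging time point relative to the
# 2-month control. Gene sets below are hypothetical placeholders.

up_4m = {"Cxcl10", "Ccl2", "Il6", "Egr2"}
up_6m = {"Cxcl10", "Ccl2", "Il6", "Fos"}
up_9m = {"Cxcl10", "Ccl2", "Il6", "Nr1d2"}
up_12m = {"Cxcl10", "Ccl2", "Il6", "Zfp36"}

age_up = set.intersection(up_4m, up_6m, up_9m, up_12m)
print(sorted(age_up))  # ['Ccl2', 'Cxcl10', 'Il6'] -> consistently upregulated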
Cst3, an endogenous cysteine-protease inhibitor [37-40], was upregulated in the early stages of aging; its expression was increased 2.2-, 3.7-, 1.6-, and 1.9-fold in 4-month, 6-month, 9-month, and 12-month astrocytes relative to that in 2-month astrocytes. Among the age-up astrocyte genes, we also noted significant upregulation of the gene encoding Pcsk1n. We also identified 192 genes that were downregulated in astrocytes ("age-down" astrocyte genes) (Additional file 13), which included genes involved in negative regulation of axon extension, such as Tnr, Nrp1, Ptprs, Slit1, and Sema4f, as well as a matrix metallopeptidase gene (Mmp16). The STEM analysis results are shown in Fig. 4e and f and Additional file 10. To estimate whether the changed astrocyte genes are brain-region specific, we compared our age-altered astrocyte gene dataset to the previously published genes that are uniquely up- or downregulated in astrocytes in different brain regions [22]. We found that astrocyte genes shifted their regional expression patterns upon aging. The age-up genes were upregulated in different brain regions, including 37 genes in cerebellum, 6 in visual cortex, 16 in hypothalamus, and 2 in all brain regions (Additional file 14 A-B). Interestingly, age-down genes were also downregulated in a region-dependent manner: 69 in cerebellum, 13 in motor cortex, 16 in visual cortex, and 102 in hypothalamus (Additional file 14 C-D). Importantly, we also found that the human [36] and mouse astrocyte genes affected by aging shared 17 orthologous genes (Additional file 14 E). Cross-sectional genes are shown in Additional file 15. Most DEGs were altered exclusively in either microglia or astrocytes. However, 4 genes were included among both the age-up microglial genes and the age-up astrocyte genes: Cxcl10; Ccl2; Scoc, which regulates amino acid-starvation-induced autophagy [41]; and Mri1, which is involved in the methionine salvage pathway. Conversely, 7 genes were downregulated in both microglia and astrocytes: Man2b2; Ptbp1; Prrc2a, which controls oligodendroglial specification and myelination by functioning as a newly identified m6A reader [42]; Midn, which regulates glucokinase enzyme activity [43]; Fscn1, which is required for filopodial formation in neural crest cells [44]; Clcn6, which is related to voltage-gated chloride channel activity; and Pik3r4, which is involved in the formation of autophagosomes [45].

Fig. 4 Differential gene expression between adult and aging astrocytes. a-d Upregulated and downregulated genes, determined using DESeq2 analysis, between mature-adult mice (2 months) and aging mice (4 months, 6 months, 9 months, 12 months); adjusted p < 0.05, |log2 fold-change| > 0.5. a Venn diagram showing upregulated genes in astrocytes. b Heatmap of top 15 genes upregulated in astrocytes. c Venn diagram showing downregulated genes in astrocytes. d Heatmap of top 15 genes downregulated in astrocytes. e and f STEM analysis of upregulated genes (e) and downregulated genes (f) in astrocytes during aging.

Interaction of microglia and astrocytes during aging and AD
A previous study by Liddelow et al. [46] showed that activated microglia induce A1 astrocytes by secreting IL-1α, TNF, and C1q, a process that also occurs in normal aging [47].
During aging in WT and AD mice, we also found that the inflammation-inducing cytokines secreted by microglia appeared earlier than the upregulation of neuroinflammatory genes in A1-like reactive astrocytes (Additional file 16), indicating that microglia might induce A1 astrocytes during aging and AD progression.

Validation of RNA-seq profiles by using qPCR and RNAscope
We validated our RNA-seq data through qPCR performed using a new cohort of animals. For the age-altered genes, we selected 15 genes each from microglia and astrocytes (5 age-up genes, 5 genes showing no difference with age, and 5 age-down genes). For each time point of WT/AD mice, we selected 9 genes from microglia and astrocytes (3 showing elevated expression in 2-month WT mice, 3 equally expressed, and 3 showing increased expression in 2-month AD mice; genes for 4 months, 6 months, 9 months, and 12 months were selected in a similar manner). Data were expressed as 2^(-ΔΔCt) by using the Gapdh transcript as an internal reference standard. The expression analyses performed on the selected genes yielded results that were superimposable with the results obtained using RNA-seq (Fig. 6 and Additional file 17). To confirm the mRNA changes in the case of age-up genes from microglia and astrocytes, we performed dual RNAscope in situ hybridization on samples from WT mice belonging to the 5 age groups. We used Itgam (CD11b) as a universal microglial marker and Slc1a3 as a universal astrocyte marker, and we examined an age-up gene (Cxcl10) and an age-down gene (Ptbp1) from the RNA-seq analysis (Fig. 7e, f). We determined the total dual-positive cell numbers for Itgam+ microglia or Slc1a3+ astrocytes that also expressed Cxcl10 or Ptbp1 in the hippocampus, cortex, and cerebellum, as well as the number of target-probe dots per cell. We found fold-changes similar to those in the RNA-seq FPKM data (Fig. 7a-d, g-j): Cxcl10 and Ptbp1 were significantly upregulated and downregulated, respectively, with age in both microglia and astrocytes.

Association of age-altered genes in AD transcriptomes
AD is a heterogeneous disease in which multiple detrimental factors contribute to cognitive loss and disease escalation [14]. To determine the aging transcriptomes in AD, we analyzed the expression of age-altered genes at 5 different time points in WT and AD mice. We found that most age-up genes were more highly expressed in AD mice as compared to WT mice at the same time points. In contrast, age-down genes were more highly expressed in WT mice (Fig. 8). We then compared the expression of age-altered genes in microglia or astrocytes isolated from 12-month WT vs. 12-month AD mice. We found that among the 28 age-up genes in microglia from 12-month AD mice, 13 showed a significant increase (adjusted p < 0.05, |log2 fold-change| > 0.5) relative to the age-matched control (Fig. 8a), whereas 4 of the 28 genes were downregulated. Among the age-down microglial genes, 7 genes were significantly downregulated in AD mice (Fig. 8b). We also analyzed age-related astrocyte genes in AD progression, and we found that 33 age-up genes were strongly upregulated and 53 age-down genes were markedly downregulated in 12-month AD mice (Fig. 8c, d). As shown in Additional file 18, we identified several overlapping genes between the DEGs altered in both aging and AD and the DEGs altered in the different AD groups relative to 2-month AD controls.
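The DEG threshold applied throughout these comparisons (adjusted p < 0.05, |log2 fold-change| > 0.5) is a simple table filter. A minimal pandas sketch follows; the column names follow DESeq2's output convention, and the values are hypothetical.

# Minimal sketch of the DEG filter used throughout (adjusted p < 0.05,
# |log2 fold-change| > 0.5) applied to a DESeq2-style results table.
# The example values are hypothetical.

import pandas as pd

res = pd.DataFrame({
    "gene": ["Cxcl10", "Apoe", "Ptbp1", "Aqp4"],
    "log2FoldChange": [1.8, 0.9, -1.2, 0.1],
    "padj": [1e-6, 0.003, 0.02, 0.7],
})

degs = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() > 0.5)]
print(degs["gene"].tolist())  # ['Cxcl10', 'Apoe', 'Ptbp1']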
Nonmonotonically changed age-related genes
We used STEM analysis to cluster gene sets that showed similar trends across the 5 time points between 2 and 12 months. As shown in Fig. 9a, one set of genes was upregulated sharply from 4 months, peaked at 6 months, was downregulated again, and stabilized at 9 months. GO analysis showed that these DEGs were mainly involved in "mitochondrion organization" and "cellular respiration". The DEGs that were downregulated from 2 to 6 months and then upregulated are shown in Fig. 9i; these differentially expressed genes were involved in the histone-modification pathway. Other trends in microglial genes with the same patterns are shown in Fig. 9. Similarly, we also found cassettes of astrocyte genes showing shared variation tendencies across the different time points. We performed GO analysis on these fixed-trend genes, and the results are shown in Fig. 10. "Transcription cofactor activity" (Fig. 10d, e), "mitochondrial protein complex/membrane/matrix" (Fig. 10a, b, c, and h), and other pathways were involved in the different patterns of DEGs, indicating that they may play different roles at different stages.

Fig. 6 Validation of RNA-seq data between 5 time-point WT samples. a Expression analyses performed on selected genes yielded results superimposable with results obtained from RNA-seq analyses of microglia. b Expression analyses performed on selected genes yielded results superimposable with results obtained from RNA-seq analyses of astrocytes. Columns represent means ± SEM; ****p < 0.0001, ***p < 0.001, **p < 0.01, *p < 0.05; left: comparisons of DESeq2-analysis values between 2-month and aging samples (4 months, 6 months, 9 months, 12 months); right: unpaired t tests for comparing 2 samples.

Fig. 7 (partial legend) Columns represent means ± SEM; ****p < 0.0001, ***p < 0.001, **p < 0.01, *p < 0.05, for comparisons of DESeq2-analysis values between 2-month and aging samples (4 months, 6 months, 9 months, 12 months). e and f Representative in situ hybridization images for Cxcl10 (e) and Ptbp1 (f) showing colocalization with a microglial marker (Itgam) and an astrocyte marker (Slc1a3) in cortex and hippocampus in 2-month, 4-month, 6-month, 9-month, and 12-month mice. Scale bar, 20 μm. g-j Bar graphs depicting quantification of the H-score of Itgam+ microglia and Slc1a3+ astrocytes expressing detectable levels of Cxcl10 and Ptbp1 mRNAs upon aging: (g) Cxcl10 in microglia, (h) Cxcl10 in astrocytes, (i) Ptbp1 in microglia, and (j) Ptbp1 in astrocytes. One-way ANOVA followed by Dunnett post hoc test; data are shown as means ± SEM; ****p < 0.0001, ***p < 0.001, **p < 0.01, *p < 0.05; n = 3 animals.

Fig. 8 Age-related gene expression and variation between AD (12 months) and WT (12 months) mice. a-d (left) Heatmaps representing the age-up microglial genes (a), age-down microglial genes (b), age-up astrocyte genes (c), and age-down astrocyte genes (d) at the 5 time points in WT/AD samples. Z scores are calculated from gene FPKM values (upregulation in red, downregulation in blue, neutral in white). a-d (right) Age-up microglial gene (a), age-down microglial gene (b), age-up astrocyte gene (c), and age-down astrocyte gene (d) expression between AD (12-month) and WT (12-month) samples. Log2 fold-change based on RNA-seq data between 12-month AD and 12-month WT mice; *adjusted p < 0.05, |log2 fold-change| > 0.5. For c and d (right), only genes that exhibited statistically significant changes in expression are shown.
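Returning to the trend clustering described at the start of this section: STEM-style grouping can be roughly approximated by normalizing each gene to its first time point and grouping genes by the sign pattern of successive changes. The sketch below is an illustration of that idea only, not the STEM algorithm itself, and the expression values are hypothetical.

# Rough sketch of grouping genes by temporal trend, in the spirit of the
# STEM clustering described above (normalize to the first time point,
# then group by the sign pattern of successive changes). This is an
# illustration, not STEM itself; all values are made up.

from collections import defaultdict

expr = {  # gene -> expression at 2, 4, 6, 9, and 12 months
    "geneA": [5.0, 7.0, 9.0, 6.0, 6.0],   # up, up, down, flat
    "geneB": [4.0, 3.0, 2.5, 3.5, 4.5],   # down, down, up, up
}

def trend_profile(values, eps=0.25):
    base = values[0]
    norm = [v - base for v in values]  # normalize to the first time point
    signs = []
    for prev, nxt in zip(norm, norm[1:]):
        diff = nxt - prev
        signs.append("+" if diff > eps else "-" if diff < -eps else "0")
    return "".join(signs)

clusters = defaultdict(list)
for gene, values in expr.items():
    clusters[trend_profile(values)].append(gene)
print(dict(clusters))  # e.g. {'++-0': ['geneA'], '--++': ['geneB']}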
Discussion
The results of this study indicate that microglia exhibit an increase in responsiveness to inflammatory stimuli with age, which is reflected by the consistently elevated expression of inflammatory-response genes, whereas astrocytes appear to function as "preservers" of inflammation, which is reflected by the upregulation of peptidase-inhibitor genes upon aging. In this study, we sought to minimize the artificial effects of the dissociation process. Although there are three biological replicates at each time point, individual differences still cannot be ignored. In this study, we did not find a significant difference in the percentages of microglia and astrocytes over time. Transcriptome differences between microglia and astrocytes during aging have been addressed in a few previous studies. Aged astrocytes (2 years old) were shown to upregulate genes involved in synapse elimination but to minimally alter the expression of homeostasis-related genes [22], and 2-year-old astrocytes were also reported to adopt the reactive phenotype of neuroinflammatory A1-like reactive astrocytes [47]. Our astrocyte gene-expression dataset agrees with the findings of the previous studies, because we also detected a significant decrease in Thbs1; increases in Thbs2, C4b, Cxcl10, and reactive-astrocyte genes (Osmr, Serpina3n); and no change in homeostasis genes (Aldh1l1, Aqp4) in our 12-month astrocyte samples relative to the expression in 2-month astrocytes. Similarly, the DEGs identified between our 12- and 2-month microglial samples broadly agreed with previous RNA-seq profiles of microglia from the aged brain (24 months). Among the age-up genes in astrocytes, C4b is involved in synapse elimination [48], and Snca and Sncg, 2 genes related to Parkinson's disease pathogenesis [49], also play several roles in synaptic activity, such as regulation of synaptic-vesicle trafficking and subsequent neurotransmitter release [50,51], suggesting that astrocytes play a critical role in synapse elimination and synaptic transmission. Spock3 and Timp4 encode proteins that participate in inhibiting matrix metalloproteinases (MMPs) involved in the degradation of the extracellular matrix [52], and the other genes encode proteins that inhibit lipoteichoic acid-induced NF-κB, MAP kinase, and Akt activities [53] and decrease the invasion and metastasis of tumor cells in the brain [54]. Pcsk1n encodes an inhibitor of prohormone convertase 1 that regulates the proteolytic cleavage of neuroendocrine peptide precursors [55]. Cst3 can function as a protective factor in the AD brain, and its mechanisms of action include inhibition of cysteine proteases, induction of autophagy, induction of cell division, and inhibition of Aβ oligomerization and amyloid-fibril formation [56-59]. Cst3 is known to be enriched in adult astrocytes throughout the brain [60], but the role of Cst3 in astrocytes during aging has remained obscure. Inhibition of proteolysis has been shown to protect neurons against ischemia [61,62], which suggests that astrocytes might protect neurons by commonly upregulating the expression of the aforementioned glial-cell-derived endogenous protease inhibitors. Previous studies [63] have reported that when the blood-brain barrier is destroyed, peripheral leukocytes are able to transit the astrocytic tight-junction barrier in inflammatory lesions and enter the CNS, where they secrete serine proteases and MMPs that cleave astrocytic CLDN4.
Our results suggest that in normal aging, increased production of serine-protease inhibitors/MMP inhibitors and decreased production of MMPs might control neuroinflammation and prevent the invasion of peripheral pathogens. Although we minimized ex vivo activation during the isolation procedure, there may still be some unknown activation when sorting cells from the brain. Evidence has been presented supporting the notion that the inflammatory responses of both astrocytes and microglia peak during the beginning of symptomatology [64]. Therefore, we focused here on the genes that start to change at an early stage and show sustained alteration throughout aging. Unexpectedly, we found sustained upregulation or downregulation of several inflammation-related genes in microglia throughout life. Ccl2 is a key mediator of spinal microglial activation, and blocking spinal Ccl2 alleviates heat hyperalgesia and augments glutamatergic transmission in substantia gelatinosa neurons [65], reduces immunosuppression, and augments vaccine immunotherapy [66]. Ccl12 also plays a pivotal role during the early stages of allergic lung inflammation [67,68]. These findings suggest that some of the predisposing or inflammatory factors associated with diseases might be related to the consistently elevated expression of Ccl2/Ccl12 in microglia, and that blocking this might alleviate and minimize disease progression. Egr2, which has been proposed as a newly identified M2 (alternatively activated) marker for macrophages and is associated with the ability of these cells to respond to inflammatory stimuli [69,70], was included among the age-up microglial genes. Intriguingly, we found an increase in Nr1d2 (also known as REV-ERBβ), which acts as a nodal output of the circadian clock and thus links cellular circadian timers with innate-immune responses, thereby modulating the production and release of the proinflammatory cytokines Ccl2 and IL-6 [71-74]. Nr1d2 was increased 1.6-2-fold and Il6 was increased 2.2-3-fold among the age-up microglial genes, which suggests that Nr1d2 upregulation might stabilize the diurnal variation in Ccl2 and IL-6 levels and immune function caused by aging. Other age-up microglial genes involved in immunoregulatory and inflammatory processes included Zfp36 (anti-inflammatory signaling), Nfkbia (negative regulation of NF-κB transcription factor activity), H2-Q1 (MHC I protein-complex member), and Ccrl12. Taken together, these data support an active role for microglia in the inflammation response throughout aging. We further found that neuropsin (Klk8), an extracellular matrix serine protease that induces neurite outgrowth and regulates Schaffer-collateral long-term potentiation (LTP) [75,76], was also significantly increased throughout aging, which suggests that microglia could play a notable role in the establishment of LTP and synaptic plasticity. Over time, the involvement of altered genes in processes ranging from angiogenesis to subsequent innate-immune inflammatory responses indicates roles for microglia in neuroinflammatory responses during aging. We compared aging-microglia heterogeneity between mice and humans by comparing our mouse age-related microglial datasets with two previous human microglial aging profiles [77,78]. Limited overlap was observed in the microglial genes regulated during aging between mice and humans, indicating that human and mouse microglia age differently.
In addition to performing the analysis relative to the earliest time point (2 months), we also used STEM analysis to cluster gene sets that showed similar trends in a certain pattern. The GO analysis showed that different pathways were involved in the different categories, such as mitochondrion organization, cellular respiration, or mRNA metabolic process, indicating that different genes and signaling pathways may play certain roles at different stages. At the early stage (2 months), the number of differentially expressed genes between WT and AD mice was negligible, and it showed a gradual increase with age. When we compared the age-up related genes in WT/AD mice at the same time points, we found that most age-up genes were more highly expressed in AD mice as compared to WT mice. This suggests that aging-related genetic changes occur earlier in the AD process than in normal aging, and that these changes may also be involved in the development of AD. In the later stages of aging, the internal environment of the CNS shows increased complexity, and this might involve the actions of peripheral factors together with blood-brain barrier (BBB) dysfunction. Imaging analyses have revealed that the BBB is localized at the level of tight junctions between brain endothelial cells [79]. Aging and AD are both associated with diminished BBB integrity and an opening for T cell transendothelial migration into the CNS [80-82]. In the parenchyma, bidirectional crosstalk occurs between the infiltrating cells and the resident glial cells; activated microglia impair BBB function by releasing several inflammatory modulators and thus lead to hyperpermeability; and the resulting T cell infiltration, in turn, favors increased microglial activation by secreting proinflammatory cytokines or acting in a protective manner toward senescent microglia [3,17,20,83,84]. Notably, transient early depletion of regulatory T cells was shown to reduce the recruitment of microglia toward amyloid deposits and alter the disease-related gene-expression profile in the brain [85].
Rooting phylogenetic trees under the coalescent model using site pattern probabilities

Background: Phylogenetic tree inference is a fundamental tool to estimate ancestor-descendant relationships among different species. In phylogenetic studies, identification of the root - the most recent common ancestor of all sampled organisms - is essential for a complete understanding of the evolutionary relationships. Rooted trees benefit most downstream applications of phylogenies, such as species classification or the study of adaptation. Often, trees can be rooted by using outgroups, which are species that are known to be more distantly related to the sampled organisms than any other species in the phylogeny. However, outgroups are not always available in evolutionary research. Methods: In this study, we develop a new method for rooting species trees under the coalescent model by developing a series of hypothesis tests for rooting quartet phylogenies using site pattern probabilities. The power of this method is examined by simulation studies and by application to an empirical North American rattlesnake data set. Results: The method shows high accuracy across the simulation conditions considered, and performs well for the rattlesnake data. Thus, it provides a computationally efficient way to accurately root species-level phylogenies that incorporates the coalescent process. The method is robust to variation in the substitution model, but is sensitive to the assumption of a molecular clock. Conclusions: Our study establishes a computationally practical method for rooting species trees that is more efficient than traditional methods. The method will benefit numerous evolutionary studies that require rooting a phylogenetic tree without having to specify outgroups.

In many species tree inference approaches, gene trees are estimated first and are assumed known in the subsequent analysis [14-22]. However, such gene trees are often not fully informative, because they may be based on short sequences with few variable sites [23]. As a result, gene tree estimation errors may potentially become a severe issue in species tree inference. Some coalescent inference methods, such as ASTRAL, do not directly infer the root of the estimated species phylogeny [14,15]. Still other coalescent inference methods (MP-EST, NJst) require rooted gene trees as the input in order to estimate a rooted species tree [18,22]. However, ancestor (rooting) identification is essential for a complete understanding of the evolutionary relationships. Rooted trees benefit most downstream applications of phylogenies, such as species classification and comparative biology. In many cases, trees can be rooted using outgroups, which are known species that are more distantly related to the sampled organisms than any other species in the phylogeny. However, outgroups are not always available in evolutionary research. For instance, in numerous unresolved evolutionary questions such as animal evolution [24,25], placental mammal evolution [26-29], prokaryotic evolution [30,31], and even the beginnings of life [31,32], it is difficult to specify appropriate outgroups, because of issues such as long-branch attraction [33] and variation in the substitution process [34]. Thus, rooting methods in the absence of outgroups are often necessary for phylogenetic inference.
While other methods for rooting trees have been proposed (i.e., midpoint rooting, rooting with a molecular clock, as well as Bayesian versions of these [35]), each has its own drawbacks [36] and none were designed for use on species-level phylogenies that are subject to incomplete lineage sorting. For a recent review of rooting methods, see [37]. In our study, we develop a new method for rooting species trees under the coalescent model by developing a series of hypothesis tests for rooting quartet phylogenies using site pattern probabilities. More specifically, the site pattern probabilities of every four-taxon quartet are used to construct rooted species trees based on an unrooted species tree topology. Our study establishes a computationally practical method of rooting species trees in the absence of an outgroup. Since a rooted species tree will provide more information about evolutionary relationships, the new method will benefit numerous evolutionary studies that require rooting a phylogenetic tree without having to specify outgroups.

Methods
The coalescent process [1,10,38] is a retrospective model of population genetics that is commonly used to model incomplete lineage sorting (ILS). Based on tracing the evolutionary history of sampled genes by considering the time from the present back to their most recent common ancestor [39], the coalescent model is used as the basis for different methods to estimate species trees (e.g., [18,40-43]; reviewed in Edwards [44]). Under the coalescent model, our method uses relationships among the expected site pattern probabilities to root phylogenetic trees. We define a coalescent independent site as a column in a DNA alignment for which all nucleotides have evolved from a common ancestor according to some evolutionary process. Coalescent independent sites are assumed to freely recombine with one another.

Method for rooting phylogenetic trees by site pattern probabilities
In a four-taxon species tree, there are 4^4 = 256 possible site patterns. Let p_{i_A i_B i_C i_D} (i_a ∈ {A, C, G, T}, a = A, B, C, D) represent the probability of site pattern i_A i_B i_C i_D, where i_a refers to the nucleotide at tip a of the four-taxon species tree. Under the molecular clock assumption, any site pattern probability of a rooted four-taxon species tree can be classified into one of 15 categories: p_xxxx, p_xxxy, p_xxyx, p_xyxx, p_yxxx, p_xyxy, p_xyyx, p_xxyy, p_xyxz, p_xyzx, p_yxxz, p_yxzx, p_xxyz, p_yzxx, and p_xyzw, where w, x, y, and z denote different nucleotide states. To explore the rooting position for an unrooted four-taxon tree, which can then be used to infer the root position on a larger phylogenetic tree, we develop a series of hypothesis tests based on expected site pattern probabilities (Table 1). These hypothesis tests are derived from the equivalence of site pattern probabilities in a four-taxon phylogenetic tree. For instance, if the rooting position is 1 (Fig. 1), it is clear that species C and D have equal probabilities of mutating under the molecular clock assumption; therefore, p_xxxy = p_xxyx. On the other hand, species A can be considered as an outgroup within these four species, and the site pattern yxxx is more likely than xyxx, so it is easy to see that p_yxxx > p_xyxx. Similarly, we can write expected relationships for the other four root positions (Table 1, Fig. 1).
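To make the category bookkeeping concrete, the following Python sketch maps any 4-taxon alignment column to one of the 15 categories above by labeling blocks of equal nucleotides x, y, z, w in order of decreasing block size and then first appearance (so CAAA maps to yxxx and AACT maps to xxyz). This is an illustrative reimplementation of the naming convention above, not the authors' code.

# Sketch: classify a 4-taxon alignment column into one of the 15 site
# pattern categories. Running it over all 4^4 = 256 columns recovers
# exactly 15 categories.

from itertools import product
from collections import Counter

def pattern_category(column):
    """column: 4 nucleotides, e.g. ('C', 'A', 'A', 'A') -> 'yxxx'."""
    blocks = {}
    for pos, nuc in enumerate(column):
        blocks.setdefault(nuc, []).append(pos)
    # Largest block first (ties broken by first position), labeled x, y, z, w.
    ordered = sorted(blocks.values(), key=lambda b: (-len(b), b[0]))
    labels = {}
    for letter, block in zip("xyzw", ordered):
        for pos in block:
            labels[pos] = letter
    return "".join(labels[pos] for pos in range(4))

counts = Counter(pattern_category(col) for col in product("ACGT", repeat=4))
print(len(counts), sorted(counts))  # 15 categories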
Note that p_xyxz = p_xyzx and p_yxxz = p_yxzx could also be used, but our preliminary results suggested that the values of p_xyxx, p_yxxx, p_xxxy, and p_xxyx are larger than p_xyxz, p_xyzx, p_yxxz, and p_yxzx, thereby giving better performance when estimated from empirical data. Note that the analytical derivation of the site pattern probabilities arising from the coalescent model under the JC69 model is given by Chifman and Kubatko [45]. It is not surprising that under the JC69 model, many site pattern probabilities are identical due to the assumption of equal base frequencies and identical nucleotide substitution rates. Indeed, site pattern probabilities within each category described above are identical under the JC69 model. Therefore, based on the precise formulas for the site pattern probabilities derived by Chifman and Kubatko [45], the relationships in Table 1 can be mathematically proved under the JC69 model. An analytical proof is not given for other nucleotide substitution models, due to the increased computational complexity caused by unequal base frequencies and varying nucleotide substitution rates. However, with the clock assumption, it is still reasonable to apply the method under other nucleotide substitution models, because the probabilities of having specific classes of mutations (for example, a change from A to C) are identical for sister species and are always proportional to branch length under any nucleotide substitution model. The performance of our rooting method under varying nucleotide substitution models is tested using simulation studies.

Formal hypothesis tests
To determine the root position, we first set up two distinct hypothesis tests: Test 1 compares the null hypothesis p_yxxx = p_xyxx against the alternative that these probabilities differ, and Test 2 compares the null hypothesis p_xxyx = p_xxxy against the alternative that these probabilities differ. Note that there are 12 possible site patterns within each of the categories yxxx, xyxx, xxyx, and xxxy. For example, the site patterns ACCC, GCCC, and AGGG (and 9 others) all have the form yxxx. Thus, rather than consider all 256 of the possible site patterns, we consider five categories of site patterns: yxxx, xyxx, xxyx, xxxy, and "other", where the category "other" refers to the remaining 208 site patterns that do not satisfy one of the first four forms. Let X = [X_1, X_2, X_3, X_4, X_5] denote the vector of counts for each of these five categories, and q = [q_1, q_2, q_3, q_4, q_5] denote the vector of category probabilities. Then X ~ Multinomial(M, q), where M is the number of coalescent independent sites. Under the assumption of a multinomial distribution, we can compute the mean and variance of each count and the covariance between them: E(X_s) = M q_s and Var(X_s) = M q_s (1 - q_s) for s = 1, ..., 5, and cov(X_s, X_t) = -M q_s q_t for s ≠ t. The q_i are defined above to be the probability of observing a site pattern from category i, i = 1, ..., 5. We estimate this probability by the frequency observed in the data; for example, the estimate of q_1 sums the counts N_jiii over all patterns of the form jiii (j ≠ i), where N_jiii denotes the number of times site pattern jiii occurs in the observed data, and divides by M, so that each q_s is estimated by X_s / M. Substituting the estimated site pattern probabilities into the moment expressions above, we can compute test statistics for both hypothesis tests by standardizing the difference in counts: Z_1 = (X_1 - X_2) / sqrt(M(q_1 + q_2 - (q_1 - q_2)^2)), with each q_s replaced by its estimate X_s / M, and Z_2 defined analogously from X_3 and X_4. Under the null hypothesis in Test 1 that p_yxxx = p_xyxx, Z_1 is approximately N(0, 1) when M is large, and likewise Z_2 is approximately N(0, 1) under the null hypothesis in Test 2 that p_xxyx = p_xxxy. Therefore, our rooting method can be applied by checking the test results and the values of Z_1 and Z_2.
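A minimal sketch of the two tests follows. Because the original numbered equations are not reproduced above, the variance formula here (Var(X_s - X_t) = M(q_s + q_t - (q_s - q_t)^2)) is our reconstruction from the stated multinomial moments rather than a transcription of the paper's equations, and the counts are hypothetical.

# Sketch of the two tests: standardize the difference in category counts
# using the multinomial moments given above, with q estimated by X / M.
# This is a reconstruction under stated assumptions, not the authors' code.

from math import sqrt

def z_statistic(x_s, x_t, m):
    q_s, q_t = x_s / m, x_t / m
    var = m * (q_s + q_t - (q_s - q_t) ** 2)  # Var(X_s - X_t), q estimated
    return (x_s - x_t) / sqrt(var)

def category_counts(alignment):
    """alignment: iterable of 4-tuples, one per coalescent independent site.
    Returns counts [yxxx, xyxx, xxyx, xxxy, other]."""
    counts = [0, 0, 0, 0, 0]
    for a, b, c, d in alignment:
        if b == c == d and a != b:
            counts[0] += 1      # yxxx
        elif a == c == d and b != a:
            counts[1] += 1      # xyxx
        elif a == b == d and c != a:
            counts[2] += 1      # xxyx
        elif a == b == c and d != a:
            counts[3] += 1      # xxxy
        else:
            counts[4] += 1      # other (including constant sites)
    return counts

x = [480, 350, 410, 395, 98365]   # hypothetical counts, M = 100,000
m = sum(x)
z1 = z_statistic(x[0], x[1], m)   # Test 1: p_yxxx vs p_xyxx
z2 = z_statistic(x[2], x[3], m)   # Test 2: p_xxyx vs p_xxxy
print(z1, z2)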
More specifically, for example, if we reject Test 1, accept Test 2, and Z_1 > 0, we can conclude that the root position is 1. Similarly, the other test results and their conclusions are summarized in Table 2. Note that significance levels for the two tests, α_1 and α_2, must be selected. In our study, we choose the significance levels α_1 = α_2 = 0.025. The significance levels can be adjusted for different studies. The performance of the rooting method is evaluated by simulation studies, as described below.

Simulation studies
Three sets of simulation studies were used to examine the performance of our method to root the species quartets. All simulation studies include DNA sequence data simulated from four-taxon species trees. More specifically, different numbers of gene trees are generated from the species trees with COAL [13], and then coalescent independent sites or multi-locus DNA sequences are simulated by using Seq-Gen [46]. The simulation process is repeated 500 times to generate 500 independent data sets, the rooting method is applied to each data set, and the power (the proportion of the 500 data sets for which the correct conclusion is made) for each simulation setting is recorded. The first set of simulation studies is designed to assess the performance of our method for coalescent independent sites when the molecular clock holds. Two groups of species trees with "long" and "short" branch lengths are used to simulate the data. Each group contains two species trees that have the same unrooted topology, but different rooting positions. Note that though there are five rooting positions for a 4-taxon species tree, four of them lead to asymmetric rooted trees (positions 1-4 in Fig. 1), and the rooting method is identical for them. Thus, only positions 1 and 5 are used in our simulation studies. For the "long branch lengths" group, the two species trees used are those shown with solid lines in Fig. 1. The species trees in the "short branch lengths" group have the same topologies as in the "long branch lengths" group, but all branch lengths are scaled by 0.5 (all branch lengths in our study are measured in coalescent units). A varying number of gene trees (5000, 10,000, 20,000, 100,000) are simulated from each species tree. To convert between coalescent units and mutation units, a value of θ = 4N_e μ = 0.05 is used to scale the branch lengths of the simulated gene trees. The gene trees are then used to simulate coalescent independent sites (one site for each gene tree) with the program Seq-Gen [46] under the JC69, HKY85 (Seq-Gen command: -mHKY ...), and GTR+I+Γ models. For each setting, 500 replications are simulated to estimate the root position, and the proportion for which the correct conclusion is reached is recorded as the power of the study. The second set of simulation studies focuses on multi-locus DNA sequence data instead of coalescent independent sites. In the first set of simulation studies, we simulate a number of gene trees, and only one site is simulated under each gene tree as a coalescent independent site. However, we also wish to explore the performance of our method for multi-locus data. These simulation studies have similar parameter settings to the first set of simulations, but instead of a single site, a DNA sequence of 500 base pairs is simulated from each gene tree using Seq-Gen [46]. The number of gene trees is adjusted to (50, 100, 200, 1000) to keep the total number of sites identical to that used in the first set of simulations.
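The power calculation just described can be mimicked with a small Monte Carlo loop: draw category counts from a multinomial and record how often the decision rule (reject Test 1 with Z_1 > 0, accept Test 2) selects root position 1. The category probabilities below are made up for illustration, not derived from a species tree under the coalescent.

# Monte Carlo sketch of the power computation described above.
# The probabilities q are hypothetical illustrations only.

import numpy as np

def z_stat(a, b, n):
    qa, qb = a / n, b / n
    return (a - b) / np.sqrt(n * (qa + qb - (qa - qb) ** 2))

rng = np.random.default_rng(1)
q = [0.006, 0.004, 0.004, 0.004, 0.982]  # yxxx, xyxx, xxyx, xxxy, other
m, reps, crit = 100_000, 500, 2.24       # |z| cutoff for alpha = 0.025 per test

hits = 0
for _ in range(reps):
    x = rng.multinomial(m, q)
    z1 = z_stat(x[0], x[1], m)
    z2 = z_stat(x[2], x[3], m)
    if abs(z1) > crit and abs(z2) <= crit and z1 > 0:
        hits += 1                        # concluded root position 1

print(f"estimated power: {hits / reps:.3f}")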
The third set of simulation studies is designed to assess the robustness of the procedure when the assumptions are violated. First, we consider the case in which the molecular clock assumption is violated, for coalescent independent sites and the "long" species tree setting. We wrote custom Python scripts to simulate gene trees from both the symmetric and asymmetric species trees for which the branch leading to taxon A is extended, and for which the branch leading to taxon C is extended in the asymmetric case. We consider varying the length of the branches leading to either taxon A or taxon C from their original values of 1.0 in the first set of simulation studies to the values 1.1, 1.2, 1.3, 1.4, or 1.5. After simulating gene trees from these non-clock species trees, the procedure was identical to that above. Specifically, we simulate sequence data under the coalescent independent sites and JC69 models using Seq-Gen [46] and record how many times the correct tree is inferred. Second, we consider the case in which the true tree is a star phylogeny (i.e., there is no root to be identified), and record whether the method prefers a particular root in this case. Intuitively, we might expect the method to prefer the symmetric rooting along branch 5, since the two null hypotheses specified by Tests 1 and 2 in the "Formal hypothesis tests" section will be satisfied for the star phylogeny with the symmetric rooting position when the molecular clock holds.

Application to larger species trees
To examine the performance of our rooting method for larger taxon samples, we assume that the unrooted tree has been previously estimated. In our example, we estimate the species tree using SVDQuartets, a full-data coalescent-based method based on site pattern probabilities, and we label each branch with a particular code (Fig. 2a). Our method works by randomly selecting a subset of four species from the n species under study and determining the root position, as shown in Fig. 1. This is repeated many times, for many randomly selected quartets. If the number of taxa is not too large, all quartets can be considered; otherwise, a random sample can be taken. Note that there are multiple correlated hypothesis tests for a species tree with more than 4 taxa. To handle the issue of multiple tests, we use the Bonferroni correction: when an overall α-level test for an n-taxon species tree is desired, we use α / C(n, 4) as the significance level in the individual tests, when all quartets are sampled. To determine the root of a given species tree with more than 4 taxa after the selected quartets have been evaluated, we develop a method to combine the results from the individual quartet tests. This method assigns a weighted score to each branch based on the results of the analysis of the individual quartets, as sketched below. Suppose a particular species quartet is composed of five branches (Fig. 2a, b), where any branch contains one or more coded branches as shown in Fig. 2a. Denote the numbers of coded branches within the five branches as n_1, n_2, n_3, n_4, and n_5, respectively. Once a branch n_i (i = 1, 2, 3, 4, 5) is determined to contain the root, any coded branch within the determined branch receives score 1/n_i, while the other branches receive score 0. Two examples are shown in Fig. 2b and c. The branch with the highest summed score over all quartets evaluated is selected as the location of the root.
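A compact Python sketch of this scoring rule, together with the Bonferroni level α / C(n, 4) mentioned above, follows; the branch codes and quartet outcomes are hypothetical placeholders, not results from the paper.

# Per-quartet significance level for an overall alpha-level test on an
# n-taxon tree (Bonferroni over all C(n, 4) quartets).

from collections import defaultdict
from math import comb

def per_quartet_alpha(n_taxa, overall_alpha=0.05):
    return overall_alpha / comb(n_taxa, 4)

print(per_quartet_alpha(8))  # 0.05 / 70 for an 8-taxon tree

# Weighted scoring: when a quartet places the root on a composite branch
# made of n_i coded branches, each coded branch receives score 1/n_i.
quartet_results = [
    ["a"],        # quartet 1: root on the single coded branch a
    ["b", "g"],   # quartet 2: root on the composite branch b + g
    ["a"],        # quartet 3
    ["c", "l"],   # quartet 4
]

scores = defaultdict(float)
for root_branches in quartet_results:
    for coded in root_branches:
        scores[coded] += 1.0 / len(root_branches)

root = max(scores, key=scores.get)
print(dict(scores), "-> inferred root on branch", root)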
Accuracy of the method for rooting phylogenetic trees
The power of the rooting method in the three simulation studies is shown in Figs. 3 and 4; in each case, the proportion of the 500 simulated data sets for which the correct rooting position is selected is summarized. The panels in the first column of Fig. 3 (panels (a) and (c)) represent the power for detecting the correct root positions for the simulation studies with coalescent independent sites. The panels in the second column (panels (b) and (d)) show the power for rooting phylogenetic trees in the second simulation set, where multi-locus DNA sequence data are simulated. Clearly, the simulation conditions that strictly follow the assumptions of the rooting method (free recombination and a constant evolutionary rate) have very high power. When the assumption of free recombination is violated (e.g., for the multi-locus DNA sequence data in column 2), the tests have a slightly lower accuracy when the number of sites is small. Overall, it is safe to conclude that the new rooting method has high accuracy for rooting a four-leaf unrooted species tree. Notably, when the sample size is increased to about 10,000 bp, the accuracy is over 90% even for multi-locus DNA sequence data. In our simulation studies, DNA sequence data are simulated under three different nucleotide substitution models: JC69, HKY85, and GTR+I+Γ (labeled in black, red, and green in Fig. 3). Though the hypothesis tests for the rooting method are derived from the JC69 model, as described in the Methods section, the results of the simulation studies suggest that the rooting method can be applied to more general nucleotide substitution models. Over all of the conditions we tested, there was no systematic difference between the results for the JC69 model and for the other two models. Furthermore, the performance of our method in rooting the phylogenetic trees depends primarily on the sample size. More specifically, species trees with more coalescent independent sites or longer DNA sequences sampled can be rooted more accurately. As shown in Fig. 3, the solid lines denote the results for the species trees with "longer" branch lengths, and the dashed lines show the results for the species trees with "shorter" branch lengths. In general, the power for species trees with "longer" branch lengths is slightly higher, especially when the sample size is small (around 5000 bp). Thus, including more coalescent independent sites improves the accuracy of the test.

Fig. 2 Example of scoring potential root positions on larger phylogenies. a A six-leaf unrooted species tree. The branches are coded from a to i, and the internal branches are highlighted in red; b An example species quartet ABCF. If the root is determined to be on branch a, f, or h, the corresponding branch on the tree in a will get score 1. If the root is determined to be on branch b + g (or c + l), branches b and g (or c and l) in a will each get score 0.5. c An example species quartet AECD. If the root is determined to be on branch a, c, or d, the corresponding branch in a will get score 1. Otherwise, if the root is determined to be on branch e + g (or h + l), branches e and g (or h and l) will each get score 0.5.

Fig. 1 a, b: Root position at 1; c, d: Root position at 5. Solid lines in each panel represent the species trees in the "Long branch lengths" group, while the species trees in the "Short branch lengths" group are denoted by dashed lines. In the simulation studies, DNA sequence data are simulated under the JC69 (black), HKY85 (red), and GTR+I+Γ (green) models, respectively.
Based on our simulations, around 10,000 bp for both long and short branch lengths are sufficient to ensure 95% accuracy when the data consist of coalescent independent sites, and 90% accuracy when multi-locus DNA data are used. Notably, the ability to identify the root of symmetric species trees does not depend on the sample size (Fig. 3c and d), since the accuracy of identifying the root of symmetric species trees relates only to the significance levels that we selected for the hypothesis tests. The effects of sample size are not surprising, since the site pattern probabilities are estimated more accurately with more coalescent independent sites or longer DNA sequences, which is helpful in estimating the evolutionary relationships. The results of the simulation studies for which the molecular clock assumption is violated are shown in Fig. 4a-c. Figure 4a and b show the power to detect the root for the asymmetric tree when the branch leading to taxon A or that leading to taxon C is extended, respectively, while Fig. 4c shows the power when the symmetric tree is assumed and the branch leading to taxon A is extended. We can see that the power decreases as the amount of deviation from the molecular clock increases. It is also clear that the power decreases with increasing sample size, a result which at first seems counterintuitive. We discuss this further in the "Discussion" section. Finally, Fig. 4d gives the results of applying the rooting method to a star phylogeny (i.e., a phylogeny for which there is no root). In this case, we might expect the method to identify branch 5 as the root, since the star tree will satisfy the two relationships that the symmetric tree induces and on which our hypothesis tests are based. Figure 4d indicates that the procedure does indeed select the symmetric rooting about 95% of the time when a 5% significance level is used.

Fig. 4 Accuracy of the rooting method when the molecular clock assumption is violated. In each panel, the x-axis denotes the data size (number of coalescent independent sites in kb), and the y-axis shows the proportion of the data sets for which the correct rooting position is selected in a total of 500 simulations. a Asymmetric species tree with root position 1 for which the branch leading to taxon A has been extended; b Asymmetric species tree with root position 1 for which the branch leading to taxon C has been extended; c Symmetric species tree with root position 5 for which the branch leading to taxon A has been extended; d Proportion of times root position 5 is selected for the star phylogeny. All simulations used the JC69 model, since the first simulation study did not indicate systematic differences in performance based on varying the model.

Application to an eight-taxon North American rattlesnake data set
The simulation studies above show good accuracy and efficiency of the rooting method in identifying the root of a four-taxon species quartet. The next step is to examine the performance of our method on a larger empirical data set. We choose as a test case a data set of North American rattlesnakes that consists of samples from three subspecies of Sistrurus catenatus (S. c. catenatus, S. c. edwardsii, and S. c. tergeminus), three subspecies of Sistrurus miliarius (S. m. miliarius, S. m. barbouri, and S. m. streckeri), and two outgroups (Agkistrodon contortrix and Agkistrodon piscivorus). This is a multi-locus DNA data set with 19 genes and a total of 8466 base pairs.
One individual is selected from each taxon to estimate the species tree and the root position. The estimated species tree is shown in Fig. 5a, which is consistent with earlier analyses of Kubatko et al. [47] and Chifman and Kubatko [48]. With the two known outgroups, A. contortrix and A. piscivorus, the putative root position is labeled with a red line in Fig. 5a. When the outgroups are treated as unknown, the unrooted 8-taxon species tree estimated by SVDQuartets is shown in Fig. 5b, with each branch labeled from 1 to 13. We explore the root position based on our method, and the scores described in the Methods section are recorded for each branch (Table 3, "8-taxon"). We also removed the two outgroup species and tested our method with the remaining six taxa (Fig. 5c), and we record the scores in Table 3 ("6-taxon"). Note that the branches of the six-taxon species tree are given the same labels as in the eight-taxon species tree (Fig. 5b); thus, branches 1, 2, and 9 no longer exist in Fig. 5c. From the scores of each branch (Table 3), it is easy to see that branch 9 should be selected as the root position for the eight-taxon species tree, which is consistent with previous analyses (Fig. 5a). When the outgroups are removed from the analysis, our method can still accurately determine the root position on branches 10 and 12 (Table 3, "6-taxon"). Note that every single test of the 70 species quartets in the eight-taxon species tree correctly determined the root position, indicating an extremely high power for our method.

Fig. 5 (partial legend) The inferred root position is labeled by a red line. b The unrooted 8-taxon species tree, with each branch labeled from 1 to 13. The root position indicated by our method is labeled in red. c The unrooted 6-taxon species tree (with outgroups removed), with each branch labeled as in b. The rooting position indicated by our method is labeled in red.

Discussion
In this study, we develop a new method for rooting species-level phylogenies using site pattern probabilities. More specifically, our method roots quartet species trees under the coalescent model, and then applies the results of the rooted quartets to infer the root location in larger species trees. The accuracy of this method is examined by simulation studies and by application to an empirical North American rattlesnake data set. Notably, our method for rooting phylogenetic trees does not require specification of an outgroup, which makes it useful under very general conditions.

Rooting phylogenetic trees under different nucleotide substitution models
For a given species tree, the probability distribution of all possible site patterns can be computed for different nucleotide substitution models (e.g., JC69, HKY85, GTR+I+Γ, etc.). Specifically, for the simplest model, JC69, the identical base frequencies and the constant nucleotide substitution rate produce identical site pattern probabilities in many cases. For instance, given a four-taxon tree, there are only 15 unique site pattern probabilities under the JC69 model [45,48]. That is to say, the site patterns that fall into the same category have identical probabilities, and thus it is straightforward to use the mean of the site pattern probabilities within the same category to compute the test statistics we propose here. More complex nucleotide substitution models, such as the HKY85 and GTR+I+Γ models, can be specified by setting different rates for nucleotide changes.
More complex nucleotide substitution models, such as the HKY85 and GTR+I+Γ models, can be specified by setting different rates for nucleotide changes. For example, HKY85 allows base frequencies to be unequal and considers one transition rate (substitutions between the two purines, A and G, or between the two pyrimidines, C and T) and one transversion rate (substitutions between a purine and a pyrimidine), while the GTR model also allows unequal base frequencies but defines a symmetric, parameter-rich substitution matrix. Under these complex nucleotide substitution models, there will be a larger number of distinct site pattern probabilities, and computing the probability of any site pattern will be more complex than under the JC69 model. Indeed, the site pattern probability under the coalescent cannot be expressed analytically for the GTR+I+Γ model, for example. However, the SVDQuartets method that is based on site pattern probabilities can still be applied to estimate a phylogenetic tree under models like HKY85 and GTR+I+Γ [45,48], and it is not difficult to show that our rooting method can be applied to phylogenetic data under these complex nucleotide substitution models as well. Although there are no explicit formulas and the site pattern probabilities may not be identical within the 15 categories described here, the relationship between site pattern categories yxxx and xyxx, and between categories xxyx and xxxy, for example, will still hold. What changes is that the probabilities of patterns ACAA and ATAA, for example, may differ from one another under more complex models, even though they will still match CAAA and TAAA, respectively, when the clock holds. We have simulated sequence data under both the HKY85 and GTR+I+Γ models in our simulation studies to verify that our method still applies under these complex nucleotide substitution models. Our results (Fig. 3) indicate that the method works equally well under the three different nucleotide substitution models, regardless of the equality of base frequencies and substitution rates between bases.

Rooting phylogenetic trees using multi-locus DNA sequence data

Note that our rooting method assumes free recombination among the sites. In other words, it is designed for coalescent independent sites. However, previous simulation studies and real-data analyses have also indicated good performance of SVDQuartets in analyzing multi-locus DNA sequence data. Also, SVDQuartets is suitable for the case of variable substitution rates across sites (i.e., substitution rates drawn from an arbitrary Gamma distribution) [49,50]. The conclusion is similar for the rooting method presented here. As shown in Fig. 3, the method is highly accurate in identifying the root positions when varying substitution rates are drawn from an arbitrary Gamma distribution. Furthermore, the simulation studies that simulate multi-locus DNA sequence data also show good performance. This is quite reasonable, because under the coalescent model the distribution of expected gene trees across loci for multi-locus DNA sequence data should be consistent with that obtained for independent sites, and thus the site pattern frequency distributions should be close to one another when each gene has a similar size. From Fig. 3, when there are more than 100 genes (10,000 bp in total), multi-locus DNA sequence data can safely be used to estimate the rooted species tree directly from the site pattern probabilities.

The molecular clock assumption

The method performs poorly when the molecular clock assumption is violated, as our test statistics are very sensitive to this assumption.
Any deviation of the site pattern frequencies due to differing branch lengths is interpreted as evidence against a particular root location, and thus the tests become more likely to reject the correct root location as the sample size increases. Thus, we do not recommend that the method be applied when the assumption of a molecular clock is not reasonable. Though this limits the applicability of the method, we note that other rooting methods designed for gene trees (e.g., midpoint rooting and molecular clock rooting; see [35]) are also sensitive to this assumption. Because our method is the only one designed to accommodate the coalescent process, it contributes to the collection of methods available for rooting phylogenetic trees. It is an open question whether test statistics that are not sensitive to the molecular clock assumption could be developed based on site pattern frequencies; we feel that this approach is promising.

Control of familywise error rate

Controlling the familywise error rate appropriately when performing multiple hypothesis tests is a well-studied topic. In our method, we consider two hypothesis tests at the same time. To ensure a 95% confidence level, we choose to control the total Type I error at level 0.05. Using the Bonferroni correction [51], the significance level for each test is selected to be 0.025 in all of our simulation studies. Based on the hypothesis tests, when neither test is rejected, we infer the symmetric species tree. Thus, the probability that a symmetric tree is inferred when the tree is indeed symmetric should exceed 95%, since the Bonferroni correction is conservative when the tests are not independent, as is the case here. Figure 3c and d shows the results of correctly identifying the symmetric species tree. With coalescent independent sites, the power of the tests is right around 95% on average, while for multi-locus DNA sequence data the power of the tests is slightly lower than 95%, with larger variance. This can be explained by the violation of free recombination for multi-locus DNA sequence data. When the nucleotides are not independent of each other, it is reasonable to observe a larger variance and a slightly lower power. In general, even with multi-locus DNA sequence data, the power of our rooting method still exceeds 90%, indicating that this rooting method is an accurate and efficient way to locate the root position in a species tree. Setting the significance level at 0.025 for both tests gives very good performance in all of our simulation studies. However, choosing different significance levels is also possible. In fact, we recommend that users select larger significance levels for small sample sizes, and smaller significance levels for very large data sets. The relationship between margin of error and sample size is well-studied [52,53]. Generally, larger sample sizes lead to lower p-values [54,55], thus requiring a smaller significance level. Additionally, the significance levels of the two hypothesis tests are not required to be identical. As long as the significance levels of the two tests sum to no more than 0.05, the overall error rate will be controlled at 5%. Thus, in general, differing significance levels can be picked for each test, depending on the relative importance for the application of interest.
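To illustrate how the error budget is split, the following sketch (ours; the two p-values are assumed to come from the paper's two hypothesis tests, and the decision mapping is schematic) applies the Bonferroni rule with possibly unequal shares that sum to the total Type I error level.

def root_decision(p1, p2, alpha_total=0.05, share1=0.5):
    # p1, p2: p-values of the two tests of the relationships induced
    # by the symmetric tree. Each test receives a share of the total
    # Type I error budget; the shares need only sum to alpha_total.
    alpha1 = alpha_total * share1
    alpha2 = alpha_total * (1.0 - share1)
    rejected = [p1 < alpha1, p2 < alpha2]
    if not any(rejected):
        # Neither induced relationship is rejected: symmetric rooting.
        return "symmetric species tree"
    # Otherwise, the rejected relationship(s) point to one of the
    # asymmetric rootings (the exact mapping follows the Methods).
    return "asymmetric rooting (test %s rejected)" % (
        ", ".join(str(i + 1) for i, r in enumerate(rejected) if r))

print(root_decision(0.40, 0.31))        # -> symmetric species tree
print(root_decision(0.004, 0.31))       # -> asymmetric rooting (test 1 rejected)

With share1 = 0.5 this reproduces the 0.025/0.025 split used in the simulations; a 0.04/0.01 split, say, would also control the familywise error at 5%.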
Conclusion

We have described a novel method for rooting phylogenetic species trees under the coalescent model. Our method works by rooting quartet trees, and then using these rooted quartet trees to infer the root location on a larger phylogeny. The method performs well for both simulated and empirical data when the molecular clock assumption holds, but simulation studies show it to be sensitive to violations of this assumption. Because the method is based on the frequencies of observed site patterns, it is computationally efficient and thus provides a useful rooting method for species trees in the absence of outgroup information.

Abbreviations
ILS: Incomplete lineage sorting
8,135.4
2017-12-01T00:00:00.000
[ "Biology", "Computer Science" ]
Evidence towards Improved Estimation of Respiratory Muscle Effort from Diaphragm Mechanomyographic Signals with Cardiac Vibration Interference Using Sample Entropy with Fixed Tolerance Values

The analysis of amplitude parameters of the diaphragm mechanomyographic (MMGdi) signal is a non-invasive technique to assess respiratory muscle effort and to detect and quantify the severity of respiratory muscle weakness. The amplitude of the MMGdi signal is usually evaluated using the average rectified value or the root mean square of the signal. However, these estimates are greatly affected by the presence of cardiac vibration or mechanocardiographic (MCG) noise. In this study, we present a method for improving the estimation of respiratory muscle effort from MMGdi signals that is robust to the presence of MCG. This method is based on the calculation of the sample entropy using fixed tolerance values (fSampEn), that is, with tolerance values that are not normalized by the local standard deviation of the window analyzed. The behavior of the fSampEn parameter was tested in synthesized mechanomyographic signals with different ratios between the amplitudes of the MCG and clean mechanomyographic components. As an example of the application of this technique, the use of fSampEn was also explored in recorded MMGdi signals with different inspiratory loads. The results with both synthetic and recorded signals indicate that the entropy parameter is less affected by MCG noise, especially at low signal-to-noise ratios. Therefore, we believe that the proposed fSampEn parameter could improve estimates of respiratory muscle effort from MMGdi signals in the presence of MCG interference.

Introduction

Mechanomyographic (MMG) signals are used to record and evaluate the mechanical activity of the skeletal muscles during contraction. These signals represent a non-invasive technique for measuring the low-frequency lateral oscillations of muscle fibers during contraction. Furthermore, it has been found that in striated muscle there is a positive correlation between amplitude parameters of the MMG signal and the force produced by the muscle [1], [2], [3], [4]. Like other skeletal muscles, the diaphragm vibrates laterally during contraction. These muscle vibrations can be recorded using microphones, piezoelectric sensors or accelerometers placed over the lower chest wall in the zone of apposition of the diaphragm to the rib cage [5]: the diaphragm MMG (MMGdi) signal. The main frequency content of this signal lies between 5 and 25 Hz [6], [7]. During the recording of MMGdi signals, several potential sources of contamination in addition to environmental noise must be eliminated or controlled; cardiac vibrations, detected in seismocardiograms or mechanocardiograms (MCGs), typically cause the most interference. MCGs have a deterministic and repetitive pattern, and contain clearly defined points associated with the cardiac cycle [8], [9], [10]. The MCG signal can be detected in both hemidiaphragms, being stronger on the left side [11], and its frequency content is below 20 Hz [12], [13]. Therefore, there is an overlap between the frequency content of the MMGdi and MCG signals, and hence the potential for interference. Clinically, the measurement of respiratory muscle strength is valuable to detect muscle weakness and to quantify its severity. The strength of these muscles is commonly assessed by measuring maximal inspiratory mouth pressure (IP), but values obtained in this way could be underestimated [14].
Analysis of MMGdi amplitude is a useful alternative approach for assessing respiratory muscle strength [6], [7]. Sample entropy (SampEn), developed by Richman and Moorman [15], is widely used to estimate complexity and regularity in biomedical signals, having been found to be useful for the analysis of this type of signal in many fields [15], [16], [17], [18], [19], [20]. SampEn is an improved measure of regularity that overcomes the inherent bias observed in approximate entropy [21] because of the self-matching of vectors. Specifically, SampEn does not count self-matches and thereby removes the bias, and it is more robust to noisy and short data series than approximate entropy. The amplitude of the MMGdi signal is usually estimated by the average rectified value (ARV) or the root mean square (RMS). These amplitude estimators are, however, affected by various types of noise, such as motion artifacts due to breathing, impulsive noise, spurious spikes, and MCG interference, among others. In [6], [7], and [22], it was observed that traditional complexity parameters calculated using a fixed quantization interval and over a moving window are more closely related to amplitude variations than to complexity variations of the signal. In particular, the multistate Lempel-Ziv index [6] and approximate entropy [7] of MMGdi signals provided a better measure of respiratory effort (i.e., respiratory muscle strength) than traditional amplitude parameters such as the ARV and RMS. On the other hand, it was observed that the multistate Lempel-Ziv index was less affected by impulsive noise [6] and SampEn was less affected by spurious spikes [22]. The objective of this study was to overcome the influence of MCG interference and obtain an accurate amplitude estimation of MMGdi signals by applying the SampEn method over a moving window and with fixed tolerance values (fSampEn). These tolerance values are in the range of 0.1-1 times the global standard deviation of the original signal, and they do not depend on the standard deviation of each moving window used for the calculation. In this paper, we describe the behavior of fSampEn with simulated MMGdi signals with different signal-to-noise ratio (SNR) distributions. Furthermore, we also apply this technique to recorded MMGdi signals with different inspiratory loads. We also assess the feasibility of distinguishing respiratory cycles using the fSampEn method compared to the ARV and RMS parameters. Finally, we evaluate the robustness of these amplitude estimators in the presence of MCG interference and their relationship with respiratory muscle strength.

Sample entropy

SampEn is a measure that depends on the conditional probability that two sequences which are similar for m samples (where m is a positive integer) remain similar, within a tolerance r, at the next sample, m+1. A data sequence with many repetitive patterns (i.e., that is predictable or relatively regular) has a small value of SampEn, while one with few repetitive patterns (i.e., that is less predictable or more irregular) has a larger value of SampEn. Given a signal x(n) = x(1), x(2), …, x(N) of length N, and with r and m defined, SampEn(m, r, N) is calculated as follows [15]:

1. Form the m-vector sequences $X_m(1), \ldots, X_m(N-m+1)$, defined by $X_m(i) = [x(i), x(i+1), \ldots, x(i+m-1)]$, where $1 \le i \le N-m+1$. These vectors represent m consecutive values of x(n).
2. Define the distance between $X_m(i)$ and $X_m(j)$ as the maximum absolute difference between their respective scalar components: $d[X_m(i), X_m(j)] = \max_{0 \le k \le m-1} |x(i+k) - x(j+k)|$.
3. Define $B_i$ for each $X_m(i)$ as the number of $j$ ($1 \le j \le N-m$, $j \ne i$) such that $d[X_m(i), X_m(j)] \le r$, and then define $B_i^m(r) = B_i/(N-m-1)$ and $B^m(r) = \frac{1}{N-m}\sum_{i=1}^{N-m} B_i^m(r)$.
4. Increase the dimension to m+1, and define $A_i^m(r)$ and $A^m(r)$ analogously, counting for each $X_{m+1}(i)$ the vectors $X_{m+1}(j)$ such that $d[X_{m+1}(i), X_{m+1}(j)] \le r$: $A_i^m(r) = A_i/(N-m-1)$ and $A^m(r) = \frac{1}{N-m}\sum_{i=1}^{N-m} A_i^m(r)$.
5. Then, estimate SampEn as $\mathrm{SampEn}(m, r, N) = -\ln\left[A^m(r)/B^m(r)\right]$.

Sample entropy with fixed r values

In this paper, we propose the calculation of SampEn over a moving window, using fixed r values and m = 1 (fSampEn). These r values are in the range of 0.1 to 1 times the global standard deviation of the original signal, and they do not depend on the standard deviation of each moving window used for the calculation. Once the fixed r has been determined, fSampEn is calculated following the steps and equations described above.
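A compact implementation of this windowed calculation is sketched below (our illustration; the window and step sizes correspond to the 1 s window with 90% overlap used later in the paper at 200 Hz, and all names are hypothetical). The only difference from standard SampEn is that r is fixed from the global standard deviation rather than renormalized in each window.

import numpy as np

def sampen(x, m, r):
    # Sample entropy of x for embedding dimension m and tolerance r.
    N = len(x)
    def match_pairs(dim):
        # Count pairs of templates of length dim within Chebyshev
        # distance r; self-matches are excluded by construction.
        templates = np.array([x[i:i + dim] for i in range(N - m)])
        pairs = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            pairs += int(np.sum(d <= r))
        return pairs
    B = match_pairs(m)
    A = match_pairs(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

def fsampen(x, m=1, r_factor=0.3, win=200, step=20):
    # fSampEn: SampEn over a moving window, with r fixed to a multiple
    # of the GLOBAL standard deviation (not the per-window one).
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    starts = range(0, len(x) - win + 1, step)
    return np.array([sampen(x[s:s + win], m, r) for s in starts])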
Synthesized diaphragm MMG signal

The MMG signals are composed of low-amplitude vibrations generated during muscular contraction. These low-amplitude vibrations are related to the mechanical activity of the muscle [3]. The MMG amplitude progressively increases with contraction effort [3], [23], although this increase is not monotonic and is muscle dependent. The frequency content of the MMG signal lies mainly in the range between 5 and 50 Hz. In the case of the MMGdi signal, the frequency content lies mainly between 5 and 25 Hz and the amplitude varies cyclically with a frequency determined by the respiratory rate [6]. To better understand how fSampEn detects amplitude variations, we generated a synthesized signal based on experimental MMGdi data. The synthesized signal has characteristics similar to those of the MMGdi signals acquired during an incremental inspiratory load respiratory test. To the authors' knowledge, no published models describe the properties of the MMG signal during voluntary contractions. Other researchers have developed models to simulate the behavior of the MMG signal generated during single motor unit contractions [24], [25], and in contractions evoked by artificial muscle stimulation (during artificial stimulation, several motor units are activated simultaneously and behave as a single large motor unit) [27], [28], [29]. However, the behavior of the MMG signal in such contractions is completely different from that during voluntary contractions: the simultaneous contraction of the motor units makes the waveform of the artificially evoked MMG signal more deterministic than random [27], [28]. Since most of the frequency content of the MMGdi signal lies between 5 and 25 Hz and the MMG signal is random in nature [3], we used white Gaussian noise filtered with a zero-phase fourth-order Butterworth filter with a bandpass from 5 to 25 Hz to simulate the vibratory behavior. In order to simulate the cyclical behavior of the MMGdi signal, we first generated an amplitude modulation envelope (Figure 1A). This envelope signal (ENV) was designed to simulate the IP increments produced when the inspiratory load increases. Specifically, the ENV amplitude increments were equivalent to those produced in the MMGdi signals for the four incremental inspiratory loads studied. Each inspiratory load consisted of 10 simulated respiratory cycles of the same duration (approximately 3.33 s). The respiratory rate and total duration of the ENV signal were 18 cycles per minute and 133.33 s, respectively. The simulated inspiratory periods comprise 50% of the total respiratory period. The respiratory rate and inspiratory periods were selected based on data from a study of breathing patterns in healthy subjects [30].
During inspiration, the amplitude of the MMGdi signal progressively increases until reaching a plateau and then gradually decreases to the rest level. To simulate this behavior, each simulated inspiratory period was divided into three phases: (1) rise (25%), (2) plateau (50%) and (3) fall (25%). The rising and falling phases were simulated by means of half-Hanning windows. Then, multiplying the simulated random MMGdi signal of constant amplitude by ENV, we obtained an amplitude-modulated signal whose respiratory rate was similar to that of the MMGdi signals. Finally, to simulate the non-cardiac biological noise present in the MMGdi signal at rest, we added background white Gaussian noise filtered through a zero-phase fourth-order Butterworth filter with a bandpass from 5 to 50 Hz, obtaining the synthesized MMGdi signal clean of cardiac noise (MMGc) (Figure 1B). The amplitude of this background noise was equivalent to the amplitude of the MMGdi signal recorded during apnea in the portion of the signal where no heart activity is present. To generate the synthesized cardiac vibration signal (MCG), we simultaneously recorded the electrocardiographic and MMGdi signals during apnea in a healthy subject. During apnea, respiratory muscle activity is minimal, so this MMGdi signal mainly contains MCG activity. The MCG has a stable and repetitive pattern and contains clearly defined points associated with the cardiac cycle [8], [9], [10]. To obtain a good estimate of the MCG pattern, we generated an MCG signal using a template. Specifically, we obtained this template by averaging 70 cardiac cycles extracted from the MMGdi signal, using the position of R-peaks in the electrocardiographic signal to align the cycles. Next, we generated an impulse train synchronized with these R-peak positions. Finally, we obtained the synthesized MCG signal by the convolution of the MCG template and the impulse train (Figure 1C). The complete synthesized signal with noise (MMGn) was generated by adding the MMGc and MCG signals (Figure 1D). For each simulated respiratory load, we considered a different SNR: L1 (-8.7 dB), L2 (-1.7 dB), L3 (0.6 dB) and L4 (3.8 dB). The sampling frequency used to generate all the signals was 200 Hz.
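The construction described above can be condensed into a few lines (a simplified sketch under the stated design choices; the amplitudes, the one-beat-per-second impulse train and the random MCG template are illustrative stand-ins for the values and averaged cardiac template used in the paper).

import numpy as np
from scipy.signal import butter, filtfilt
from scipy.signal.windows import hann

fs = 200  # Hz

def bandpassed_noise(n, lo, hi):
    # White Gaussian noise, zero-phase band-pass filtered (4th-order Butterworth).
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, np.random.randn(n))

def inspiratory_envelope(n_insp, amplitude):
    # Rise (25%) and fall (25%) as half-Hanning ramps around a plateau (50%).
    q = n_insp // 4
    ramp = hann(2 * q)
    return amplitude * np.concatenate([ramp[:q], np.ones(n_insp - 2 * q), ramp[q:]])

# One load: 10 respiratory cycles of ~3.33 s; inspiration is 50% of the cycle.
cycle = int(3.33 * fs)
env = np.tile(np.concatenate([inspiratory_envelope(cycle // 2, 1.0),
                              np.zeros(cycle - cycle // 2)]), 10)

mmgc = env * bandpassed_noise(len(env), 5, 25)       # respiratory MMG component
mmgc += 0.05 * bandpassed_noise(len(env), 5, 50)     # background biological noise
mcg_template = bandpassed_noise(40, 5, 20)           # stand-in for the averaged MCG
impulses = np.zeros(len(env)); impulses[::fs] = 1.0  # one beat per second
mmgn = mmgc + np.convolve(impulses, mcg_template, mode="same")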
Recorded biomedical signals

The IP and MMGdi signals were simultaneously recorded while increasing the inspiratory load. These measurements were taken in a healthy subject with his written consent, and with the approval of the Ethics Committee of Hospital del Mar, Barcelona, Spain. The subject was required to sit quietly and breathe through a mouthpiece and a tube, while wearing a nose clip. During exhalation the tube allowed the air out with no obstruction, but during inspiration the airflow was restricted by a valve that allowed the application of different inspiratory loads. Increasing the load meant that breathing required greater respiratory muscle effort and hence triggered an increase in the intensity of the MMGdi component of the signal. Moderate to high inspiratory loads were used to obtain different SNRs: 100, 150, 200 and 250 g. A physician instructed the subject to perform the protocol correctly, guiding him to breathe at a constant rate and depth. The IP signal was recorded using a pressure transducer (Digima Premo 355, Special Instruments, Germany) placed in the tube through which the subject breathed. The MMGdi signal was recorded using a capacitive accelerometer (8312B2, Kistler, Switzerland) placed on the chest surface, between the seventh and eighth intercostal spaces in the right anterior axillary line. Signals were amplified, analog filtered, digitized with a 12-bit A/D system at a sampling frequency of 2 kHz and decimated to a sampling rate of 200 Hz. Figures 2A and B show the IP signal and the filtered recorded diaphragm MMG signal (MMGdi). The MMGdi signal was filtered through a zero-phase fourth-order Butterworth filter with a bandpass from 5 to 25 Hz. The duration of the signal was 485 s, covering four inspiratory loads: 100 (126 s), 150 (122 s), 200 (115 s) and 250 (122 s) g. Each load was applied for approximately 21 respiratory cycles.

Methods for evaluation of the fSampEn parameter

To evaluate the behavior of fSampEn as an MMGdi signal amplitude estimator and the effect of cardiac noise on this amplitude estimation, we used the Pearson correlation coefficient (R) and the mean relative error (MRE). First, for the synthesized MMGdi signal, we calculated the R between the ENV signal and the ARV, RMS and fSampEn parameters computed over the MMGn signal. In the case of fSampEn, the R values were investigated as a function of the tolerance value r. These R values were calculated separately for the four SNRs analyzed (i.e., for the four simulated loads) and reflect the capability of the methods to detect the amplitude variations produced by the cyclical nature of breathing for different SNRs (not considering the amplitude variations due to the load increase). In addition, the MRE between the synthesized MMGc and MMGn signals was calculated for every inspiratory cycle for the three amplitude parameters under investigation (ARV, RMS and fSampEn). For an inspiratory cycle of length N, where $X_c(n)$ and $X_n(n)$ for n = 1, …, N are the amplitude estimates of the clean and noisy signals, respectively, the MRE is given by

$$\mathrm{MRE} = \frac{1}{N} \sum_{n=1}^{N} \frac{|X_n(n) - X_c(n)|}{X_c(n)}.$$

The average and standard deviation of this error, estimated for every inspiratory cycle, were calculated separately for the inspiratory cycles of the four simulated loads, and for different values of r in the case of fSampEn. In the case of the recorded signals, similar to the analysis of the synthesized signals, we calculated the R between the IP signal and the three parameters under investigation computed over the MMGdi signal for the four inspiratory loads. The R for fSampEn was calculated as a function of r. Unlike for the synthesized signals, however, it is not possible to compute the MRE, since we do not have the clean MMGdi signal (that is, without MCG activity). Finally, to evaluate the relationship between respiratory muscle force and the amplitude of the recorded MMGdi signal, the R between the IP signal and the three parameters under investigation calculated over the MMGdi signal was recalculated considering the whole signal (without dividing it into portions corresponding to different loads). In this case, the R mainly reflects the relationship between the parameters analyzed and the amplitude variations due to changing the inspiratory load (although it is also influenced by the amplitude variations produced by the breathing cycle).
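In code, the per-cycle MRE reduces to a few lines (our sketch; the two arrays are assumed to hold the windowed amplitude estimates of the clean and noisy signals over one inspiratory cycle):

import numpy as np

def mean_relative_error(x_clean, x_noisy):
    # MRE of the noisy-signal amplitude estimate relative to the clean one.
    x_clean = np.asarray(x_clean, dtype=float)
    x_noisy = np.asarray(x_noisy, dtype=float)
    return np.mean(np.abs(x_noisy - x_clean) / x_clean)

print(mean_relative_error([1.0, 2.0, 4.0], [1.1, 1.8, 4.4]))  # -> ~0.1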
Fixed sample entropy as an amplitude estimator

In Figures 1 and 2, we show examples of the waveforms of the ARV (Figures 1E and 2C), RMS (Figures 1F and 2D) and fSampEn (Figures 1G and 2E) obtained from the synthesized (MMGc and MMGn) and recorded MMGdi signals, respectively. The waveforms were obtained using a 1 s moving window with an overlap of 90%. The values of fSampEn were calculated using a tolerance value of 0.3 times the standard deviation of the entire signal. In this case, we observed that the amplitude variation due to the respiratory cycles was best defined with fSampEn. That is, the entropy parameter provides a better amplitude estimate than the ARV or RMS parameters, especially for low SNR.

Effect of cardiac noise in the synthesized MMG signals

Changes in R between the ENV signal and the ARV, RMS and fSampEn parameters calculated over the MMGn signal are shown in Figure 3. The R values are shown for all SNRs analyzed and, for fSampEn, as a function of r. The values of r analyzed were in the range of 0.1 to 1 times the global standard deviation of the entire signal. For low SNR (Figure 3A), we observe that R is higher for fSampEn than for either the ARV or RMS parameters. This means that the entropy parameter performs better at detecting the presence of respiratory cycles (see load L1 in Figure 1). For high SNRs (Figure 3D), R is high for all amplitude estimators, and slightly higher for fSampEn for values of r greater than 0.5. Figures 3E and F show the R between the ENV signal and the ARV, RMS and fSampEn parameters calculated over the synthesized MMGc and MMGn signals, respectively. The R values for fSampEn were calculated using r = 0.3. The values shown for the MMGn signal (Figure 3F) are those shown in Figures 3A-D for r = 0.3. As can be observed, when no MCG noise is present (Figure 3E) the R values are very high, regardless of the load. However, when MCG noise is present (Figure 3F), the R values fall rapidly as the SNR decreases. This decrease is more pronounced for the ARV and RMS parameters than for fSampEn. In Figure 4 we show the average and standard deviation of the MRE between the synthesized MMGc and MMGn signals for the three parameters under investigation, calculated separately for the four simulated loads. The MRE obtained for fSampEn for values of r = 0.15, 0.3, 0.45, 0.6 and 1 is shown in Figure 4A. It can be seen that, as r increases, the mean value of the MRE for fSampEn also increases. In Figure 4B, we compare the means and standard deviations of the MRE of the ARV and RMS parameters with those of fSampEn calculated with a tolerance value of r = 0.3. We observe that the average value of the MRE is considerably smaller for the entropy parameter, in particular at low SNRs. Figure 5 shows the change in R between the IP signal and the three parameters under investigation over the MMGdi signal, for the four inspiratory loads. The R for fSampEn is shown as a function of r. Similar to the behavior observed with the synthesized signals, for a low load (Figure 5A) we observe a stronger correlation for the entropy parameter than for the ARV and RMS. In this case, this trend is also observed for a high load (Figure 5D) for almost all the tolerance values analyzed. Figure 5E shows the R values presented in Figures 5A-D, with fSampEn evaluated at r = 0.3. In this case, the R values are shown as a function of inspiratory load. As can be observed, the correlation values are smaller at low loads (low SNR), but in this case, unlike with the synthesized signals, the correlation values were significantly higher for the entropy parameter for all loads (even at high loads).
Evaluation of respiratory muscle force

To evaluate the relationship between respiratory muscle force and the amplitude of the recorded MMGdi signal, we investigated the R between the IP signal and the three parameters under investigation over the MMGdi signal, this time considering the whole signal (without dividing it into the portions corresponding to different loads). Figure 6A shows the evolution of the R between the IP signal and all the parameters analyzed calculated over the MMGdi signal. As before, the correlation for fSampEn is shown as a function of r. As can be observed, fSampEn is more strongly correlated with the IP signal than the ARV and RMS parameters. The maximum R values were obtained for r values between 0.3 and 0.6. Figures 6B, C and D are scatter plots of the maximum values of the IP signal and of the ARV, RMS and fSampEn parameters, respectively, as a function of respiratory load. Values of fSampEn were calculated using r = 0.3. It can be observed that fSampEn behaves more linearly and has a smaller standard deviation than the ARV and RMS parameters.

Discussion

The analysis of amplitude parameters of the MMGdi signal is a non-invasive technique to assess respiratory muscle effort [31]. The amplitude content of the MMGdi signal is usually estimated using the RMS or the ARV of the signal. Nevertheless, as corroborated in this simulation study, these estimates are greatly affected by the presence of cardiac vibration interference, which overlaps in frequency with the MMGdi signal. Furthermore, an increase in respiratory muscle effort results in an increase in the intensity of the MMGdi component: that is, the SNR is variable and increases with respiratory effort. Various methods can be applied to minimize the effect of heart vibrations in the analysis of MMGdi signals. The simplest method would be, similar to a method used for the diaphragm electromyographic signal [32], the detection and removal of the parts of the MMGdi signal with cardiac interference [33]. However, this method splits the signal and excludes portions that may contain essential information about the contractile activity of the diaphragm muscle. Adaptive noise cancelling algorithms have also been applied to reduce cardiac interference in MMGdi signals, but the operation of the adaptive canceller is based on an approximate estimate of a cardiac vibration reference signal, and its performance varies considerably depending on the SNR of the signal [33], [34]. In this study, we present a method for improving the estimation of respiratory muscle effort from MMGdi signals that is robust against cardiac vibration interference. This method is based on the computation of SampEn using fixed tolerance values (fSampEn) that do not depend on the standard deviation of each moving window. In this way, the entropy measures are related to the quantity of information present in the signal: the entropy is higher if the signal covers a wide range of amplitudes or if it is highly complex. For signals whose standard deviation is not constant, the SampEn also increases with an increase in amplitude. Thus, the SampEn is not just measuring the complexity of the signal but also changes in signal amplitude. Since heart sounds have a deterministic and repetitive pattern [8], [9], [10] and MMGdi vibrations are random in nature [3], fSampEn is less influenced by cardiac vibrations than the ARV and RMS parameters.
Analysis of synthetic MMGdi signals has allowed us to explore the relationship between the amplitude of heart vibrations and the amplitude of the MMGdi signal. For low SNRs, fSampEn shows considerably better behavior than the ARV and RMS parameters, and it also behaves better when small tolerance values are used. For high SNRs, fSampEn shows better behavior for large tolerance values. However, we observed that increasing the tolerance value produces a higher MRE between the values of fSampEn calculated over the synthesized MMG signal with and without MCG noise. This increase is more pronounced at low load (low SNRs). As the r value increases, fSampEn becomes less sensitive to the small changes in amplitude that are produced at low load. This behavior is in agreement with Figure 3A, where it can be observed that the R between the entropy parameter and the ENV signal decreases with increasing r (unlike what occurs at high SNRs). Thus, there is a compromise in the selection of the tolerance value. A tolerance value of r = 0.3 was found to be suitable in the current study for both low and high SNRs. As an example, the fSampEn method was also applied to recorded MMGdi signals, obtaining a pattern of results similar to that with the synthetic ones. Furthermore, in this case the performance of fSampEn was much better than that of the RMS and ARV for all the respiratory loads analyzed (for both low and high SNR). For almost all the tolerance values analyzed, the R values between the IP signal and fSampEn were notably greater than the R values between the IP signal and the ARV and RMS parameters, indicating that this entropy parameter is a better tool to assess respiratory effort. Furthermore, in general, the variance of fSampEn is lower than the variance of the ARV and RMS parameters. These results are in agreement with a previous study comparing approximate entropy using fixed tolerance values with the RMS of MMGdi signals acquired in an animal model (dogs) [7]. The major motivation for developing this method was the need to improve the characterization of MMGdi signals in the presence of cardiac interference. This is important because the study of MMGdi signals could be useful in clinical practice as an alternative non-invasive technique to evaluate respiratory muscle effort and to detect and quantify the severity of respiratory muscle weakness. In the current study we have only examined SampEn at a single time scale. Costa et al. [19] developed a method that considers SampEn computed at several time scales: multiscale entropy analysis. This method has been shown to be beneficial in differentiating between different cardiac diseases [19] and has made it possible to examine the effect of fatigue and contraction intensity on the short- and long-term complexity of biceps brachii surface electromyography [20]. Such an approach could be useful for further analysis of respiratory muscle effort by means of MMGdi signals. In conclusion, we propose an algorithm for improving the evaluation of respiratory muscle effort from MMGdi signals that is robust against cardiac vibration interference.

Author Contributions

Conceived and designed the experiments: LS AT RJ JAF. Performed the experiments: LS AT. Analyzed the data: LS AT. Contributed reagents/materials/analysis tools: LS AT RJ JAF. Wrote the paper: LS AT RJ.
6,246.8
2014-02-19T00:00:00.000
[ "Engineering", "Medicine" ]
Molecular architecture of the Dam1 complex–microtubule interaction

Mitosis is a highly regulated process that allows the equal distribution of the genetic material to the daughter cells. Chromosome segregation requires the formation of a bipolar mitotic spindle and assembly of a multi-protein structure termed the kinetochore to mediate attachments between condensed chromosomes and spindle microtubules. In budding yeast, a single microtubule attaches to each kinetochore, necessitating robustness and processivity of this kinetochore–microtubule attachment. The yeast kinetochore-localized Dam1 complex forms a direct interaction with the spindle microtubule. In vitro, the Dam1 complex assembles as a ring around microtubules and couples microtubule depolymerization with cargo movement. However, the subunit organization within the Dam1 complex, its higher-order oligomerization and how it interacts with microtubules remain under debate. Here, we used chemical cross-linking and mass spectrometry to define the architecture and subunit organization of the Dam1 complex. This work reveals that the C termini of both the Duo1 and Dam1 subunits interact with the microtubule and are critical for microtubule binding of the Dam1 complex, placing Duo1 and Dam1 on the inside of the ring structure. Integrating this information with available structural data, we provide a coherent model for how the Dam1 complex self-assembles around microtubules.

Introduction

Chromosomes must form bioriented attachments to the mitotic spindle to ensure equal partitioning of the genetic material during mitosis. Kinetochores are large macromolecular assemblies that form at centromeres to tether and couple the chromosomes to the plus ends of kinetochore microtubules. At the outer kinetochore, the conserved Ndc80 complex mediates a direct interaction with microtubules. However, the outer kinetochore must also harness the chemical energy released by depolymerizing microtubules and convert it to mechanical energy to move chromosomes. In budding yeast, the 10-subunit Dam1 complex localizes to the outer kinetochore, where it attaches to the single incoming microtubule to facilitate chromosome segregation [1]. The Dam1 complex is also a major target of the Aurora B kinase Ipl1 to correct erroneous kinetochore–microtubule attachments [2]. In vitro studies have shown that the Dam1 complex assembles as a ring around microtubules [3,4]. This unique property enables the Dam1 complex to processively track the growing and shrinking ends of microtubules under load [5][6][7]. Thus, the Dam1 complex ring is an excellent candidate to couple microtubule depolymerization with chromosome movement [3,4,8]. Recently, rings or partial rings have been observed at the budding yeast kinetochore [9]. In addition, indirect data from quantitative fluorescence microscopy estimated 16-23 molecules of Dam1 at a single kinetochore, which is compatible with Dam1 rings assembled in vitro [10,11]. Low-resolution structures of the Dam1 complex have provided conflicting models for its self-assembly and organization around microtubules, depending on the fitting of the monomeric Dam1 complex [11][12][13]. Reconstructions of the monomeric Dam1 complex reveal that it forms an elongated structure with a protrusion perpendicular to the main axis that contains the C terminus of Dam1 [11]. In addition, the locations of the N termini of four Dam1 complex subunits and the C terminus of Dam1 itself have been mapped [12].
However, in the absence of high-resolution information, it is not possible to define the structural organization of the Dam1 complex alone, its self-assembly, or its interaction with microtubules. Hence, the subunits and regions of the Dam1 complex that interact with microtubules, and their arrangement within the complex, remain unknown [11,12]. Here, we used cross-linking mass spectrometry to determine the molecular architecture of the Dam1 complex alone and bound to microtubules. Our data provide a map of the subunit arrangement of the Dam1 complex. Analysis of the Dam1 complex assembled around microtubules reveals that the Spc34 and Ask1 subunits are likely to be involved in self-assembly, whereas the Duo1 and Dam1 subunits both interact with microtubules. We also demonstrate that the C termini of Dam1 and Duo1 provide the microtubule-binding properties of the Dam1 complex. Our data provide key information on the organization of the Dam1 complex around microtubules.

Architecture of the Dam1 complex in solution

In solution, the Dam1 complex is predominantly monomeric or dimeric at low concentrations. It self-assembles into oligomeric complexes in a concentration-dependent manner and when it binds to microtubules [3,4]. To define the arrangement of the Dam1 complex, we conducted cross-linking of heterodecameric Dam1 complexes by incubating the label-free bis(sulfo-succinimidyl) suberate (BS3) cross-linker with the Dam1 complex. For these studies, we used low concentrations (0.05 mg ml−1) of Dam1 complex such that it forms predominantly monomers and dimers [11]. BS3 reacts with primary amines found at the N termini of proteins and with lysine side chains (and less favourably serine, threonine and tyrosine side chains), with a range of up to 11.4 Å corresponding to the maximal length of the BS3 spacer. This provides a backbone-to-backbone carbonyl distance of 27 Å. The cross-linked samples were then separated by SDS-PAGE into three higher molecular weight species with weights of approximately 110, 220 and 440 kDa (figure 1a). Using mass spectrometry, we first identified the composition of each band and the cross-links within each sample (figure 1b). The bands corresponding to the 220 and 440 kDa species both contained all 10 subunits of the Dam1 complex, whereas the 110 kDa species contained a seven-protein complex lacking the Dam1, Duo1 and Dad1 subunits (electronic supplementary material, figure S1 and table S1). The isolation of this Dam1 subcomplex is most likely due to incomplete cross-linking, followed by subsequent separation of the proteins by denaturing SDS-PAGE. These cross-linking data suggest that these seven proteins form a core complex, and that Dam1, Duo1 and Dad1 may be associated with the rest of the complex through a smaller surface. Strikingly, this identified complex is very similar to that formed in the absence of Dam1 [10]. We did not pursue the Dam1 subcomplex species further. To define the molecular arrangement of the various subunits within the Dam1 complex, we analysed the monomeric and dimeric Dam1 complexes from the 220 and 440 kDa species by mass spectrometry (electronic supplementary material, tables S2 and S3). For each sample, we identified all 10 protein subunits. For the monomeric Dam1 complex, we could detect subgroups and intramolecular interactions emerging within the complex (figure 1c).
The C-terminal regions of Dam1 and Duo1 displayed long-range cross-links internally and with each other, highlighting that these domains share a strong interface and suggesting that they may have a globular subdomain conformation. Our data reveal that the central coiled coil region of Dam1 (125-158) assembles in a coiled coil with Duo1 (152-180). This could explain why, in the absence of Dam1, Duo1 is not found in the remaining complex [10]. Notably, the N terminus of Ask1 formed cross-links with the N termini of Dad4 and Dad2. Thus, Ask1-Dad4-Dad2 and Hsk3 probably form the lower part of the Dam1 complex, with the N terminus of Ask1 at the most terminal part, as mapped in previous electron microscopy (EM) studies (figure 1d) [12]. We did not obtain many internal cross-links for Dad1, Dad2, Dad3, Dad4 and Hsk3, in part owing to their small size. The N termini of the Dad2, Dad4 and Hsk3 subunits containing α-helical coiled coils formed multiple cross-links, suggesting tight interactions between them (figure 1c). Based on their high α-helical content and our cross-linking data, Dad2, Dad4 and Hsk3 do not form globular structures, but instead assemble in a coiled coil. The N terminus of Spc34, previously mapped in the Dam1 monomer by EM, formed cross-links with the N terminus of Dam1 [12]. Thus, our data indicate that the N terminus of Dam1 is close to the N terminus of Spc34 (figure 1d). The middle part of Spc34 also cross-linked with Spc19, confirming previous yeast two-hybrid interaction studies [14]. After cross-linking at low concentration, in-gel digestion and extraction of the cross-linked Dam1 complex, the yield of cross-links was low. This low recovery is due to the small amount of cross-linked proteins added to each well and extracted from each band, the low peptide recovery from in-gel digestion, and the generally decreased efficiency of the cross-linker at low protein concentration. To increase our density of cross-links and the resolution of the architecture of the complex, we next cross-linked a higher concentration of Dam1 complex (0.3 mg ml−1) and digested the sample in solution rather than following gel extraction. This considerably improved our yield of cross-links, especially in regions for which we already had cross-links, thereby increasing the data for these interactions (figure 2 and electronic supplementary material, table S4). However, some cross-links may be from connections formed between oligomerizing Dam1 complexes, given that at concentrations around 0.5 mg ml−1 the Dam1 complex self-assembles into rings in the absence of microtubules [12]. Dam1 is one of the largest proteins in the decameric complex and occupies a central position in the complex. From our data, the N terminus of Dam1 is at the heart of a tight interaction network, with the N-terminal regions of Dad2-Hsk3-Dad4-Spc34 forming a strong network around the N terminus of Dam1, possibly resulting from oligomerization (electronic supplementary material, table S4). This may explain why epitope tagging of the N terminus of Dam1 has proved impossible, and why expression of the nine-protein complex lacking Dam1 results in fragmentation of the complex [10,12]. Interestingly, the C-terminal region of Dam1 shares a large interaction surface with the C terminus of Duo1, displaying extensive cross-links at both low and high cross-linking concentrations, in the region containing residues that are phosphorylated by Ipl1 (figures 1c and 2a) [2].
The C terminus of Dam1 forms the outer arm of the Dam1 complex (figure 1d) [11]. Our data suggest that the C terminus of Duo1 is also part of this outer arm. At both low and high concentrations of the Dam1 complex during cross-linking, we identified extensive cross-links between the two α-helical coiled coils of Spc19 (73-104 and 132-165), suggesting that they form an anti-parallel dimer (figure 2b and electronic supplementary material, tables S1 and S4). There were also multiple cross-links between Spc19 and Spc34, revealing a strong interface between these two proteins. At high concentration, we also obtained many cross-links between the C termini of Duo1 and Dam1 and, specifically, the N terminus of Ask1 and the C termini of Spc34 and Spc19 (electronic supplementary material, table S4). In the monomeric Dam1 complex, and from the subunit mapping from the EM studies, Ask1 and Spc34/Spc19 are spatially separated [12]. Therefore, these data suggest that Ask1 and Spc34 may be the regions involved in the Dam1 complex oligomerization interface. In total, our data provide a physical interaction map for the Dam1 complex and reveal the architecture and organization of the Dam1 complex into key protein subcomplexes.

The Duo1 and Dam1 subunits interact directly with microtubule polymers

After defining the direct interaction map between the subunits within the Dam1 complex, we next sought to determine the arrangement and self-assembly mechanism of the Dam1 complex around microtubules. There are two different structural models for the assembly of the Dam1 complex around microtubules, based on the structures of the Dam1 complex alone and bound to microtubules derived by cryo-electron microscopy [11,12]. One model (model 1, EMDB: 1371), obtained from a helical assembly of Dam1 complexes around microtubules, proposes that the C terminus of Dam1 is on the inside of the ring and points towards the microtubule. This is in agreement with previous studies showing that Dam1 has its own microtubule-binding activity [3,10]. This model suggests that a major conformational rearrangement of the Dam1 complex occurs on binding to microtubules, around the central core of the complex, based on the fitting of the monomeric Dam1 complex into the Dam1 helical reconstruction around microtubules [11]. The second model, derived from a single particle reconstruction of individual rings of assembled Dam1 complexes around microtubules [12,13] (model 2, EMDB: 5254), positions the C-terminal protrusion of Dam1 on the outside of the Dam1 ring, 300 Å away from the microtubule lattice. In this model, the Dam1 complex does not undergo any conformational changes upon oligomerizing on microtubules, and the C terminus of the Dam1 complex is at the self-assembly interface. To determine which model is correct and to define which subunits interact with the microtubules, we assembled the Dam1 complex on microtubules (figure 3a) and cross-linked the sample with 1-ethyl-3-[3-dimethylaminopropyl] carbodiimide hydrochloride (EDC). EDC cross-links lysine side chains and primary amine groups (and less favourably serine, threonine and tyrosine side chains) to aspartate or glutamate carboxyl groups. Mass spectrometry analysis of the cross-linked products revealed that the C termini of Duo1 and Dam1 both made specific cross-links with α- and β-tubulin (figure 3b and electronic supplementary material, table S5) [16]. Surprisingly, the cross-links obtained involved the solvent-exposed folded domains of α- and β-tubulin rather than their acidic tails.
Duo1 and Dam1 cross-linked to E417 and E423 in α-tubulin (α-tub cluster), and to E108 and E111 (β-tub cluster1), and E157, E158 and D161 (β-tub cluster2) in β-tubulin (figure 3c). These interactions are reminiscent of the binding of the Ska1 complex, thought to be the human orthologue of the Dam1 complex, to microtubules [17]. The termini of Dam1 and Duo1 are thought to be flexible and unstructured [10], unlike the human counterpart Ska1 complex. However, they appear to bind to the same region on the microtubule lattice. Given that the maximum length that EDC can cover is 20 Å, this indicates that the C termini of Duo1 and Dam1 lie within 20 Å of the microtubule surface.

Spc34 and Ask1 interact during self-assembly of the Dam1 complex around microtubules

To determine how oligomerization of the Dam1 complex around microtubules occurs, we compared the cross-links obtained for the Dam1 complex in solution and when bound to microtubules. We mapped the cross-links that were unique to the Dam1 complex self-assembled around microtubules. From the subunit mapping onto the EM structure of the monomer, Spc34 and Ask1 are at opposite ends of the monomer [12]. However, in the presence of microtubules, we observed that Ask1 cross-linked to Spc34 with both the BS3 and EDC cross-linkers (figure 4 and electronic supplementary material, tables S5 and S6). This indicates that, when the Dam1 complex is oligomerized around microtubules, Ask1 is in close proximity to Spc34 (figure 4). We also found Ask1 cross-links to the N terminus of Dam1, while still cross-linking to Dad2 and Dad4 (figure 4 and electronic supplementary material, table S5). In addition, Spc34 and Spc19 formed cross-links with Ask1 and the N terminus of Dad4 when the Dam1 complex was cross-linked at a concentration at which it oligomerizes (electronic supplementary material, table S4). Therefore, we suggest that Spc34 and Ask1 form intercomplex interfaces during self-assembly of the Dam1 complex around microtubules. Spc34 and Spc19 cross-links were obtained from the Dam1 subcomplex, as well as from the cross-linked monomeric and dimeric Dam1 complexes (figure 1c and electronic supplementary material, figure S1), arguing that these cross-links are intrinsic to the monomeric Dam1 complex, rather than formed between monomers [16]. In addition, yeast two-hybrid studies showed a direct interaction between the regions of Spc34 and Spc19 for which we obtained cross-links [14]. Finally, at high concentrations of the Dam1 complex, we observed extensive cross-linking of the Dam1 and Duo1 C termini with themselves and with other subunits, in particular the C terminus of Spc34. These cross-links suggest that these proteins may promote self-assembly through self-interaction (figure 2a). In agreement with this observation, the Dam1 complex lacking the C terminus, and the phosphomimetic Dam1 complex with mutations mimicking Ipl1 phosphorylation, showed a decreased ability to oligomerize and a reduced affinity for microtubules [11]. However, most of these interactions were not observed when the Dam1 complex was assembled on microtubules. Therefore, Duo1 and Dam1 together play a pivotal role in recruiting the Dam1 complex to microtubules and promoting its self-assembly.
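The reasoning throughout this section rests on simple distance constraints, which can be stated as a short sketch (ours; the coordinates and residue numbers are hypothetical): given Cα positions from a candidate structural model, each observed cross-link must fall within the maximum distance its chemistry allows, roughly 27 Å backbone-to-backbone for BS3 and about 20 Å for the shorter EDC.

import numpy as np

MAX_DIST = {"BS3": 27.0, "EDC": 20.0}  # angstroms

def violated_crosslinks(coords, crosslinks):
    # coords: {(subunit, residue): xyz} for one candidate structural model.
    # crosslinks: list of (linker, site_a, site_b) observed by mass spec.
    bad = []
    for linker, a, b in crosslinks:
        d = float(np.linalg.norm(coords[a] - coords[b]))
        if d > MAX_DIST[linker]:
            bad.append((linker, a, b, round(d, 1)))
    return bad

# Toy example: a model placing the Dam1 C terminus ~300 angstroms from the
# lattice (as in model 2) cannot satisfy an EDC link to beta-tubulin.
coords = {("Dam1", 300): np.array([0.0, 0.0, 300.0]),
          ("beta-tubulin", 157): np.array([0.0, 0.0, 0.0])}
links = [("EDC", ("Dam1", 300), ("beta-tubulin", 157))]
print(violated_crosslinks(coords, links))  # -> model rejected by this link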
The C terminus of Duo1 is essential for the integrity of the Dam1 complex and for its interaction with microtubules

To test the role of the C termini of Duo1 and Dam1 in the assembly and microtubule-binding properties of the Dam1 complex, we generated mutants of the Dam1 complex lacking the C terminus of Duo1 (Δ184 and Δ211) and both the C termini of Duo1 and Dam1. We first purified the Dam1 complex Dam1-19, which lacks the last 138 amino acids of the C terminus (Dam1ΔC). It behaved similarly to the Dam1 complex by analytical size-exclusion chromatography (figure 5a) [3]. We then purified the Dam1 complex Duo1Δ184 and the Dam1 complex Duo1Δ211, further termed Duo1ΔC (figure 5a,b). All 10 proteins could still be purified using the affinity purification tag on Hsk3 and identified by mass spectrometry. Unexpectedly, we could purify the complex of 10 proteins but also obtained some smaller complexes containing Hsk3 for both mutants (figure 5a,b). To further dissect the architecture of the Dam1 complex lacking the C terminus of Duo1, we conducted cross-linking on a fraction corresponding to the elution of the full Dam1 complex Duo1Δ184 (red box; figure 5b). We obtained multiple cross-links between the coiled coil regions of Dam1 and Duo1, as previously (figure 5c and electronic supplementary material, table S7). We still observed a tight network of cross-links between Dad1, Dad2, Dad3, Dad4 and Hsk3. However, we did not find many cross-links between the other subunits when compared with the wild-type Dam1 complex. Thus, while all 10 proteins co-eluted as a complex, the structural integrity of the Dam1 complex was affected by the absence of the Duo1 C terminus. Interestingly, Zelter et al. [16] found the C terminus of Duo1 to cross-link with Dad1, Dad2, Dad3, Spc19 and Spc34 in the absence of microtubules, which agrees with the role of Duo1 we reveal presently. Our cross-linking data suggested that Duo1 and Dam1 were the two subunits contributing to the microtubule-binding properties of the Dam1 complex. To further test this, we purified a Dam1 complex Dam1ΔC/Duo1ΔC, lacking the C termini of both Duo1 and Dam1. Similar to the Dam1 complex Duo1ΔC mutants, we also recovered smaller subcomplexes containing His-tagged Hsk3, which could be separated by size-exclusion chromatography (figure 5a). We then tested the microtubule-binding properties of the Dam1 complex Dam1ΔC/Duo1ΔC eluting as a 10-protein complex (dark green box) using 350 nM Dam1 complex in a microtubule co-sedimentation assay. The Dam1 complex Dam1ΔC/Duo1ΔC remained in the supernatant in the presence of microtubules, whereas the Dam1 complex, the Dam1 complex Duo1ΔC and the Dam1 complex Dam1ΔC co-sedimented with microtubules, as previously shown (figure 6a,b and electronic supplementary material, figure S2) [3]. Thus, we demonstrated that the Dam1 complex lacking both the C termini of Dam1 and Duo1 no longer has the ability to bind to microtubules. At lower ionic strength (60 mM NaCl), we observed weak binding of the Dam1 complex Dam1ΔC/Duo1ΔC to microtubules. However, the Dam1 complex Dam1ΔC/Duo1ΔC bound microtubules less than the Dam1 complex Dam1ΔC, which could still robustly bind to microtubules (figure 6c). Thus, these data indicate that the C terminus of Duo1 contributes to the binding of the Dam1 complex to microtubules. The residual binding of the Dam1 complex Dam1ΔC/Duo1ΔC to microtubules is most likely due to both the presence of residues 160-205 in Dam1, which interact with microtubules electrostatically (figure 3b), and the cooperativity of the Dam1 complex in self-assembling on microtubules, even when the affinity of a single Dam1 complex for microtubules is very low.
Taken together, the C terminus of Duo1 forms the core of the interaction network within the Dam1 complex, is essential for the stability of the Dam1 complex in solution, and is a major contributor to the microtubule-binding properties of the Dam1 complex.

Duo1 and Dam1 as force-transducing couplers during chromosome segregation

Structural insights into the Dam1 complex have been generated previously from a low-resolution map of the monomeric Dam1 complex combined with the positioning of the N termini of Spc34, Duo1, Ask1 and Spc19 and the C terminus of Dam1 in this EM density [12]. Using an orthogonal approach, we now report a structural map of the Dam1 complex that provides new structural insights, advancing our understanding of the Dam1 complex connectivity and organization. Our data are also in good agreement with previous yeast two-hybrid studies of the Dam1 complex [14]. In addition, we show unambiguously that the C termini of Dam1 and Duo1 are the two contributors to the interaction of the Dam1 complex with microtubules [16]. Dam1 and Duo1 bind to multiple sites at the surface of the microtubule lattice. These results explain why, in the absence of the C terminus of Dam1 (in a dam1-19 temperature-sensitive mutant), the Dam1 complex is still functional, such that the yeast are able to grow but have very short spindles [1]. The dam1-19 mutant complex has a reduced, but not absent, affinity for microtubules, and its force-transducing coupling ability is reduced [8]. Thus, the C terminus of Duo1 still provides a microtubule-anchoring function to the Dam1 complex, even in the absence of the Dam1 C terminus. The acidic tail of tubulin increases the strength of the Dam1 complex–microtubule interaction, but is not the major determinant of this interaction. Phosphorylation of the C terminus of Dam1 by Ipl1 reduces the affinity of the Dam1 complex for microtubules in vitro, and constitutively mimicking this phosphorylation disrupts microtubule interactions in vivo [2,3]. The large number of cross-links between the lysine residues in the C terminus of Dam1 and the acidic residues in tubulin indicates that the interaction is electrostatic and that phosphorylation reduces this interaction through electrostatic repulsion. The fungal Dam1 complex and the metazoan Ska1 complex are structurally distinct, but are likely to represent functional homologues [18]. Interestingly, both the Dam1 and Ska1 complexes bind to the microtubule using a similar interface with tubulin rather than the acidic tails of tubulin [17]. The Ska1 complex binds both straight and curved microtubule protofilaments [19]. The Dam1 complex may also bind to curved microtubules, according to the conformational wave model [20]. Our data highlight that the Dam1 complex uses the acidic tails of tubulin to enhance its affinity for microtubules, but primarily recognizes the microtubule surface. The acidic tails of tubulin may help with electrostatic lattice diffusion of the Dam1 complex, similarly to MCAK [21,22]. Our work, together with proteolytic experiments on the Dam1 complex associated with the microtubule, shows that the C termini of Dam1 and Duo1 are both flexible and necessary for microtubule binding [10]. In addition, the electron density for the Dam1 complex close to microtubules is very weak and averaged out during refinement, which is typical for flexible regions [10][11][12].
Taken together, the C termini of Dam1 and Duo1 are flexible, and Duo1 and Dam1 interact with the microtubule lattice electrostatically rather than making a footprint on the lattice. Our data and those of Zelter et al. [16] also suggest that the C terminus of Duo1 is involved in an important conformational change during the solution-to-microtubule-binding transition. Although our data cannot determine the extent of the conformational change upon binding of the Dam1 complex to microtubules proposed by structural model 1, we can rule out structural model 2 for self-assembly of the Dam1 complex around microtubules, based on the close proximity (within 20 Å) of the C termini of Dam1 and Duo1 to microtubules (figure 3d) [16]. The site occupied by the Dam1 complex on the lattice is also compatible with Ndc80 complex binding at the interface between α-tubulin and β-tubulin (figure 3c) [15,23]. Finally, our work suggests we can uncouple the microtubule-binding properties of the Dam1 complex from its oligomerization properties (figure 6). Future high-resolution work is required to understand how the Dam1 complex oligomerizes around microtubules to couple chromosome movement to depolymerizing microtubules.

Dam1 complex protein purification and cosedimentation assay

The Dam1 complex was expressed and purified as described [3]. Site-directed mutagenesis of Duo1 was performed using the DNA oligos 5′-gatttaagccttgatttggttactttgctggtgcagcatccttt-3′ and 5′-aaaggatgctgcaccagcaaagtaaccaaatcaaggcttaaatc-3′ for Duo1Δ184, and 5′-atgggtctttcttactcttcagttatttgaaattcctgccg-3′ and 5′-cggcaggaatttcaaataactgaagagtaagaaagacccat-3′ for Duo1Δ211, to insert a stop codon at positions 184 and 211, respectively. The Dam1 complexes Duo1ΔC and Dam1ΔC/Duo1ΔC were purified using Ni-NTA affinity purification (GE Healthcare) followed by size-exclusion chromatography on a Superose 6 column (GE Healthcare), as they did not bind to the ion-exchange column. The microtubule cosedimentation assay was performed as previously described [22]. The binding assay was performed at a final concentration of 150 mM NaCl, unless stated otherwise. The samples were resolved by SDS-PAGE (12% Bis-Tris or 16% Tricine NuPAGE, Invitrogen) and stained using Instant Blue (Expedeon). For Western blotting, the proteins were transferred to a nitrocellulose membrane and probed for Dam1 using a rabbit anti-Dam1 antibody (kind gift from Prof. Tomo Tanaka). Experiments were repeated three times.

Protein cross-linking

The mixing ratio of BS3 (Thermo Fisher Scientific) to complex was determined for the Dam1 complex using 5 µg aliquots and protein-to-cross-linker ratios (w/w) of 1:1, 1:2, 1:3, 1:4, 1:5, 1:6 and 1:7, respectively. A low concentration of the Dam1 complex, 0.05 mg ml⁻¹, was used to minimize cross-linking of higher-order structures. As the best condition, we chose the ratio that was sufficient to convert most of the individual Dam1 subunits into high molecular weight bands corresponding to the monomeric and dimeric Dam1 complexes (1:5-1:7), but did not cross-link the Dam1 complex into aggregates, as judged by SDS-PAGE analysis. The reactions were resolved by SDS-PAGE (4-12% Bis-Tris NuPAGE, Invitrogen) and stained using Instant Blue (Expedeon).
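The titration arithmetic above is simple enough to script. The sketch below computes, for 5 µg protein aliquots at 0.05 mg ml⁻¹, the BS3 mass and reaction volume implied by each w/w ratio; the constants mirror the text, while the helper loop itself is only illustrative.

```python
# Cross-linker titration: BS3 mass and reaction volume for each
# protein-to-cross-linker (w/w) ratio, using the aliquot size and complex
# concentration quoted above.
PROTEIN_UG = 5.0            # protein per aliquot (ug)
PROTEIN_CONC_MG_ML = 0.05   # kept dilute to limit higher-order cross-linking

for ratio in range(1, 8):   # 1:1 ... 1:7 (protein : BS3, w/w)
    bs3_ug = PROTEIN_UG * ratio
    volume_ul = PROTEIN_UG / PROTEIN_CONC_MG_ML  # ug / (mg/ml) gives ul
    print(f"1:{ratio} -> {bs3_ug:.0f} ug BS3 in a {volume_ul:.0f} ul reaction")
```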
The bands were then excised, and the proteins therein were reduced using 5 mM BME for 30 min at room temperature, alkylated with 55 mM iodoacetamide for 20 min in the dark at room temperature and digested using trypsin (sequencing grade; Promega) overnight at 37°C. Cross-linked peptides were analysed as previously described [17].

Preparation of cross-linked Dam1 complex alone (high concentration)

Purified Dam1 complex (6 µg) was mixed with 20 µg BS3 dissolved in 10 µl BRB80, at a final concentration of 0.3 mg ml⁻¹, and incubated at room temperature for 40-60 min. The reaction was stopped by adding 2.5 µl of 2.5 M ammonium bicarbonate for 45 min on ice. The protein was then precipitated with 20% TCA and left overnight at 4°C.

Preparation of Dam1 complex cross-linked to microtubules

The purified Dam1 complex (6 µg) was incubated with 4 µg taxol-stabilized microtubules in BRB80 (80 mM PIPES pH 6.8, 1 mM EGTA, 1 mM MgCl2) for 10 min at room temperature. The complex was then spun at 13,000 r.p.m. at room temperature in a benchtop centrifuge to remove free tubulin and unbound Dam1 complex. The microtubule-Dam1 complex pellet was washed twice in warm BRB80 buffer containing 2 mM taxol. The Dam1-microtubule complex was mixed with 10 µg 1-ethyl-3-[3-dimethylaminopropyl]carbodiimide hydrochloride (EDC, Thermo Fisher Scientific) and 22 µg of N-hydroxysulfosuccinimide (Sulfo-NHS, Thermo Fisher Scientific) dissolved in 20 µl BRB80, and incubated at room temperature for 1 h. The complex was then spun at 13,000 r.p.m. at room temperature and lyophilized.

Preparation of samples for mass spectrometry analysis

Cross-linked complex proteins were reduced, alkylated and digested following standard procedures [24]. Trypsin (Promega) and Lys-C (Roche) digestion was then performed according to the manufacturers' protocols. Cross-linked peptides were desalted using C18 StageTips [25,26].
BinVPR: Binary Neural Networks towards Real-Valued for Visual Place Recognition

Visual Place Recognition (VPR) aims to determine whether a robot or visual navigation system is located in a previously visited place using visual information. It is an essential technology and a challenging problem in the computer vision and robotics communities. Recently, numerous works have demonstrated that the performance of Convolutional Neural Network (CNN)-based VPR is superior to that of traditional methods. However, with a huge number of parameters, large memory storage is necessary for these CNN models. This is a great challenge for mobile robot platforms equipped with limited resources. Fortunately, Binary Neural Networks (BNNs) can reduce memory consumption by converting weights and activation values from 32-bit into 1-bit. But current BNNs always suffer from gradient vanishing and a marked drop in accuracy. Therefore, this work proposes a BinVPR model to handle these issues. The solution is twofold. Firstly, a feature restoration strategy was explored that adds features back into the later convolutional layers to counter the gradient-vanishing problem during training. Moreover, we identified two principles for addressing gradient vanishing: restore basic features, and restore them from higher to lower layers. Secondly, considering that the marked drop in accuracy results from gradient mismatch during backpropagation, this work optimized the combination of binarized activation and binarized weight functions in the Larq framework, and the best combination was obtained. The performance of BinVPR was validated on public datasets. The experimental results show that it outperforms state-of-the-art BNN-based approaches and the full-precision networks AlexNet and ResNet in terms of both recognition accuracy and model size. It is worth mentioning that BinVPR achieves the same accuracy with only 1% and 4.6% of the model sizes of AlexNet and ResNet, respectively.

Introduction

Visual Place Recognition (VPR) uses visual information to determine whether a robot or autonomous agent has previously visited a place [1]. VPR is an essential technology for robot localization and navigation [2]. It enables a robot to localize itself and correct the incremental drift of its pose estimation during navigation. VPR can be considered an image retrieval system that, given a query image, retrieves the most similar locations in a stored database. In recent years, VPR approaches based on Convolutional Neural Networks (CNNs) have attracted the attention of numerous researchers, as they are more robust and discriminative than hand-crafted VPR approaches.

The success of CNN models comes at the price of high computational cost and heavy parameter counts. This is a great challenge for mobile robot platforms equipped with limited resources [3]. Therefore, reducing the model size is a crucial way to make these models applicable to mobile robots. Ferrarini et al. [4] proposed FloppyNet, which used a Binary Neural Network (BNN) to reduce the model size, representing an extreme scenario of model quantization that uses 1-bit instead of 32-bit floating-point values for weights and activations. AlexNet was binarized in [4]; its network depth was also reduced, the model size was decreased by 99% relative to AlexNet, and computational efficiency was increased roughly sevenfold.
However, current BNNs always suffer from gradient vanishing in the training process and a marked drop in accuracy, which result from the conversion of 32-bit values into 1-bit values, causing severe loss of information. Thus, BNN-based VPR approaches require further investigation. Moreover, inspired by FloppyNet, this work constructed a baseline network based on ResNet [5]. ResNet was selected due to its fewer parameters and better accuracy than AlexNet.

For the gradient-vanishing problem, we proposed the following hypothesis: in the high-level structure of the binary network, too much information is lost, so the gradient cannot be effectively accumulated during backpropagation, which causes the gradient-vanishing phenomenon. This hypothesis was inspired by MobiNet [6] and verified by designed experiments. Based on this, we further proposed a feature restoration strategy to reintroduce part of the feature information into the subsequent convolutional layers. This paper explores which features should be restored and how the network should accept these features to minimize the gradient-vanishing problem. We divided the added features into three categories according to their source positions: basic, intra-block, and inter-block features. This work explored the effect of all added features, and the best one was obtained. Meanwhile, an exploration of the optimal positions to accept these restored features was also conducted. Finally, we concluded with two key principles for designing a BNN architecture that can effectively deal with the gradient-vanishing problem.

For the marked drop in accuracy of BNNs, Ding et al. [7] indicated that it results from the gradient mismatch problem, which usually occurs in backward propagation. To weaken the gradient mismatch problem, an efficient approach is to optimize the combination of binarized activation and binarized weight functions. Thus, taking a binarized ResNet-18/34 with a full-precision short-cut connection as an example, we studied all the possible combinations of activation and weight signs in [8,9] using a brute-force approach, and the optimal combination was found. With the above, our BinVPR can be proposed. It achieved the same level of accuracy as the full-precision network but with a significantly reduced model size. The contributions of this paper are as follows: (1) A feature restoration strategy was proposed to solve the gradient-vanishing problem, which aims to supply some of the lost information to higher levels of the network. In order to restore the lost information, we explored different kinds of features to add and different positions to accept these features; thereby, the best combination was obtained. (2) Two principles were found that can solve the problem of vanishing gradients in the training of BNNs and determine the structure of BNNs: restoring basic features to solve the gradient-vanishing problem, and restoring basic features from higher layers to lower layers, layer by layer. (3) A brute-force approach was introduced to find the optimal combination of binarized activation and binarized weight functions to further improve the dropped accuracy caused by gradient mismatch. (4) A baseline network based on ResNet was designed, and the proposed binary network BinVPR was presented to realize visual place recognition. The performance of BinVPR was tested on public VPR datasets. The results show that BinVPR outperforms state-of-the-art CNN-based approaches such as AlexNet and ResNet in terms of both accuracy and
model size.

This paper is organized as follows. The next section reviews relevant literature, while Section 3 presents a step-by-step process for obtaining concise BNNs for VPR. Section 4 presents a comprehensive analysis of binary layers for VPR applications. Finally, Section 5 concludes this article.

Deep Learning for VPR

Deep Learning (DL) has received considerable attention for its remarkable success in Computer Vision (CV) [10][11][12] and Robotics [13], mainly due to the widespread use of Convolutional Neural Networks (CNNs). Pre-trained CNN models are usually used as feature extractors. Sünderhauf et al. [14] and Nasser et al. [15] used AlexNet pre-trained on the ImageNet dataset. PlaceNet [16] is based on the same principle and is trained on a large dataset called Places365, which is organized into 365 categories. In addition to using pre-existing CNN architectures, specific VPR models are also being developed and trained, such as NetVLAD [17], which replaces the last fully connected layer of the original CNN model to enable end-to-end training for large-scale place recognition. Patch-NetVLAD [18] combined the advantages of local and global descriptor methods to extract patch-level features from NetVLAD residuals. Sun et al. [19] proposed a modified Patch-NetVLAD strategy, called Contextual Patch-NetVLAD, which first aggregates features from each patch's surrounding neighborhood and then utilizes cluster- and saliency-context-driven weighting rules to assign higher weights to informative patches. Xin et al. [20] proposed a Landmark Localization Network (LLN) that predicts the discriminativeness of local features for corresponding regions within an image to generate discriminative landmarks. Izquierdo et al. proposed SALAD [21] (Sinkhorn Algorithm for Locally Aggregated Descriptors); considering the feature-to-cluster and cluster-to-feature relationships, they introduced a "dustbin" cluster to discard features considered non-informative and improve the overall quality of the descriptors. It is worth noting that Ali-bey et al. proposed the GSV-Cities [22] dataset to address the lack of large databases with accurate ground truth; it covers more than 40 cities across all continents over a 14-year period, providing the widest geographic coverage and highly accurate ground truth. To realize seamless adaptation of pre-trained transformer models for VPR, Lu et al. [23] proposed the SelaVPR model. This model adopts a hybrid adaptation design that adjusts lightweight adapters. Moreover, they proposed a mutual nearest neighbor local feature loss to guide effective adaptation, which reduces image matching time.
Binary Neural Network

Convolutional Neural Networks (CNNs) exhibit strong learning ability owing to their multi-layered structure and millions of parameters. This results in a substantial model size and a significant computational burden. Therefore, a vital issue for VPR is to reduce the model size. Binary neural networks can drastically reduce resource demands and improve computational efficiency. BinaryConnect [24] proposed stochastic binarization, used during the forward propagation of training to quantize weights. This method incorporates a clipping function to cancel the gradient when the activation exceeds 1.0, which improves accuracy. Afterwards, further works were proposed to improve BNNs. XNOR-Net [25] addresses the accuracy gap by introducing a channel-wise scaling factor, obtained from the ℓ1-norm of the weights or activations, which reduces the error relative to full precision. DoReFa-Net [26] investigated neural networks trained with 1-bit weights and 2-bit activations; compared to the original AlexNet model, its accuracy decreased by 4.9% on the ImageNet dataset. Bi-Real-Net [27] was proposed on the basis of ResNet [5]. It employs a highly sophisticated training strategy, consisting of full-precision pre-training, multi-step initialization, and user-defined gradients, to achieve viability for real-world applications. Moreover, Wang et al. proposed BitNet [28] for large language models, a scalable and stable 1-bit transformer architecture. They designed a trainable 1-bit fully connected layer, BitLinear, as a replacement for the nn.Linear layer. BitNet achieved competitive performance while reducing memory footprint and energy consumption compared to full-precision Transformer baselines. Xue et al. [29] proposed ReBNN to improve learning ability; this method reduced the loss of the binarization process by calculating a balancing parameter based on the maximum weight magnitude.

FloppyNet [4] used AlexNet as a baseline network to construct a binary network for VPR, which improved operational efficiency and saved memory. Inspired by FloppyNet, we constructed a baseline network based on ResNet to overcome the gradient-vanishing problem and the decreased-accuracy problem, because ResNet has fewer parameters and better performance than AlexNet.

Application of VPR in vSLAM

VSLAM [15] (Visual Simultaneous Localization and Mapping) addresses the task of placing a mobile robot at an unknown location in an unknown environment and having it build a consistent map of this environment while simultaneously determining its location within that map. VSLAM systems have four main components: visual odometry, Loop Closure Detection (LCD), back-end optimization, and mapping. True loop closures reduce the cumulative position errors caused by visual odometry [30] and allow accurate and consistent maps to be built. LCD can also be regarded as place recognition [1] and is an important part of VSLAM systems [31]. This problem involves taking a location image and determining whether the place exists in the location database.
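As a concrete illustration of the channel-wise scaling idea used by XNOR-Net (discussed above), the sketch below approximates a real-valued weight tensor by α·sign(W) with one α per output channel, computed as the mean absolute value (the ℓ1-norm divided by the element count). Shapes and values are illustrative, not taken from XNOR-Net's code.

```python
import numpy as np

def xnor_binarize(weights: np.ndarray):
    """weights: (out_channels, in_channels, kh, kw) -> (sign tensor, per-channel alpha)."""
    flat = weights.reshape(weights.shape[0], -1)
    alpha = np.abs(flat).mean(axis=1)           # one scale per output channel
    binary = np.where(weights >= 0, 1.0, -1.0)  # sign, with sign(0) -> +1
    return binary, alpha

w = np.random.randn(64, 32, 3, 3).astype(np.float32)
w_bin, alpha = xnor_binarize(w)
approx = w_bin * alpha[:, None, None, None]     # alpha_c * sign(W_c)
print("mean |W - alpha*sign(W)|:", float(np.mean(np.abs(w - approx))))
```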
At present, the main applications of VPR in vSLAM include VPR based on local feature descriptors, VPR based on global feature descriptors, and learning-based VPR. VPR based on local feature descriptors requires a detection phase that determines individual patches or key points within the image to retain as local features, with examples including SIFT [32], SURF [33], and ORB [34]. In contrast, VPR based on global feature descriptors, such as WI-SURF [35] and BRIEF-Gist [36], does not have a detection phase but processes the whole image regardless of its content. In recent years, with the development of deep learning, researchers have tried to use this approach to learn global image representations from visual information. Learning-based VPR approaches are described in Section 2.1.

From the perspective of the specific application in VSLAM, VPR provides loop closure information for the back-end. Incorrect loop information leads to drastic degradation of map quality; therefore, a place recognition algorithm applied to VSLAM should have high accuracy. The advantages of local and global feature descriptors are compact representation and computational efficiency, leading to lower storage consumption and faster indexing when retrieving location images. VPR based on global feature descriptors shows strong robustness to illumination but cannot handle occlusion or incorporate geometric information. VPR based on local feature descriptors is robust to rotation and scale and can recognize places even under partial occlusion. In addition, these methods incorporate spatial information and can be combined with metric pose estimation algorithms, as successfully applied in VSLAM systems such as ORB-SLAM [37]. Learning-based VPR can combine the advantages of local and global features and achieves high accuracy under changes in scale, illumination, and occlusion. However, learning-based VPR requires larger computing and storage resources. Therefore, reducing the demands deep learning places on computing and storage resources has become one of the key issues in the study of VPR.

Methodology

In this section, the proposed BinVPR model is presented. Firstly, a baseline network based on ResNet's [5] plain network is constructed. Then, the brute-force approach is described, which searches for the optimal combination of binarized weight and binarized activation functions, thereby improving the accuracy of the binary network model. Moreover, the feature restoration strategy is described: basic, intra-block, and inter-block features were added to higher levels of the network to find the most suitable features to restore in order to solve the gradient-vanishing problem. Finally, we introduce the method of designing binary networks that avoids gradient vanishing during the training of BNNs.

Baseline Network

We designed our baseline network (see Figure 1) based on the plain network of ResNet-18/34. Through experiments, we determined the final network structure from this baseline network.
The basic block was established by stacking three convolutional layers with a network width of 128. The weights of the basic block are binarized, but the input activations of the basic block are not binarized. This approach helps to retain more information. The output of the top layer of the basic block is extracted as the basic features. Zagoruyko and Komodakis [38] highlighted the ability of wider networks to capture more features and allow for smoother training. Therefore, we increased the network width of the plain network. In the main part of the network (everything except the basic block in the baseline network), the width of the first block was increased from 64 to 128, the width of the second block was increased from 128 to 256, the width of the third block was increased from 256 to 384, and the width of the fourth block was unchanged.

In the main part of the network, both the inputs and the weights were binarized. The fully connected layer outputs the feature vector. Thus, our baseline networks, i.e., Baseline-20 and Baseline-36, were constructed as Figure 1 shows.

Original Binarization

Binarization converts floating-point values to binary values, effectively reducing memory. Courbariaux and Bengio [24] first proposed the binarization function, the Straight-Through Estimator (STE) coupled with gradient clipping, as introduced by Hubara et al. The weights and activations are binarized using the following sign function:

$$x^b = \operatorname{sign}(x) = \begin{cases} +1, & x \ge 0 \\ -1, & \text{otherwise} \end{cases}$$

where $x$ is a real-valued variable and $x^b$ is a binary-valued variable. In forward propagation, the STE performs the sign function. In the backward propagation phase, $x$ is updated according to the network's loss gradient. Let $l$ denote the loss function, $r_i$ be a real-valued input, and $r_o \in \{-1, +1\}$ be a binary output. Furthermore, $t_{clip}$ is a threshold for clipping gradients and was originally set to 1. The function returns a clipped identity of the gradient in the backward phase. Therefore, the final STE formulas can be stated as follows:

$$\text{Forward:}\quad r_o = \operatorname{sign}(r_i)$$
$$\text{Backward:}\quad \frac{\partial l}{\partial r_i} = \frac{\partial l}{\partial r_o}\,\mathbf{1}_{|r_i| \le t_{clip}}$$

Gradient clipping helps the optimization process of the binary network because backpropagation no longer increases the absolute value of inputs that already exceed the clipping threshold, similar to regularization in a full-precision network.

Optimization for the Combination of Activation and Weight Signs

Because of the limited expressive ability of the original binarization function, the recognition performance of the binary network is greatly reduced. Later works used different binarization functions for the activation values and the weight values. However, this also presents a problem: as Ding et al. [7] pointed out, different binarization functions may cause gradient mismatches during training. To address this, we optimized the combination of binarized activation functions and binarized weight functions.

We conducted comprehensive training experiments on "Forest" to find the best combinations of Larq's built-in binarized activation and weight functions. The experimental results show that using "leaky tanh" for binarizing activations, combined with DoReFa-Net's weight function, yields the most favorable performance. Larq's framework provides the binarized activation function "leaky tanh", whose forward pass is the sign function

$$a_o = \operatorname{sign}(a_i),$$

with a backward gradient of 1 for $|a_i| \le 1$ and $\alpha$ otherwise, where $a_o$ represents the output of the activation, $a_i$ represents the input of the activation, and $\alpha$ is 0.2 in Larq.
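To make the two backward rules concrete, the sketch below re-implements them with tf.custom_gradient: a clipped straight-through sign and a "leaky tanh"-style sign whose out-of-range gradient is the small slope α = 0.2 quoted above. These are illustrative re-implementations in the spirit of Larq's built-in quantizers, not Larq's actual source.

```python
import tensorflow as tf

T_CLIP = 1.0  # gradient-clipping threshold t_clip from the STE formulation
ALPHA = 0.2   # leak slope used by "leaky tanh", as quoted above

@tf.custom_gradient
def ste_sign(x):
    # Forward: binarize to {-1, +1}; backward: identity gradient inside
    # [-T_CLIP, T_CLIP] and zero outside (clipped straight-through estimator).
    def grad(dy):
        return dy * tf.cast(tf.abs(x) <= T_CLIP, x.dtype)
    return tf.sign(x), grad

@tf.custom_gradient
def leaky_tanh_sign(a):
    # Same forward pass, but saturated activations still receive a small
    # gradient ALPHA instead of being cut off entirely.
    def grad(dy):
        inside = tf.cast(tf.abs(a) <= 1.0, a.dtype)
        return dy * (inside + ALPHA * (1.0 - inside))
    return tf.sign(a), grad

x = tf.constant([-2.0, -0.5, 0.3, 1.7])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = leaky_tanh_sign(x)
print(y.numpy(), tape.gradient(y, x).numpy())  # gradients: [0.2, 1.0, 1.0, 0.2]
```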
DoReFa-Net's binarization weight function uses a $k$-bit representation of the weights with $k > 1$ and applies the STE $f^k_w$ to the weights. The backward pass is

$$\text{Backward:}\quad \frac{\partial l}{\partial r_i} = \frac{\partial r_o}{\partial r_i}\,\frac{\partial l}{\partial r_o} \quad (6)$$

where $r_i$ is the real-valued input and $r_o$ is the binary output, as in Equation (3). The forward function first normalizes the weights,

$$w_{01} = \frac{\tanh(w_i)}{2\max(|\tanh(\mathbf{w})|)} + \frac{1}{2},$$

where the maximum is taken over all weights in the layer. It is worth noting that the tanh function constrains the weight values to the range [0, 1] prior to quantization into $k$ bits: the normalization is designed to produce a number in [0, 1] that reaches 1 for the weight of highest magnitude. The range is then adjusted to [−1, 1] by the subsequent affine transform, $f^k_w(w_i) = 2\,\mathrm{quantize}_k(w_{01}) - 1$. This approach ensures standardized and bounded weight values for subsequent computation.

Feature Restoration Strategy

This section presents our three feature restoration strategies, i.e., the basic, inter-block, and intra-block feature restoration strategies.

Basic Feature Restoration Strategy

According to the literature [6], the features in higher layers of the network suffer from more severe information loss, which leads to gradient vanishing during the training process. Thus, in the basic feature restoration strategy, basic features extracted from the basic block were added to higher convolutional layers to provide more information. Regarding which convolutional layer accepts the basic features, this work considers two schemes, as Figure 2 shows: restoration for a single layer and restoration for multiple layers. In the restoration-for-a-single-layer scheme, basic features were added to a single layer, ranging from the second layer of Block 2 to the last ReLU; thus, there are 16 ways in total. In the restoration-for-multiple-layers scheme, basic features were supplied to multiple layers at the same time, with the accepting layers added backwards from the last ReLU, as Figure 2 shows. There are also 16 ways in total. Experiments were performed in Section 4 to compare the effect of restoration at different layers and to find the best of these 32 ways.
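A compact sketch of the reconstructed DoReFa-Net weight transform: tanh-normalize the layer's weights into [0, 1], quantize to k bits with a straight-through round, then rescale to [−1, 1]. The bit width below is an illustrative choice, not a value fixed by the paper.

```python
import tensorflow as tf

@tf.custom_gradient
def ste_round(x):
    # Straight-through rounding: round forward, identity gradient backward.
    return tf.round(x), lambda dy: dy

def dorefa_weight(w: tf.Tensor, k: int = 2) -> tf.Tensor:
    t = tf.tanh(w)
    # Normalize into [0, 1]; the max runs over all weights in the layer.
    w01 = t / (2.0 * tf.reduce_max(tf.abs(t))) + 0.5
    levels = 2.0 ** k - 1.0
    wq = ste_round(w01 * levels) / levels  # k-bit quantization
    return 2.0 * wq - 1.0                  # affine rescale to [-1, 1]

w = tf.random.normal([3, 3, 64, 64])
wq = dorefa_weight(w, k=2)
print(tf.unique(tf.reshape(wq, [-1]))[0].numpy())  # at most 2^k distinct levels
```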
Inter-Block Feature Restoration Strategy

For the inter-block feature restoration strategy, the output of the first convolutional layer of a block is defined as that block's inter-block features. These were fed into the convolutional layers of other blocks. As Figure 3 shows, this includes three restoration schemes, abbreviated "single block-single layer", "multiple blocks-single layers", and "multiple blocks-multiple layers". Here, "single block-single layer" means the inter-block features of the current block were added to a single convolutional layer of another block, as the black dotted line in Figure 3 shows; "multiple blocks-single layers" means the inter-block features of the current block were simultaneously added to a single convolutional layer at the same location in the other blocks, as the blue dotted line in Figure 3 shows; and "multiple blocks-multiple layers" means the inter-block features of multiple blocks were supplied to multiple convolutional layers of other blocks at the same time, as the red dotted line shows. For Baseline-20, there are 4 blocks, leading to 42 restoration ways in total. Experiments were performed in Section 4 to compare the effect of all 42 ways.

Intra-Block Feature Restoration Strategy

The output of the first convolutional layer of a block is defined as the basic features of this block. The intra-block feature restoration strategy extracts the basic features of a block and adds them into the remaining layers of that block, as Figure 4 shows. It includes 2 restoration schemes, abbreviated "single block" and "multiple blocks". Here, "single block" represents a restoration scheme in which the basic features of a block were added to one or more convolutional layers in that block, and "multiple blocks" denotes a scheme in which multiple blocks perform "single-block" restoration at the same time. For Baseline-20, there are 21 restoration ways in total. Experiments were performed in Section 4 to compare the effect of all 21 ways.

How Features Are Restored

ResNet applies residual learning to every few stacked layers. Figure 5 shows the 18- and 34-layer variants of the ResNet building block. The block can be defined as

$$y = F(x, \{w_i\}) + x,$$

where $x$ and $y$ are the input and output vectors of the considered layers and $F(x, \{w_i\})$ represents the residual mapping to be learned; the features extracted by the convolutional operations are added afterwards. Unlike a shortcut connection, feature restoration supplies features to later convolutional layers. We define this operation as

$$y = F(x, \{w_i\}) + f(x_s),$$

where $x$ is the input vector, $y$ is the output vector, and $x_s$ represents the features to be restored. The function $F$ represents the restoration feature mapping to be learned. The function $f(x_s)$ aligns the dimensions of $x_s$ with those of $x$, using a full-precision convolutional layer with a kernel size of 1 × 1 and a stride that is a multiple of 2. The restoration operation is shown in Figure 5a-c.
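The restoration operation y = F(x, {w_i}) + f(x_s) translates directly into a Larq/Keras layer: a binarized convolution for F plus a full-precision 1 × 1 convolution that strides x_s down to the target resolution. The sketch below is a minimal assembly with illustrative layer widths and strides, not the exact BinVPR configuration.

```python
import tensorflow as tf
import larq as lq

def restored_binary_layer(x, x_s, filters, restore_stride):
    # F(x, {w_i}): binarized convolution (inputs and weights quantized).
    y = lq.layers.QuantConv2D(
        filters, 3, padding="same",
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        use_bias=False)(x)
    y = tf.keras.layers.BatchNormalization()(y)
    # f(x_s): full-precision 1x1 conv aligns channel count and spatial size.
    f_xs = tf.keras.layers.Conv2D(filters, 1, strides=restore_stride,
                                  padding="same", use_bias=False)(x_s)
    return tf.keras.layers.ReLU()(tf.keras.layers.Add()([y, f_xs]))

inp = tf.keras.Input((56, 56, 128))    # basic features x_s (not binarized)
x = tf.keras.layers.MaxPool2D(4)(inp)  # stand-in for the intermediate blocks
x = tf.keras.layers.Conv2D(384, 1)(x)
out = restored_binary_layer(x, inp, filters=384, restore_stride=4)
model = tf.keras.Model(inp, out)
model.summary()
```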
Designing the Binary Network

We explored in depth the gradient-vanishing problem encountered in the training process of BNNs. After analysis, we found that gradient vanishing is mainly due to the limited representational ability of the binarization method, which makes the accumulated gradient insufficient during backpropagation. To address this problem, some features need to be restored in the binary network. Which features should be restored, and how they should be restored, thus became the central questions of our research.

Which features should be restored? Restoring features that retain more information can solve the problem of gradient vanishing. Our investigation found that basic features play a key role in solving the gradient-vanishing problem. In contrast, intra-block and inter-block features failed to alleviate this phenomenon effectively: even when we added all the intra-block and inter-block features to the high-level network, the problem remained. Our further analysis is that these features show different effects because the basic features are not binarized, whereas the inter-block and intra-block features are. Binarized features lose a lot of information, making it difficult to solve gradient vanishing; restoring features that have not been binarized effectively resolves the phenomenon.

How should features be restored? We adopted a layer-by-layer feature restoration, from the high layers to the low layers of the network, continued until the gradient-vanishing phenomenon disappears. During the experiments, we first tried to restore the basic features to the low-level network, but the effect was insignificant. We then turned to the high-level network for feature restoration and found that this solved the vanishing gradient problem. The lower layers lose less information during binarization, so they accumulate enough gradient in backpropagation and show the gradient-vanishing phenomenon less. However, as the number of network layers increases, the information loss is gradually aggravated, the accumulated gradient is gradually reduced, and the gradient-vanishing phenomenon becomes increasingly apparent. Therefore, feature restoration should start from the high-level network and advance layer by layer toward the lower layers until the gradient-vanishing problem is solved.

Experiments

This section begins with an introduction to the experimental dataset in Section 4.1. Subsequently, a comprehensive ablation study is conducted in Section 4.2 to thoroughly evaluate the efficacy of the proposed feature restoration method. Following this, we conducted experiments to determine the optimal combination of activation and weight functions to address the challenge of binarizing input activations and weights. Finally, in Section 4.4, we compare our work with other state-of-the-art binary networks and full-precision neural networks regarding accuracy and model size.
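The combination search just outlined amounts to a nested loop over quantizer names. A minimal sketch under the Larq framework is given below; the tiny stand-in model, the dataset placeholder, and the quantizer subset are illustrative, and the training call is left commented out so the skeleton runs as-is.

```python
import itertools
import larq as lq
import tensorflow as tf

# Quantizer aliases to pair; a subset of Larq's built-ins for illustration.
ACTIVATIONS = ["approx_sign", "ste_sign", "swish_sign", "dorefa_quantizer"]
WEIGHTS = ["approx_sign", "ste_sign", "swish_sign", "dorefa_quantizer"]

def build_model(act_q, w_q, n_classes=5):
    # Stand-in for BiResNet-18/34: one binarized conv is enough to wire up
    # the (activation, weight) quantizer pair being tested.
    return tf.keras.Sequential([
        lq.layers.QuantConv2D(32, 3, padding="same", input_quantizer=act_q,
                              kernel_quantizer=w_q,
                              kernel_constraint="weight_clip",
                              input_shape=(64, 64, 3)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

results = {}
for act_q, w_q in itertools.product(ACTIVATIONS, WEIGHTS):
    model = build_model(act_q, w_q)
    model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    # history = model.fit(train_ds, validation_data=val_ds)  # supply "Forest" data
    # results[(act_q, w_q)] = max(history.history["val_accuracy"])
print(sorted(results.items(), key=lambda kv: -kv[1])[:3])  # best pairs first
```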
Training Dataset

We chose the Places365 [16] dataset as the experimental dataset. Places365 conforms to human visual cognition and can be used to train artificial neural networks for advanced visual tasks. It contains over a million images in 365 image categories, each with 3000 to 5000 images. VPR faces the challenges of perceptual aliasing and perceptual variability in changing environments. Due to seasonal changes, lighting conditions, and object occlusion, a place's appearance can change greatly, leading to perceptual variability; in addition, when the structures of different places are similar, perceptual aliasing easily arises. These challenges are particularly prominent in complex forest environments, and therefore we grouped forest scenes separately. We divided the other scenes into indoor scenes, outdoor human-made scenes, and natural scenes. Torii et al. [39] subdivided scenes into two broad categories based on whether or not the image has a repetitive structure: human-made and natural scenes.

Further, human-made scenes are divided into indoor scenes and outdoor human-made scenes. Outdoor human-made scenes have many repetitive structures and are greatly affected by natural factors such as light and seasonal changes. In contrast, in indoor scenes, VPR is usually unaffected by light and seasonal changes, while repetitive structures remain plentiful. The structures in natural environments are similar but not repetitive and are also affected by natural factors. Therefore, to improve the accuracy of VPR, we subdivided the Places365 dataset into four categories (Table 1) and used these data to test and verify the BNNs in the experiments. Specifically, "Forest" was constructed by selecting a few scene classes from Places365, including deciduous forest, desert vegetation, rainforest, bamboo forest, and tree farm. "Natural scene" comprises 48 classes of natural-environment images from Places365, including mountains, rivers, lakes, seas, etc. "Indoor scene" comprises 156 classes of indoor images such as bars, classrooms, etc., while "Outdoor human-made building" comprises 156 categories of images representing outdoor buildings. See Table 1 for more details.

Evaluation Metric

We used Top-1 accuracy in this paper. Top-1 accuracy means the best guess (the class with the highest probability) is the correct result. Top-5 accuracy means the correct result is among the top five best guesses (the five classes with the highest probabilities). Compared with Top-5 accuracy, Top-1 accuracy is stricter and more straightforward: it measures the model's ability to predict the most likely class correctly. Moreover, some of the datasets we trained on contain at most 10 scene classes, for which Top-5 accuracy cannot provide enough discrimination, as most predictions fall within the top five. Therefore, we chose Top-1 accuracy as the evaluation metric.

Evaluation of the Feature Restoration Strategy

A series of experiments was performed to assess the effectiveness of the different types of feature restoration on "Forest". The evaluation index is recognition accuracy.
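For reference, Top-1 accuracy reduces to a single argmax comparison per sample, as in this small sketch with made-up logits and labels:

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    # A prediction counts as correct only if the argmax class equals the label.
    return float(np.mean(np.argmax(logits, axis=1) == labels))

logits = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.2, 0.6]])
labels = np.array([1, 2, 2])
print(top1_accuracy(logits, labels))  # 2 of 3 correct -> 0.666...
```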
Firstly, in order to evaluate the effectiveness of the proposed basic feature restoration strategy and find the optimal position to add features, restoration for a single layer and restoration for multiple layers, comprising 32 ways in total, were compared. For restoration for a single layer, we added basic features to different places in the network, ranging from the first candidate layer to the last layer. We found that supplying basic features to the last convolutional layer or the last ReLU layer achieved the best results; both can overcome gradient vanishing, with recognition accuracies of 74.75% and 73.25%, respectively. However, supplying basic features to lower layers cannot solve the gradient-vanishing problem. Therefore, it can be concluded that adding basic features to higher layers is more beneficial than adding them to lower layers.

In the restoration-for-multiple-layers scheme, basic features were supplied to multiple layers at the same time, with the aim of determining which layers are the best choices to accept basic features. Based on the above conclusion, basic features were restored from the last ReLU down to a few lower layers for Baseline-20 and Baseline-36. Results are shown in Figure 6, where the y-axis shows accuracy. Gradient vanishing appears when accuracy is lower than 0.3. The x-axis gives the places where basic features were restored. "Zero" represents the plain network, with no features supplied. The abbreviation "LR." represents accepting basic features at the last ReLU layer of the whole network. "1st-LC" represents accepting basic features at the last ReLU layer and the last convolutional layer of the whole network. Similarly, "2nd-LC" represents accepting basic features at the last ReLU layer and the last two convolutional layers of the whole network, and so forth. Figure 6 indicates that Baseline-20 can solve the gradient-vanishing problem when basic features are supplied at "LR." and beyond. Moreover, when basic features were supplied at "1st-LC", it achieved the best accuracy, nearly the same value as ResNet-18. In addition, the results of Baseline-36 show that it can solve the gradient-vanishing problem when basic features are supplied at "6th-LC" and beyond. When basic features were supplied at "7th-LC", it achieved the best accuracy, nearly the same value as ResNet-34.

It can therefore be concluded that basic feature restoration is able to solve the gradient-vanishing problem. Meanwhile, the deeper the whole network, the more high-level convolutional layers need to accept the restored basic features after binarization of the full-precision model. Moreover, basic feature restoration is beneficial for recovering the lost accuracy.

Secondly, the proposed inter-block and intra-block feature restoration strategies were also evaluated. For inter-block feature restoration, a total of 42 restoration ways were compared, and the results are shown in Table 2. The first three parts, "Block 2", "Block 3", and "Block 4", belong to the inter-block feature restoration of "single block-single layer". For example, in the "Block 2" part, the cell at row "2nd" and column "1st B1" indicates that the inter-block features of Block 1 were supplied to the second convolutional layer of Block 2.
Other representations are similar. The middle parts, "Block 2 + 3", "Block 3 + 4", and "Block 2 + 3 + 4", belong to the inter-block feature restoration of "multiple blocks-single layers". For example, in the "Block 2 + 3" part, the cell at row "2nd" and column "1st B1" indicates that the inter-block features of Block 1 were supplied to the second convolutional layer of Blocks 2 and 3. Other representations are similar. The last three parts, "Block 2 + 3", "Block 3 + 4", and "Block 2 + 3 + 4", belong to the inter-block feature restoration of "multiple blocks-multiple layers". For example, in the "Block 2 + 3" part, the cell at row "2nd + 3rd" and column "1st B1" indicates that the inter-block features of Block 1 were supplied to the second and third convolutional layers of Blocks 2 and 3 at the same time. Other representations are similar. From this table, we can see that inter-block feature restoration cannot solve the gradient-vanishing problem.

For intra-block feature restoration, as Section 3 described, there are 21 restoration ways in total. The experimental results are shown in Table 3. In the table, the rows from "1st layer of B1" to "1st layer of B4" belong to the intra-block feature restoration of "single block". For example, the cell at row "1st layer of B1" and column "2nd" indicates that the basic features of Block 1 were supplied to the second convolutional layer of Block 1, while the cell at row "1st layer of B1" and column "2nd + 3rd" indicates that the basic features of Block 1 were supplied to the second and third convolutional layers of Block 1, etc. The rows from "1st layer of B1+2" to "1st layer of B1+2+3+4" belong to the intra-block feature restoration of "multiple blocks". For example, the cell at row "1st layer of B1+2" and column "2nd" indicates that the basic features of Block 1 were supplied to the second convolutional layer of Block 1 and, at the same time, the basic features of Block 2 were supplied to the second convolutional layer of Block 2. As another example, the cell at row "1st layer of B1+2" and column "2nd + 3rd" indicates that the basic features of Block 1 were supplied to the second and third convolutional layers of Block 1, while the basic features of Block 2 were supplied to the second and third convolutional layers of Block 2.
We can see that intra-block feature restoration cannot solve the gradient-vanishing problem. BNNs exhibit a more severe gradient-vanishing problem as the network depth increases; therefore, in inter-block and intra-block feature restoration, the features supplied from high layers are futile.

Finally, we tested the effect of combining basic feature restoration with intra-block or inter-block feature restoration. Figure 7 shows the results of combining basic feature restoration with inter-block feature restoration. In Figure 7, the first convolutional layer of the current block was fed into the first convolutional layer of the next block, forming a connection termed a "skip". Here, the abbreviation "1st-skip" expresses that the first and second blocks are connected (i.e., the first convolutional layer of the first block was fed into the first convolutional layer of the second block), "2nd-skip" expresses that the second and third blocks are connected, and so forth. Meanwhile, "full-skip" means all blocks are connected in sequence, from the first block to the last ReLU layer of the entire network. From Figure 7, we can see that the recognition accuracies of Baseline-20-basic and Baseline-36-basic are higher than those of Baseline-20-inter and Baseline-36-inter. This indicates that supplying inter-block features actually weakens the performance of a network with basic feature restoration.

In Figure 8, three blocks of Baseline-20 and Baseline-36 were fed intra-block features. The figure shows that the recognition accuracies of Baseline-20 and Baseline-36 with only basic feature restoration are higher than those with both basic and intra-block feature restoration. This suggests that supplying intra-block features also weakens the performance of a network with basic feature restoration. Thus, the above experimental results further demonstrate that the basic feature restoration strategy is optimal. Both the inter-block and intra-block feature restoration strategies supplied excess features from high network layers, yet high layers always lose more information, and the deeper the layers, the more information is lost after binarization of the full-precision model. This also brings in serious activation saturation, and the accuracy decreases.

Therefore, the proposed BinVPR adopts the basic feature restoration strategy, and BinVPR's network structure can be constructed accordingly. BinVPR-20 is established based on Baseline-20, as shown in Figure 9: basic feature restoration is performed on the last ReLU and the last convolutional layer of the network, achieving the highest accuracy. Similarly, BinVPR-36 is established based on Baseline-36, as shown in Figure 10: basic feature restoration is performed on the last ReLU and the last 1-8 convolutional layers, achieving the highest accuracy.
Although differences in network configurations and datasets may result in varied BNN structures, our experiments found key principles for designing effective binary network architectures that address the gradient-vanishing problem encountered during training. The key principles are as follows: (1) It is essential to restore basic features to address gradient vanishing. The phenomenon cannot be avoided by restoring binarized inter-block or intra-block features, because these features lose too much information after binarization. (2) The gradient-vanishing problem appears during the training phase and is caused by the loss of information in the high levels of BNNs. Our approach first restored basic features to the top layer of the network and then gradually extended the restoration to lower layers until the problem was solved; this determined the structure of the network.

Optimization for the Combination of Activation and Weight Functions

Taking a binarized ResNet-18/34 with a full-precision short-cut connection (abbreviated "BiResNet-18" and "BiResNet-34") as an example, this work performed an exhaustive pairing procedure under the Larq framework over the input activation and weight signs in BiResNet-18/34. In this section, experiments were conducted on "Forest"; the 48 potential combinations described in Section 3.4 were tested, and the results are given in Tables 4 and 5.

The six rows in Tables 4 and 5 give the weight functions, including the Approx, STE, STE Tern, Swish, DoReFa-Net, and Magnitude Aware signs, while the eight columns give the activation functions, including the Approx, STE, STE Tern, Swish, DoReFa-Net, Magnitude Aware, Hard Tanh, and Leaky Tanh signs. We can see that the combination of "leaky tanh" as the activation function and "DoReFa-Net" as the weight function achieves the best accuracies for both BiResNet-18 and BiResNet-34, i.e., 77% and 75.75%. Across the eight activation functions, we computed the average accuracy for each one; the results show that "leaky tanh" (with accuracy 63.04%) and "hard tanh" (with accuracy 63.54%) obtained the best performance for BiResNet-18, while "leaky tanh" [8] (with accuracy 63.54%) and "hard tanh" [8] (with accuracy 63.05%) achieved the better accuracies for BiResNet-34. In contrast, "Magnitude Aware" is unsuitable for binarizing input activations in BiResNet-18/34 because it obtained "N/A" in all rows, i.e., gradient vanishing emerges when it is combined with each weight function. Meanwhile, for BiResNet-34, more "N/A"s appear in Table 5: e.g., "Approx" also received "N/A" in all rows, and the "Swish" sign received 5 "N/A"s. With respect to the weight functions, both the "Approx sign" (with accuracy 63.22%) and "DoReFa-Net" (with accuracy 63.41%) achieved better performance for BiResNet-18/34. It is worth noting that, having received "N/A" in all columns, "STE Tern" cannot be used to quantize the weights.

In terms of recognition accuracy, the proposed BinVPR-20/36 surpasses all of the other binary networks on the four-class datasets and achieves performance comparable to full-precision networks. The performance of ResNet-34 is weaker than that of ResNet-18 on outdoor human-made scenes and forests, which is a genuine effect. Compared with indoor and natural scenes, outdoor human-made buildings have a single repetitive structure and relatively simple image features; ResNet-34 will capture useless features, resulting in performance degradation. The forest dataset has fewer than ten scene classes; ResNet-34 may overfit during training, and its performance will be worse than that of ResNet-18.
In particular, it can be seen that gradient vanishing appears for XNOR-Net and RealToBinNet-34 on "Natural scene" and "Indoor scene". "Natural scene" consists of natural images, while "Indoor scene" focuses on indoor environments. Although these two datasets contain a wealth of information, the binarization process introduces so much loss of information that XNOR-Net and RealToBinNet-34 are not able to overcome the gradient-vanishing problem. In addition, the proposed BinVPR-20/36 is superior to AlexNet, and BinVPR-Leaky-20 presents the best performance on all datasets, with accuracies of 78.25%, 53.85%, 52.16%, and 54.32%. BinVPR-Leaky-36 has a slightly lower accuracy than BinVPR-Leaky-20, but it surpasses ResNet-34 in accuracy. In summary, binarizing neural networks can significantly reduce the model size to meet the requirements of mobile robots. Moreover, by supplying basic features to high layers of the network and combining "leaky tanh" as the activation function with "DoReFa-Net" as the weight function, the proposed BinVPR model achieves a significant improvement in recognition accuracy, and the gradient-vanishing problem also disappears.

Conclusions

This work proposed a new BinVPR model for BNN-based VPR to solve the gradient-vanishing problem that appears in the training process and to handle the marked drop in accuracy. For the gradient-vanishing problem, three feature restoration strategies were explored to add the lost information into higher layers, i.e., basic feature restoration, inter-block feature restoration, and intra-block feature restoration. The experimental results show that basic feature restoration is able to solve the gradient-vanishing problem. Meanwhile, the deeper the whole network, the more information is lost after binarization of the full-precision model; thus, more high convolutional layers are needed to restore basic features. Furthermore, we identified two principles for designing a BNN structure that addresses gradient vanishing: restore basic features, and restore basic features from higher layers to lower layers in turn. To improve the dropped accuracy, a brute-force approach was used to find the optimal combination of binarized activation and binarized weight functions in the Larq framework; "leaky tanh" was selected as the activation function and "DoReFa-Net" as the weight function in BinVPR as the optimal combination.

Finally, a baseline network based on ResNet was constructed, and the proposed BinVPR was established to realize visual place recognition. The performance of BinVPR was tested on public datasets. It was compared with state-of-the-art binary neural networks and full-precision networks (i.e., AlexNet, ResNet-18, and ResNet-34) in terms of parameters, model size, and recognition accuracy. The results show that BinVPR outperforms state-of-the-art BNN-based approaches and achieves the same accuracy with only 1% and 4.6% of the model sizes of AlexNet and ResNet, respectively. Further work will focus on exploring complementary efficiency features in deeper networks and on binarizing the VLAD layer in the VPR community.

Funding: This research was funded by the National Natural Science Foundation of China (Grant numbers 62203059 and 32071680).

Institutional Review Board Statement: Not applicable.
Figure 1.The baseline network: Baseline-20/36.For Baseline-36, "binary convolution layers" represents a few binarized convolutional layers; "xn" represents that it is constructed by cutting n corresponding binarized convolutional layers in Baseline-18, e.g., in Block 1; "x2" represents that "binary convolution layers" is established by cutting 2 corresponding binarized convolutional layers in Baseline-18, and the number of binary convolutional layers is 4; "x3" means the number of binary convolutional layers is 6, etc. Figure 2 . Figure 2.The strategy of basic feature restoration.In the restoration for a single layer way, basic features were added to a single layer.Here, we just give a case when basic features were added to the last ReLU.In the restoration for multiple layers way, basic features were supplied to multiple layers at the same time.Here, we just give a case when basic features were added to the last convolutional layer and the last ReLU. Figure 3 . Figure3.The strategy of inter-block feature restoration.For "single block -single layer", the black dotted line shows a case where the inter-block features of Block 1 were added to the last layer of Block 4. For "multiple blocks -single layers", the blue dotted line shows a case that the inter-block features of Block 2 were added to the second layer of Block 3 and the second layer of Block 4. For "multiple blocks-multiple layers", the red dotted line shows a case that the inter-block features of Block 1 were added to the second and third layers of Blocks 2, 3, and 4, thus 6 convolutional layers accepted the restoration. Figure 4 . Figure 4.The strategy of intra-block feature restoration.The black dotted line shows a case where the basic features of Block 1 were added to the last convolutional layer of Block 1.The red dotted line gives a case where the basic features of Block 2 were added to the third and last convolutional layers; meanwhile, the basic features of Block 4 were added to the third and last convolutional layers. Figure 5 . Figure 5.The original shortcut connection in ResNet18/34 and ours method: supply features for ReLU and convolutional layer.(a) Origin Shortcut.(b) Supply features in convolutional layer.(c) Supply features in ReLU. Figure 6 . Figure 6.Results of basic feature restoration for multiple layers.The y-axis shows accuracy, and the x-axis gives places where basic features were restored. Figure 7 . Figure 7. Results of combinating basic features and Inter-block features restoration.The y-axis shows recognition accuracy, and the x-axis presents cases where the block was connected in inter-block feature restoration.Baseline-20/36-basic represents the results when only basic features were added.Baseline-20/36-inter represents the results when basic features and inter-block feature restorations were combined. Figure 8 . Figure 8. Results of combinating basic features and intra-block features.y-axis shows recognition accuracy, and the x-axis represents the layers that accept the first convolutional layer in each block."Basic" represents only basic features were added to Baseline-20 and Baseline-36.(a) Combination of basic features and intra-block in Baseline-20.(b) Combination of basic features and intra-block features in Block 1 and Block 2 of Baseline-36.(c) Combination of basic features and intra-block features in Block 3 of Baseline-36. Table 1 . Summary of datasets for evaluation.
Magnetic Fluid Deformable Mirror with a Two-Layer Layout of Actuators

In this paper, a new type of magnetic fluid deformable mirror (MFDM) with a two-layer layout of actuators is proposed to improve the correction performance for full-order aberrations with a high spatial resolution. The shape of the magnetic fluid surface is controlled by the combined magnetic field generated by a Maxwell coil and a two-layer array of miniature coils. The upper-layer actuators, which have a small size and high density, are used to compensate for small-amplitude high-order aberrations, and the lower-layer actuators, which have a large size and low density, are used to correct large-amplitude low-order aberrations. The analytical model of this deformable mirror is established, and its aberration correction performance is verified by experimental results. As a new kind of wavefront corrector, the MFDM has major advantages such as large stroke, low cost, and easy scalability and fabrication.

Introduction

Adaptive optics (AO) is a technology that enables complex aberration corrections for a wide range of applications [1,2]. Conventional AO systems utilize spatial light modulators [3,4] or solid deformable mirrors (DMs) [5,6] to compensate for the phase fluctuations that result from non-uniformity in the properties of the medium through which light travels or from imperfections in the geometry of the optical components. Spatial light modulators are available in both reflective and transmissive modes. This type of wavefront corrector has the advantage of the very high spatial resolution provided by extremely small liquid crystals. However, they are limited by the relatively small magnitude of correction they can provide, usually in the range of a few micrometers. Solid deformable mirrors have evolved into the most widely used wavefront correction elements in optical systems and can offer relatively high strokes. Generally, solid deformable mirrors consist of a solid reflecting membrane or plate surface to which an actuator structure is attached. Through manipulation of the actuators, the shape of the mirror can be modified to compensate distorted wavefronts. The common drawbacks of solid deformable mirrors are the high cost per actuator channel and the complex fabrication process. Most currently available solid deformable mirrors offer small inter-actuator strokes, and the maximum deflection magnitudes are limited to tens of micrometers.

In practice, studies have shown that in many applications, such as laser beam shaping [7][8][9] and ophthalmic imaging systems [10][11][12], the AO system needs to deal effectively with low-amplitude high-order aberrations and, simultaneously, with low-order aberrations of very high amplitude. For instance, high-resolution retinal imaging technology based on AO plays an important role in vision science and will aid the early clinical diagnosis of retinal diseases. In view of the characteristics of ocular aberrations across a large and diverse population, e.g., myopic eyes, a number of adaptive optics systems using two deformable mirrors have been designed [10][11][12]. The first, a large-stroke DM with a limited number of actuators, is used to correct large-amplitude low-order aberrations, and the second, with a low stroke but a high spatial correction resolution, is used to compensate for the small-amplitude high-order aberrations.
However, its practical application in ophthalmology is restricted by the complexity and the high price. In [13][14][15], a new type of liquid deformable mirror based on the actuation of a magnetic fluid is proposed. Though the liquid mirror has the disadvantage that it is constrained to remain horizontal, the magnetic fluid deformable mirror (MFDM) has major advantages such as large strokes, low cost per actuator and easy scalability. Both the single-actuator and inter-actuator strokes can easily reach more than 100 µm with limited power consumption. However, in order to produce a large mirror surface deformation, the size of the electromagnetic coils is normally designed to be large, which results in a low density of actuators and is thus unfavorable for the correction of high-order aberrations. In order to realize the correction of full-order aberrations with high spatial resolution, the design of a two-layer layout of miniature electromagnetic coils is adopted in this paper. The dynamics model of the mirror is established, and the aberration correction performance is verified by the simulation and experimental results. As a new kind of wavefront corrector, the proposed MFDM has major advantages such as large stroke, low cost, easy scalability and a simple fabrication process, and thus can be easily customized for different applications. Design of Magnetic Fluid Deformable Mirror (MFDM) As shown in Figure 1, the primary elements of the MFDM are a layer of magnetic fluid, a thin film of a reflective material coated on the free surface of the fluid, a two-layer layout of miniature electromagnetic coils placed underneath the fluid layer, and a Maxwell coil. The properties of the magnetic fluid used in this paper are given in Table 1; magnetic fluids are stable colloidal suspensions of nano-sized, single-domain ferri-/ferromagnetic particles and can be coated with a silver liquid-like thin film to improve the reflectance.
The upper-layer actuators, with their small size and high density, are used to compensate for small-amplitude high-order aberrations, and the lower-layer actuators, with their large size and low density, are used to correct large-amplitude low-order aberrations. The electromagnetic coils are conventional circular coils wound on cylindrical bobbins, and their physical parameters are given in Table 2. Each layer of coils is disposed in a hexagonal array; the upper-layer coils are radially spaced at 2.1 mm from center to center and the lower-layer coils at 4.2 mm. In order to linearize the response of the actuators, an external uniform magnetic field is produced by the Maxwell coil. As shown in Figure 1, the Maxwell coil consists of three separate coils, where each of the top and bottom coils has a radius of √(4/7)R and lies at a distance of √(3/7)R from the plane of the middle coil of radius R = 100 mm [16]. The parameters are given in Table 3. The three coils, wound with American wire gauge (AWG) 25 magnet wire, follow a turn ratio of 64:49 for the top and bottom coils relative to the middle coil [16]. In addition, magnetic fluids typically show low reflectance to light and can be coated with silver liquid-like thin films to improve the reflectance [17,18]. In this paper, the self-assembly method is used to prepare the silver liquid-like thin film for the MFDM. First, the silver nano-particle solution was centrifuged and the supernatant removed, and ethanol was then added to purify the silver nano-particles. The obtained silver nano-particles were added to a mixed solution of ethanol and dodecanethiol, kept at room temperature for 24 h, and then centrifuged. Finally, ethyl acetate was added to the silver nano-particles obtained from the previous step, and this solution was added drop by drop to the surface of the magnetic fluid. After the ethyl acetate evaporated, the hydrophobic dodecanethiol-encapsulated silver nano-particles automatically stacked and spread on the surface of the magnetic fluid to form a large-scale domain of silver liquid-like film.
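For concreteness, the Maxwell coil geometry described above can be written out numerically. This is a minimal sketch, not taken from the paper's design files: the function name and the middle-coil turn count are assumptions, the 64:49 turn ratio is quoted from the text, and the √(4/7) and √(3/7) factors are the classical Maxwell coil relations (in which the outer coils carry 49/64 of the middle coil's ampere-turns).

```python
import math

def maxwell_coil_geometry(R_mm=100.0, middle_turns=64):
    """Geometry of a three-coil Maxwell arrangement with middle-coil
    radius R; turn counts here are illustrative, not measured values."""
    outer_radius = math.sqrt(4.0 / 7.0) * R_mm   # ~75.6 mm
    outer_offset = math.sqrt(3.0 / 7.0) * R_mm   # ~65.5 mm above/below middle
    outer_turns = round(middle_turns * 49 / 64)  # 49 turns per 64 middle turns
    return {
        "middle": {"radius_mm": R_mm, "z_mm": 0.0, "turns": middle_turns},
        "top":    {"radius_mm": outer_radius, "z_mm": +outer_offset, "turns": outer_turns},
        "bottom": {"radius_mm": outer_radius, "z_mm": -outer_offset, "turns": outer_turns},
    }

if __name__ == "__main__":
    for name, coil in maxwell_coil_geometry().items():
        print(name, coil)
```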
A snapshot of the assembly of the mirror is shown in Figure 2. The two-layer layout of the miniature electromagnetic coils is placed within the Maxwell coil, and a container filled with a 1-mm-deep layer of ferrofluid, coated with the thin silver liquid-like film, sits on top of the miniature coils. Analytical Surface Dynamics Model of MFDM The proposed MFDM is represented by a cylindrical horizontal layer of a magnetic fluid as shown in Figure 3. The top free surface of the fluid layer is coated with a reflective film and serves as the deformable surface of the mirror. The deflection of the mirror surface at point (r_k, θ_k) is denoted by ζ(r_k, θ_k, t), where k = 1,2,3,...,K indexes a discrete set of surface locations. The magnetic field generated by any given coil, centered at the horizontal location (r_ij, θ_ij), is idealized as that of a point source of magnetic potential ψ_ij(t), where i = 1,2 is the ith layer of actuators and j = 1,2,3,...,J_i is the jth coil of each layer. The magnetic field itself is governed by Maxwell's equations. Since the magnetic field of the miniature coils is idealized as that of point sources of magnetic potential located at the fluid domain boundary, a current-free electromagnetic field can be assumed. Using this assumption, and further assuming that the displacement currents in the fluid are negligible, Maxwell's equations can be written as ∇ × H = 0, ∇ · B = 0 (1), where B is the magnetic flux density, which is related to the magnetic field H and the magnetization M by the constitutive relationship B = μ0(H + M) (2), where μ is the magnetic permeability of the magnetic fluid and μ0 is the magnetic permeability of free space. Assuming the magnetic fluid is linearly magnetized by the applied field, the magnetization vector M can be written as M = χH (3), where χ = μ/μ0 − 1 is considered to be a constant. Considering that the magnetic field extends into the space above and below the fluid layer, Maxwell's equations are applied to all three sub-domains marked in Figure 3 as (1)-(3). The scalar potentials ψ^(l), l = 1,2,3, describe the magnetic field vectors H^(l) in these sub-domains as H^(l) = −∇ψ^(l) (4). Using Equations (2)-(4), the magnetic flux density B^(l) in these sub-domains can be written in terms of the scalar potentials ψ^(l), l = 1,2,3, and the magnetic flux density B satisfies the principle of superposition. Assuming the fluid is irrotational, then, based on the principles of conservation of mass and momentum and the theory of magnetic fields, the perturbation part of the surface dynamic governing equations can be written as in [19], where ρ is the density of the fluid, σ is the surface tension, and φ and ψ^(l), l = 1,2,3, are the perturbation components of the fluid velocity potential and the magnetic potential, respectively. Applying the two boundary conditions of the fluid layer, the solutions with respect to the input ψ_ij(t) are obtained, where J_m(·) is the Bessel function of the first kind and λ is the separation constant. Considering that the miniature coils are located far from the walls of the fluid container, the boundary condition at r = R yields J_m(λR) = 0, which can be solved numerically and yields an infinite number of solutions ε_mn = λR, m = 0,1,2,..., n = 1,2,3,..., providing the eigenvalue λ_mn for each mode as λ_mn = ε_mn/R. Combining J_m(λr) and Θ(θ), we define the mode shapes H_mnc = J_m(λ_mn r)cos(mθ) and H_mns = J_m(λ_mn r)sin(mθ).
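Since the eigenvalues ε_mn are simply the zeros of the Bessel function J_m, they are easy to compute numerically. A minimal Python sketch of the mode-shape construction follows; the container radius R below is illustrative, as its numerical value is not given in this passage.

```python
import numpy as np
from scipy.special import jn_zeros, jv

def mode_shape(m, n, r, theta, R=0.01, kind="cos"):
    """Mode shape H_mnc / H_mns of the circular fluid layer:
    J_m(lambda_mn * r) * cos(m*theta) or sin(m*theta),
    with lambda_mn = eps_mn / R and J_m(eps_mn) = 0."""
    eps_mn = jn_zeros(m, n)[-1]   # n-th positive zero of J_m
    lam = eps_mn / R              # eigenvalue lambda_mn
    radial = jv(m, lam * r)
    angular = np.cos(m * theta) if kind == "cos" else np.sin(m * theta)
    return radial * angular

# Example: evaluate a few low-order mode shapes at one surface point
# (r in metres, theta in radians; values are illustrative).
r, theta = 0.004, 0.3
for m in range(3):
    for n in range(1, 3):
        print(m, n, mode_shape(m, n, r, theta))
```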
For any coil ψ_ij(t) in each layer, based on Equation (8) and the damping effect associated with the fluid viscosity η, the following surface dynamic equation with respect to the mode shape H_mnc can be obtained as Equation (12), where m = 0,1,2,... and n = 1,2,3,... The main idea of the derivation of Equation (12) is similar to the result for the MFDM with a single-layer layout of actuators, and more details can be found in [19]. A similar set of equations, Equation (13), can be obtained with respect to the mode shape H_mns, where m and n = 1,2,3,... The generalized displacements ζ_ijmnc(t) and ζ_ijmns(t), obtained from the solution of the second-order differential Equations (12) and (13) respectively, together with the corresponding mode shapes H_mnc and H_mns evaluated at any desired location (r_k, θ_k), give the total surface displacement at that location (Equation (14)). Based on Equations (12)-(14), it can be seen that the surface response ζ(r_k, θ_k, t) is linearly dependent on the input ψ_ij(t) introduced by each electromagnetic coil. It should be noted that using Equations (12)-(14), the static surface response model of the mirror with respect to the perturbed magnetic field produced by each actuator can be obtained. The parameters of the coils in both layers listed in Table 2 are then designed based on the static model of the MFDM so that the desired surface deflection of 5 µm by a single actuator in the upper layer and of 40 µm by one in the lower layer can both be produced. The ratio of the diameters of the coils in the lower and upper layers is finally rounded to a factor of two so that the same pupil can be covered by the actuators in each layer. Static Simulation of MFDM Based on the parameters listed in Tables 1-3, the magnetic fields of the Maxwell coil and the two layers of coils are simulated using COMSOL Multiphysics (version 4.4, COMSOL Inc., Stockholm, Sweden). As shown in Figure 4a, the magnetic field inside the Maxwell coil is uniformly distributed. When the input current of the Maxwell coil is 500 mA, the uniform magnetic field intensity at the center plane reaches 7.4 mT (see Figure 4b). Figure 5a shows the geometric model in COMSOL of the Maxwell coil, the center coils in the upper and lower layers, and the magnetic fluid. Figure 5b,c show the superposition of the magnetic field distribution curves generated on the mirror surface by each of the two center coils together with the Maxwell coil, respectively. The maximum perturbed magnetic field intensity produced by the coil in the upper layer with a current of 35 mA reaches 0.06 mT on the mirror surface, whereas that produced by the coil in the lower layer with a current of 50 mA reaches up to 0.28 mT. Based on the dynamics model of Equation (14), it can be derived that the maximum mirror surface displacements driven by the center coil in the upper layer or lower layer reach up to 5 µm and 40 µm, respectively. Figure 5d shows the superposition result of the magnetic field driven by the two center coils and the Maxwell coil together; the maximum magnetic field intensity on the mirror surface reaches 7.74 mT. The lower-layer coils can provide a large-stroke deflection due to their relatively large size compared with the upper-layer coils. Based on the structure parameters of the coils in each layer, the corresponding magnetic potentials are obtained in COMSOL and fed into the analytical model of the MFDM developed in MATLAB.
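The structure of Equations (12)-(14) is that of a forced, damped second-order oscillator per mode, with the total surface deflection obtained by superposition. The sketch below illustrates that structure under assumed modal coefficients; the paper's actual coefficients depend on the fluid properties and are not reproduced in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical modal parameters for a single (m, n) mode: natural frequency,
# damping, and input gain. These are placeholders, not the paper's values.
omega_mn, gamma_mn, b_mn = 2 * np.pi * 30.0, 15.0, 1.0e-6

def modal_rhs(t, y, psi):
    """Second-order modal dynamics:
    zeta'' + 2*gamma*zeta' + omega^2*zeta = b*psi(t)."""
    zeta, zeta_dot = y
    return [zeta_dot, b_mn * psi(t) - 2 * gamma_mn * zeta_dot - omega_mn**2 * zeta]

step = lambda t: 1.0 if t >= 0.01 else 0.0   # step input applied at t = 10 ms
sol = solve_ivp(modal_rhs, (0.0, 0.5), [0.0, 0.0], args=(step,), max_step=1e-3)

# The total surface displacement is the superposition of such modal responses,
# each weighted by its mode shape H_mnc(r_k, theta_k) (cf. Equation (14)).
print(sol.y[0, -1])
```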
When the Maxwell coil is turned on with a current of 500 mA, the simulation results of the surface deflection contour are shown in Figure 6. In Figure 6a, it can be seen that the maximum deflection magnitude is 5.16 µm at (0,0) when coil 1 in the upper layer (see Figure 1) is active at 35 mA. In Figure 6b, the maximum deflection magnitude is 41.54 µm at (4.2,0) when coil 2 in the lower layer (see Figure 1) is set to 50 mA. When both coils are active, Figure 6c indicates that the surface deflection of the mirror is the linear sum of the deflections generated by each of the two coils separately. Linear Additivity of the MFDM Response In this section, the experimental results of the surface response of the mirror for different cases are presented to verify the response characteristics of the MFDM. The surface deflection with respect to the different input currents was measured using a Polytec OFV 5000/552 and VIB-A-T31. During operation, the Maxwell coil was driven with a constant current of 500 mA, which produced a measured 7.43 mT uniform magnetic field inside the Maxwell coil. In Figure 7a, the points marked as "*" signify the peak surface deflections of the MFDM when coil 1 in the upper layer (see Figure 1) is active. The experimental peak surface deflections for the case when coil 2 in the lower layer (see Figure 1) is active are marked as "o", as shown in Figure 7b. The surface deflections varied linearly with the increasing currents, and both negative and positive deflections were achieved. As illustrated in Figure 7c, when the two coils are energized, the surface deflection ("∆") at a point midway between the two coils is the linear sum of the deflections ("*" and "o") at the point generated by each of the two coils separately.
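The additivity seen in Figures 6c and 7c can be mimicked with a toy model in which each coil contributes a Gaussian influence function and the surface responds linearly. A minimal sketch, with assumed widths (the 5 µm and 40 µm peaks match the quoted maxima, but the widths are illustrative only):

```python
import numpy as np

# Gaussian influence function: peak deflection above the energized coil,
# falling off with lateral distance d (widths are assumed, not measured).
def influence(d_mm, peak_um, width_mm):
    return peak_um * np.exp(-(d_mm / width_mm) ** 2)

# Upper-layer coil at x = 0 (5 um peak) and lower-layer coil at x = 4.2 mm
# (40 um peak); evaluate at the midpoint between them.
x_mid = 2.1
w_upper = influence(abs(x_mid - 0.0), 5.0, 2.5)
w_lower = influence(abs(x_mid - 4.2), 40.0, 3.2)

# Linear additivity: the combined deflection at the midpoint is the sum of
# the deflections each coil produces alone.
print(w_upper + w_lower)
```

With these illustrative widths, the single-coil value at one actuator spacing also lands near the neighbor coupling constants of roughly 49% (upper layer, 2.1 mm pitch) and 17% (lower layer, 4.2 mm pitch) reported in the next paragraph.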
Similar to the simulation results, the maximum surface deflections driven by a single upper-layer or lower-layer coil can reach up to 5 µm and 40 µm, respectively. The corresponding influence functions of a single coil in the upper layer with an applied current of 35 mA, or of one in the lower layer with a current of 50 mA, are shown in Figure 8. As can be observed from the figure, the application of current to a single coil resulted in a Gaussian surface shape with its peak located immediately above the location of the energized coil, corresponding to a neighbor coupling constant of about 49% and 17% for the upper-layer and lower-layer actuators, respectively. Figure 9 presents the dynamic response of the MFDM surface to a step input current signal, where step currents of 10 and 25 mA are applied to the center coils in the upper layer and lower layer (coil 1 in Figure 1) of the MFDM, respectively. The time histories of the mirror surface deflections above the center of the coils corresponding to these two inputs were recorded and plotted. The response of the actual mirror surface is shown by the solid line while the analytically determined surface deflections are represented by the dashed line.
As can be observed from the figure, the response given by the analytical model agreed well with the experimental results. The dynamic properties of the MFDM were also evaluated with a sweeping input current signal whose frequency varied from 0 to 50 Hz over time, as shown in Figure 10. Tracking of a Conical Surface Shape Based on the fabricated prototype of the MFDM, an experimental AO system was set up to evaluate the performance of the deformable mirror. The experimental arrangement is illustrated in Figure 11. A collimated, aberration-free beam of light from the laser source was magnified (×2.5) using the first optical relay. The beam was then diverted and further magnified (×6) using the second relay. The magnified beam was limited by an aperture stop with a diameter of 20 mm. The 20 mm beam was directed onto the horizontal fluid surface using the tip-tilt mirror, which also collected the reflected beam and folded it back to the wavefront sensor. The reflected beam was de-magnified (×6.67) in order to be projected fully onto the lenslet array of the wavefront sensor and the charge-coupled device (CCD) camera. The wavefront slope data were measured discretely at the 31 × 31 subapertures of the wavefront sensor and then transferred to the computer system. In the experimental evaluation, the mirror was required to produce a desired conical surface shape that was set to emulate an axicon. Figure 11. Snapshot of the experimental setup. Figure 12 shows the three-dimensional conical surface produced by the MFDM and recorded by the wavefront sensor. The conical surface shape shown in Figure 12a was produced by the MFDM with only the lower-layer layout of actuators, and the resulting average root mean square (RMS) error was 0.4657 µm. In Figure 12b, the conical surface shape was produced by the MFDM with the two-layer layout of actuators. Due to the correction by the upper-layer coils, a more accurate conical surface shape was obtained and the average RMS error decreased to 0.1920 µm. Correction of Aberrations In this section, the mirror was used to produce a targeted wavefront expressed as a combination of typical Zernike modes. The amplitude of each Zernike mode, Z1 to Z14, is shown in Figure 13a and the corresponding wavefront with an RMS value of 5.6341 µm is shown in Figure 13b.
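The RMS figures quoted above are straightforward to reproduce from the 31 × 31 subaperture samples. A short sketch with a hypothetical conical target and synthetic measurement noise in place of real sensor data:

```python
import numpy as np

def rms_error(measured, target):
    """RMS wavefront error over the valid subapertures (um)."""
    resid = np.asarray(measured) - np.asarray(target)
    return np.sqrt(np.mean(resid ** 2))

# Conical (axicon-like) target surface over a 31 x 31 subaperture grid.
n = 31
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
r = np.hypot(x, y)
mask = r <= 1.0
target = 5.0 * (1.0 - r)          # hypothetical cone, 5 um peak

# Hypothetical measurement: target plus a small residual ripple.
rng = np.random.default_rng(0)
measured = target + 0.2 * rng.standard_normal(target.shape)

print(f"RMS error: {rms_error(measured[mask], target[mask]):.4f} um")
```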
The input currents of the lower-layer coils of the MFDM were first calculated on account of the aberration with all 14 Zernike modes, and then the upper-layer coils were activated to correct the resulting residual wavefront error produced by the lower-layer coils. The final produced wavefront is presented in Figure 14, which shows a residual wavefront RMS error of 0.2263 µm. Figure 15 shows the effectiveness of the MFDM for the correction of aberrations in each Zernike mode, as shown in Figure 13a. The y-axis displays the fitting ability of the MFDM for each Zernike mode, which is calculated as 1 − √(Σi (w̃i − wi)²)/√(Σi wi²), where w̃i and wi denote the measured and the target wavefront values at the 31 × 31 subaperture positions of the wavefront sensor, respectively. The blue bars in Figure 15 show the correction capability of the upper-layer coils for different Zernike modes.
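A sketch of the fitting-ability metric as reconstructed above; note that the printed formula is garbled in the source, so the choice of the Euclidean norm here is an assumption.

```python
import numpy as np

def fitting_ability(measured, target):
    """Per-mode fitting ability: 1 - ||w_meas - w_target|| / ||w_target||,
    evaluated over the 31 x 31 subaperture values. One reading of the
    paper's metric; the exact norm used is an assumption."""
    measured, target = np.asarray(measured), np.asarray(target)
    resid = np.linalg.norm(measured - target)
    return 1.0 - resid / np.linalg.norm(target)

# Hypothetical wavefront samples for one Zernike mode.
target = np.full(31 * 31, 0.5)
measured = target + 0.02 * np.random.default_rng(1).standard_normal(target.size)
print(f"fitting ability: {fitting_ability(measured, target):.3f}")
```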
It indicates that the upper-layer coils have a high correction performance for high-order aberrations, but the performance decreases for low-order aberrations, mainly due to the small amplitude of the surface deformation they can produce. The correction capability of the lower-layer coils is the opposite: the red bars in Figure 15 indicate their high correction capability for low-order aberrations, but the correction performance drops for high-order aberrations due to the low density of the lower-layer actuators. If both layers of coils are activated, as seen from the green bars in Figure 15, the correction capability of the MFDM is improved for all Zernike modes. These comparative results illustrate that the proposed MFDM with a two-layer layout of actuators can effectively improve the correction performance, especially for cases that must deal simultaneously with aberrations featuring high-amplitude low-order modes and low-amplitude high-order modes. Conclusions In order to improve the correction performance of the MFDM for full-order aberrations, a new MFDM with a two-layer layout of actuators is proposed in this paper. The structure and design parameters of the MFDM were first presented. Then the dynamics model of the mirror was established and the corresponding mirror surface deformation performance was simulated using the COMSOL and MATLAB software packages. Finally, based on the fabricated prototype of the MFDM, an experimental AO system was set up to further evaluate the correction performance of the deformable mirror, and the experimental results illustrated the effectiveness of the proposed MFDM in correcting full-order aberrations for adaptive optics systems.
9,437.4
2017-03-01T00:00:00.000
[ "Engineering", "Physics" ]
Early IFNγ-Mediated and Late Perforin-Mediated Suppression of Pathogenic CD4 T Cell Responses Are Both Required for Inhibition of Demyelinating Disease by CNS-Specific Autoregulatory CD8 T Cells Pathogenesis of immune-mediated demyelinating diseases like multiple sclerosis (MS) is thought to be governed by a complex cellular interplay between immunopathogenic and immunoregulatory responses. We have previously shown that central nervous system (CNS)-specific CD8 T cells have an unexpected protective role in the mouse model of MS, experimental autoimmune encephalomyelitis (EAE). In this study, we interrogated the suppressive potential of PLP178-191-specific CD8 T cells (PLP-CD8). Here, we show that PLP-CD8, when administered post-disease onset, rapidly ameliorated EAE progression and suppressed PLP178-191-specific CD4 T cell responses as measured by delayed-type hypersensitivity (DTH). To accomplish DTH suppression, PLP-CD8 required differential production of perforin and IFNγ. Perforin was not required for the rapid suppressive action of these cells, but was critical for maintenance of optimal longer term DTH suppression. Conversely, IFNγ production by PLP-CD8 was necessary for swift DTH suppression, but was less significant for maintenance of longer term suppression. These data indicate that CNS-specific CD8 T cells employ an ordered regulatory program over a number of days in vivo during demyelinating disease, with mechanistic implications for this immunotherapeutic approach. INTRODUCTION Multiple sclerosis (MS) is an immune-mediated demyelinating disease of the central nervous system (CNS), whereby infiltrating proinflammatory immune cells potentiate recruitment and continued activation of additional inflammatory cell types which target and destroy myelin (1). Despite the current first-line drug therapies available to patients as well as recent advancements in US clinical trials (2)(3)(4), MS remains a debilitating disease that worsens over time and for which there is no cure. In order to dissect the dynamics of immunopathogenic and immunoregulatory responses during MS, researchers use the mouse model experimental autoimmune encephalomyelitis (EAE), which manifests as an ascending paralytic disease due to spinal cord demyelination (5). Given that CD4 T cells from EAE mice are sufficient to transfer disease to healthy animals (6,7), the field has focused for many years on this encephalitogenic cell, its Th1 and Th17 pro-inflammatory states, and its role in driving demyelinating disease (8). The role of CD8 T cells, however, which are oligoclonally expanded to large numbers in MS lesions (9,10), is less well understood. In multiple previous studies, we have demonstrated the unexpected disease suppressive effect of CNS-specific CD8 T cells (CNS-CD8) in various models of EAE (7,(11)(12)(13)(14). These "autoregulatory" CNS-CD8 are unlike the "typical" regulatory T cell populations in that they lack Foxp3 expression and do not depend on anti-inflammatory cytokine production (e.g., IL-4 or IL-10), but are dependent on classical MHC class Ia presentation and require elaboration of IFNγ and perforin (11,14). Importantly, the clinical and therapeutic relevance of their role during demyelinating disease is underscored by the finding that MS patients undergoing an acute relapse have a defect in autoregulatory CD8 T cell function compared to disease-quiescent patients or healthy controls (15,16).
Therefore, interrogating the regulatory potential and in vivo suppression mechanisms of CNS-CD8 subsets during demyelinating disease is of high interest and importance. We have demonstrated that CD8 T cells recognizing the encephalitogenic 178-191 peptide sequence of myelin proteolipid protein (PLP178-191) were superior suppressors of EAE disease compared to myelin oligodendrocyte glycoprotein (MOG)35-55-specific CD8 T cells and were suppressive in different models of EAE (12)(13)(14). Given that PLP is the main structural component of the myelin sheath (50% of total protein) and that the murine and human forms share 100% amino acid sequence homology (17), we studied the in vivo therapeutic potential and mechanisms of PLP178-191-specific CD8 T cells (PLP-CD8) during EAE. Here, we show that PLP-CD8 swiftly ameliorated ongoing demyelinating disease and rapidly suppressed PLP-specific CD4 T cell responses by employing a temporally distinct cytokine effector program over a number of days in vivo. Mice Wildtype female C57BL/6J, perforin-/-, and IFNγ-/- mice were purchased from Jackson Laboratories (Bar Harbor, ME). All mice were kept in barrier rooms at the University of Iowa Animal Care Facility under a 12 h light/dark cycle, fed ad libitum, and humanely cared for and studied as approved by the University of Iowa's Institutional Animal Care and Use Committee. All mice used in experiments were at least 8 weeks of age. Delayed-Type Hypersensitivity (DTH)/Ear Swelling Assays For DTH measurements, 15 µL of either vehicle (PBS) alone or 150 µg PLP178-191 in PBS was injected into the ear pinnae of briefly anesthetized (isoflurane USP, Clipper Distributing, St. Joseph, MO) immune recipients with a 30G needle and 1 cc syringe. DTH was elicited at various times depending on the experiment (e.g., on the same day as CD8 T cell adoptive transfer, seven days post-transfer, or 9 or 20 days post-immunization for EAE), as indicated in the figure legends. Ear swelling was measured in a blinded manner with an engineer's micrometer (Mitutoyo USA, Aurora, IL) on the day of injection and at 24 or 48 h, as indicated. Delta ear swelling was calculated as ear thickness (mm) at 24/48 h minus thickness at 0 h. Where noted, data were normalized to the control group mean when combining swelling measurements from separate experiments. Statistics EAE scores from two groups were compared using Welch's t-test. DTH measurements from multiple groups were compared using ANOVA. All statistics were calculated using GraphPad Prism software (La Jolla, CA). P-values < 0.05 were considered significant. Recipient mice were adoptively transferred with donor CD8 T cells and then immunized with PLP178-191/CFA to induce EAE, and disease scores were monitored. Consistent with our previous observations, mice receiving PLP-CD8 were significantly protected from EAE disease compared to their OVA323-339-specific CD8 T cell (OVA-CD8)-transferred counterparts (Figure 1A). As previously demonstrated, disease scores in the OVA-CD8 control group were not different from those of disease control groups that received PBS or no treatment at all (data not shown). We then tested the functional effects of PLP-CD8 treatment on in vivo readouts of CD4 function. Delayed-type hypersensitivity (DTH) responses to CNS peptide antigens have been used as robust readouts of CD4 function (18)(19)(20). Importantly, DTH has also been used to assess the suppressive fitness of regulatory CD8 T cell populations on responses to the CNS peptide MOG35-55 (21,22).
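The ear-swelling arithmetic and the statistical comparisons described in the Methods above are simple to express in code. A minimal Python sketch with hypothetical micrometer readings (scipy's ttest_ind with equal_var=False gives Welch's t-test; f_oneway gives a one-way ANOVA):

```python
import numpy as np
from scipy import stats

def delta_ear_swelling(thickness_0h, thickness_late):
    """Delta ear swelling (mm): thickness at 24/48 h minus thickness at 0 h."""
    return np.asarray(thickness_late) - np.asarray(thickness_0h)

# Hypothetical measurements (mm) for three groups; real values come from
# blinded micrometer readings of the ear pinnae.
ova_cd8 = delta_ear_swelling([0.21, 0.22, 0.20], [0.35, 0.37, 0.33])
plp_cd8 = delta_ear_swelling([0.21, 0.20, 0.22], [0.25, 0.24, 0.26])
pbs_ctrl = delta_ear_swelling([0.22, 0.21, 0.20], [0.23, 0.22, 0.21])

# Normalize to the control-group mean when pooling separate experiments.
plp_normalized = plp_cd8 / ova_cd8.mean()

# Welch's t-test for two groups; one-way ANOVA for multiple groups.
t_stat, t_p = stats.ttest_ind(ova_cd8, plp_cd8, equal_var=False)
f_stat, f_p = stats.f_oneway(ova_cd8, plp_cd8, pbs_ctrl)
print(f"Welch p = {t_p:.4f}; ANOVA p = {f_p:.4f}")
```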
We therefore studied the ability of PLP-CD8 to downregulate CD4 T cell responses in vivo through a similar method. To confirm CNS peptide-specific DTH responses in our system, mice were immunized with PBS/CFA, MOG35-55/CFA, or PLP178-191/CFA. For DTH response measurements, either PBS (vehicle control) or PLP178-191 peptide (in PBS) was injected into the pinnae of immunized mice. As expected, PBS-challenged ears showed minimal swelling, whereas robust swelling was elicited only by challenge with the cognate immunizing peptide. We next tested whether administration of PLP-CD8 would yield a suppressed DTH response, corresponding to suppressed EAE disease scores. Thus, recipient mice that received donor CD8 T cells from PLP178-191- or OVA323-339-immunized mice were challenged in the ear pinnae at 9 days post-immunization, followed by measurement of DTH at 48 h. Again, ears challenged with PBS showed minimal ear swelling regardless of the group, whereas mice receiving control CD8 T cells exhibited robust PLP-specific DTH responses (Figure 1B). Importantly, mice that received PLP-CD8 showed a significant reduction in PLP-specific DTH compared to their unprotected counterparts, almost down to the swelling levels seen with PBS alone (Figure 1B). Together, these data indicate that PLP-CD8 suppress PLP-specific CD4 T cell responses in vivo during demyelinating disease. PLP-Specific CD8 T Cells Rapidly and Robustly Ameliorate Ongoing Demyelinating Disease Progression To test whether PLP-CD8 could treat ongoing disease, mice were immunized with PLP178-191/CFA to induce EAE (day 0). Donor CD8 T cells from OVA323-339- or PLP178-191-immunized mice were subsequently transferred into these mice at day 11 post-immunization (post-disease onset). Compared to the control OVA-CD8-treated group, PLP-CD8 significantly altered ongoing disease progression within 48 h, with a significant reduction in EAE scores, almost eliminating clinical symptoms (Figure 2A). Likewise, when recipient ear pinnae were challenged on day 20 post-immunization (9 days post-CD8 treatment), control mice developed a robust DTH reaction to PLP178-191, whereas mice treated with PLP-CD8 exhibited a significantly reduced DTH response (Figure 2B), matching their diminished disease scores. These data suggest that PLP-CD8 can rapidly exert a suppressive program on pathogenic biology in vivo. PLP-Specific CD8 T Cells Rapidly Suppress PLP-Specific CD4 T Cell Responses in vivo Given that PLP-CD8 altered the trajectory of ongoing demyelinating disease progression within just 2 days of treatment (Figure 2A), we hypothesized that PLP-CD8 could rapidly suppress PLP-specific CD4 T cell responses in vivo. To test this, mice were immunized with PLP178-191/CFA and treated i.v. with PBS, OVA-CD8, or PLP-CD8 at day 14. On the same day, ear pinnae were challenged with vehicle control (PBS) or PLP178-191 peptide and ear measurements were performed at 48 h to assess in vivo DTH responses. Mice receiving no CD8 transfer (PBS control) or control OVA-CD8 developed the expected robust DTH response to PLP178-191 peptide (Figure 3). Interestingly, however, immune mice receiving PLP-CD8 exhibited a significant reduction in ear swelling compared to the control groups (Figure 3). These data suggest that CNS-CD8 suppress CNS-specific CD4 T cell responses in vivo in a rapid timeframe, a process that begins essentially on the same day as the CD8 T cells are transferred into the animals. FIGURE 3 | PLP-specific CD8 T cells rapidly suppress PLP-specific CD4 T cell responses in vivo. Mice were immunized with PLP178-191/CFA on day 0. At day 14, these mice were injected i.v. with either PBS or adoptively transferred OVA-CD8 or PLP-CD8. On the same day, the ear pinnae were challenged with either PLP178-191 peptide or PBS as a control. Ear swelling was measured at 48 h and normalized to the control (untransferred) group swelling mean. n = 6-7 per group. Data are representative of two replicates. **p < 0.01; ***p < 0.001. Perforin Production by PLP-Specific CD8 T Cells Is not Required for Rapid Suppression of CD4 T Cell Responses in vivo We have previously demonstrated that CNS-CD8 utilize perforin production to protect mice from severe EAE disease (11,14). Given the rapid suppressive response by PLP-CD8 observed in the current study (Figure 3), we hypothesized that this effect might be a result of perforin-mediated cytotoxic killing of immune targets, such as CD4 T cells, a mechanism we have shown to be important in CD8-mediated suppression (11,16). To test this, we utilized an experimental setup similar to that in Figure 3, where PLP178-191-immunized mice were treated with either WT or perforin-/- PLP-CD8 at day 14. The control group received WT OVA-CD8. On the same day, ears were challenged with PLP178-191 peptide and measured at 48 h. Control OVA-CD8 recipients developed a robust DTH response, and the expected suppressed DTH response was seen in mice receiving WT PLP-CD8 (Figure 4A). Surprisingly, perforin-/- PLP-CD8 produced suppression of DTH similar to the WT group (Figure 4A), indicating that perforin was not a required effector pathway for rapid CD4 suppression in vivo. To test whether perforin production was required for the late DTH suppression ("maintenance phase"), ears were challenged 21 days post-immunization (i.e., 7 days post-CD8 transfer). Again, mice transferred with OVA-CD8 showed a continued robust DTH response at this stage and mice receiving WT PLP-CD8 maintained their suppressed DTH (Figure 4B). Interestingly, compared to their WT counterparts, perforin-/- PLP-CD8 failed to optimally maintain their DTH suppression (Figure 4B). When change in suppression was directly compared over time in individual mice, it was clear that while no change was observed in the OVA-CD8 or PLP-CD8 groups, DTH responses in mice receiving perforin-/- CD8 T cells were significantly less suppressed at day 21 (Supplementary Figure 2A), indicating a recovery of CD4 responses to PLP178-191. These data suggest that there is a requirement for perforin production by PLP-CD8 to maintain longer term suppression of CD4 T cell responses in vivo. IFNγ Production Is Required for Rapid Suppression of CD4 T Cell Responses by PLP-Specific CD8 T Cells in vivo but Is not Required for Delayed Suppression In addition to perforin, IFNγ production by CNS-CD8 is required to mediate their ameliorative effects on EAE (11). In contrast, IL-10 or IL-4 production is not required (11). Therefore, we tested whether IFNγ production was necessary for rapid DTH suppression by PLP-CD8. Groups of recipient mice were immunized with PLP178-191 and adoptively transferred 14 days later with either control WT OVA-CD8, or WT or IFNγ-/- PLP-CD8. DTH responses were elicited and measured, both at the early (same-day challenge) and late (7 days later) time points, similar to the approach in Figures 4A,B. As expected, mice receiving WT PLP-CD8 showed significantly suppressed DTH responses compared to those receiving control OVA-CD8, both at the early and late time points (Figures 5A,B).
Interestingly, in contrast to perforin-/- CD8 T cells (Figure 4), IFNγ-/- CD8 T cells showed the opposite dynamics, in that they failed to rapidly suppress the DTH reaction following same-day challenge (Figure 5A), but could eventually suppress a day 21-elicited DTH reaction (Figure 5B). When change in suppression was directly compared over time in individual mice, this delayed acquisition of suppression was evident (Supplementary Figure 2B). Together, these data indicate that PLP-CD8 mediate swift suppression of CD4 T cell responses in vivo using IFNγ-mediated mechanisms, whereas these mechanisms are not required to mediate late suppression. PLP-Specific CD8 T Cells Use Temporally Distinct Effector Mechanisms to Mediate Disease Suppression Given that IFNγ (but not perforin) production by PLP-CD8 was necessary for rapid suppression of PLP-specific CD4 T cell responses in vivo (Figures 4A, 5A) and, conversely, perforin (but not IFNγ) production was required for optimal longer term suppression (Figures 4B, 5B), we asked whether IFNγ-/- and perforin-/- single-knockout PLP-CD8 could temporally compensate for each other's functional deficits to exert both a rapid and a maintained in vivo suppression effect. We therefore used an admixture of IFNγ-/- plus perforin-/- single-knockout PLP-CD8 to test whether this mixture could phenocopy the suppression pattern observed in the WT scenario. Following an experimental design similar to prior figures, PLP-immunized mice received either PBS, WT PLP-CD8, or a mixture of IFNγ-/- (perforin-sufficient) plus perforin-/- (IFNγ-sufficient) single-knockout PLP-CD8. DTH responses to PLP178-191 were elicited and measured, both at the immediate (same day) and late (7 days later) time points. WT PLP-CD8 showed the expected rapid (Figure 6A) and maintained (Figure 6B) suppression of DTH compared to PBS controls. Intriguingly, immune recipient mice that were adoptively transferred with both IFNγ-/- (perforin-sufficient) plus perforin-/- (IFNγ-sufficient) PLP-CD8 exhibited a significantly suppressed DTH response at both time points, similar to that seen with WT PLP-CD8 (Figures 6A,B). Longitudinal analysis showed that suppression was not significantly changed over time in any group (Supplementary Figure 2C). Taken together, these data suggest that IFNγ-/- and perforin-/- PLP-CD8 can compensate for each other to immediately suppress PLP-specific CD4 T cell responses and maintain suppression in vivo. As mentioned above, our previous work has demonstrated that neither IFNγ-/- nor perforin-/- CNS-CD8 are capable of protecting mice from EAE, using MOG-specific CD8 T cells (11). To formally confirm that this was also the case for PLP-specific CD8 T cells, we performed experiments using IFNγ-/- or perforin-/- PLP-CD8. As expected, PLP-CD8 deficient in either of these molecules were not capable of suppressing EAE, unlike WT PLP-CD8 (Supplementary Figure 3). Given that PLP-CD8 lacking perforin can functionally compensate for cells lacking IFNγ, and vice versa, in order to effect and maintain suppression of PLP-specific CD4 T cell responses in vivo (Figure 6), we hypothesized that a mixture of adoptively transferred IFNγ-/- plus perforin-/- single-knockout PLP-CD8 could successfully protect mice against EAE disease. To test this, WT donor CD8 T cells from PLP178-191- or OVA323-339-immunized mice, or a mixture of IFNγ-/- plus perforin-/- single-knockout PLP-CD8, were adoptively transferred into groups of naïve C57BL/6J mice. An additional control group received PBS alone. The following day, recipient mice were immunized with PLP178-191 to induce EAE, and disease progression was monitored.
Compared to the PBS and OVA-CD8 control groups, WT PLP-CD8 significantly protected mice from EAE (Figure 7A). Notably, mice receiving the IFNγ/perforin mixed single-knockout PLP-CD8 were equally effective in robustly protecting mice from EAE disease (Figure 7A), suggesting a successful mechanistic compensation for the lack of protective function of either CD8 type alone. To further confirm that the compensation effect on PLP-specific CD4 T cell responses occurs during disease progression, DTH responses were elicited at day 7 post-immunization and measured at 48 h. Again, DTH responses were significantly suppressed in recipients of WT PLP-CD8 compared to OVA-CD8 controls (Figure 7B). Importantly, mice that received adoptively transferred IFNγ-/- plus perforin-/- single-knockout PLP-CD8 also exhibited a similarly reduced DTH reaction (Figure 7B). To test whether this effect was maintained over time during disease progression, ear challenges were performed on day 14 and measured at 48 h. Consistent with the maintained EAE suppression observed in Figure 7A, and the maintained DTH suppression in Figure 6B, mice receiving the mixture of single-knockout PLP-CD8 continued to show significantly suppressed in vivo DTH responses, similar to those receiving WT PLP-CD8 (Figure 7C). Taken together, these results demonstrate that PLP-CD8 employ an ordered regulatory program over a number of days in vivo to suppress pathogenic PLP-specific CD4 T cell responses and inhibit demyelinating disease. DISCUSSION Despite the lack of an etiological explanation, it has long been appreciated that autoreactive immune cells in their various proinflammatory states are at the core of driving chronic demyelinating CNS diseases like MS. Consequently, dissecting how immunoregulatory responses combat this immunopathology in vivo is essential to understanding biological mechanisms and potential immunotherapies. The murine model of demyelinating disease, EAE, has been instrumental in this regard, as it serves as a testable arena for immunomodulation. We have now demonstrated that in the wild-type setting, unlike their CD4 counterparts, CNS-CD8 not only fail to transfer or exacerbate demyelinating disease, but are unexpectedly protective against EAE (7). We have shown that this is true of CD8 responses induced both by peptide immunization (involving cross-presentation of an extrinsic antigen) and by infection with CNS sequence-encoding intracellular bacteria (Listeria) (7,(11)(12)(13)(14). These findings are further underscored by our observations that MS patients undergoing an acute relapse exhibit an immunoregulatory defect in their CD8 T cell population (15,16). Thus, interrogating the functional mechanisms and autoregulatory potential of the most potently suppressive CNS-CD8 subsets in vivo is of high interest and potential therapeutic relevance. Recent work from our lab has demonstrated that PLP-CD8 are extremely potent suppressors of EAE in both the B6 and SJL models (13,14) (and confirmed here in Figure 1A). Given the complete sequence identity between murine and human PLP (17), we focused on these autoregulatory cells in the current study. During protection against EAE (Figure 1A), PLP-CD8 significantly suppressed PLP-specific CD4 T cell responses as read out by DTH (Figure 1B). These cells also have therapeutic potential, as they strikingly ameliorated ongoing EAE, essentially eliminating clinical symptoms (Figure 2A).
The rapidity (around 48 h; Figure 2A) with which PLP-CD8 reversed the EAE disease course led us to consider whether these cells were immediately suppressing PLP-specific CD4 T cell responses, an effect inherent to disease amelioration (Figure 2B). As shown in Figure 3, this was in fact the case: PLP-CD8 quickly and significantly reduced DTH reactions elicited on the same day as the CD8 T cell transfer. This is suggestive of immediate suppressive action in vivo, and may indicate rapid targeting and elimination/modulation of pathogenic targets. In previously published studies (11, 14), we have shown that CNS-CD8 are classically MHC Class Ia-restricted and require IFNγ as well as perforin, but not IL-4 or IL-10 production, to mediate their disease-suppressive effects. Furthermore, there is evidence of cytotoxic elimination of immune targets by these cells (7). Therefore, we first hypothesized that the fast suppression of DTH responses might reflect rapid elimination of CD4 T cells by a cytotoxic mechanism, and we tested the requirement for perforin in this setting. Surprisingly, PLP-CD8 deficient in perforin production were perfectly capable of rapidly suppressing the DTH response in vivo, comparable to WT CD8 T cells (Figure 4A), suggesting that perforin-dependent mechanisms of regulation are ancillary for the rapid suppressive effect on CNS peptide-driven DTH. However, perforin was required for optimal longer-term suppression, read out at 7 days post-transfer (Figure 4B). In contrast, the pleiotropic immunomodulatory cytokine IFNγ was necessary for swift suppression of PLP-specific CD4 T cell responses (Figure 5A), but was not required for longer-term suppression (Figure 5B). These opposing findings, whereby IFNγ was required early and perforin later to effect both optimal rapid and maintained DTH suppression, are indicative of an ordered regulatory program exerted by PLP-CD8 over a number of days in vivo. Since neither IFNγ-/- nor perforin-/- single-knockout CNS-CD8 are capable of suppressing disease [Supplementary Figure 3 and (11)], it would reasonably be expected that double-knockout PLP-CD8 would also not suppress EAE; therefore, we did not utilize double-knockout PLP-CD8 in these experiments. Instead, upon observing the distinct temporal dynamics of the single-knockout cells in suppressing DTH responses, we asked whether an admixture of the single-knockout cells would have a compensatory effect. Indeed, when immunized mice were transferred both types of single-knockout PLP-CD8 together (IFNγ-/- plus perforin-/-), the treatment phenocopied the WT scenario: PLP-driven DTH was not only swiftly suppressed (Figure 6A), but remained suppressed over time (Figure 6B). Importantly, mice that were transferred the single-knockout mixture prior to EAE induction were protected from EAE as effectively as their WT CD8 T cell-transferred counterparts (Figure 7A). Concordantly, PLP-driven DTH was equally suppressed in both instances compared to OVA-CD8 controls (Figures 7B,C). Taken together, this study provides important insights into the in vivo processes that occur upon immunotherapeutic adoptive transfer of autoregulatory CD8 T cells. The differential requirement for perforin and IFNγ production by PLP-CD8 with respect to timing may suggest interactions with two distinct cellular targets in the course of their immunoregulatory exertion.
We have previously shown that treatment of mice with CNS-CD8 results in both the downregulation of CNS-specific CD4 T cell responses (11) and the modulation of antigen-presenting cells (APC), particularly dendritic cells (12). These data, in conjunction with the temporal findings in the current study, may indicate a dualistic suppression program whereby CNS-CD8 initially utilize IFNγ to immunomodulate APC populations, while cytotoxic properties are eventually required for pathogenic CD4 T cell elimination over time. Future receptor-knockout and cytotoxicity studies will be vital in teasing apart this dualistic regulation in vivo. Related to this issue are our prior observations that CNS-CD8 of different specificities require cognate antigenic stimulation in vivo in the context of classical MHC Class Ia molecules (11). Thus, MOG-specific CD8 are unable to suppress PLP-induced disease and, likewise, PLP-CD8 do not suppress MOG-induced EAE (13). In the context of our DTH studies, where injected antigen is presumably presented by skin APC in the ear, the results suggest a model in which antigen-specific, IFNγ-mediated APC modulation may be an early event in this cascade of interactions, dependent on presentation of the cognate antigen. Eventually, perforin-mediated cytotoxic elimination of either APC subsets or pathogenic CD4 T cells might become an essential mechanism of sustained suppression, reflected in the late DTH data. Again, based on previous observations (7, 11), these interactions also seem to require cognate antigenic presentation and may depend on acquisition of antigen by CD4 T cells through processes such as trogocytosis. Importantly, our DTH system will now allow the dissection of potential bystander suppression when antigens can be presented in vivo in a non-encephalitogenic manner to CNS-CD8 and CNS-CD4 of differing antigenic specificities. IFNγ and perforin have been described to regulate antigen-specific CD8 T cell homeostasis (23). Indeed, various aspects of CD8 T cell biology (differentiation, motility, cytotoxicity, etc.) are, in part, regulated by IFNγ (24-31). Further evidence suggests that IFNγ promotes perforin-mediated killing ability in CD8 T cells (32) and that perforin-mediated control of infection is dependent on IFNγ (33). Thus, in the context of the current study, it is possible that lack of IFNγ production by CD8 T cells inhibits their immediate ability to exert immune-suppressive effects in vivo. Importantly, these cells were not developmentally affected in the donor mice, since addition of IFNγ-replete (but perforin-deficient) PLP-CD8 resulted in robust compensation of the suppressor phenotype. Since both of the admixed cell types shared the same antigenic target, it appears that IFNγ production in the vicinity of the overall CD8 T cell response is important in this process, arguing for an autocrine/paracrine mechanism. Conversely, we have also demonstrated that IFNγ receptor-deficient CNS-CD8 were capable of suppressing EAE; however, there was a clear delay before the disease-suppressive effect could be observed (11), again demonstrating the early need for IFNγ-mediated potentiation, which was not needed at later stages of the disease. Ultimately, the compensation of one knockout cell type by the other suggests that the two effector pathways do not necessarily have to emerge from the same cell. One of the effects of IFNγ is to upregulate MHC Class I expression (34, 35).
Thus, one possible interplay is that IFNγ may be required for Class I upregulation, which in turn makes the target cells more susceptible to perforin-mediated elimination. Alternatively, it may be that early exposure of CD8 T cells to IFNγ elicits a quick burst of perforin, whereas later, when CD8 T cells are less responsive to IFNγ signaling (29, 30), their perforin production is enhanced by other cellular interplays, namely MHC contact and TCR stimulation via the acquisition of a target cell. This link between IFNγ and perforin in CD8 T cell-mediated regulation of EAE requires further study. To summarize, we offer here important insights into the in vivo regulatory mechanics of CNS-CD8 by demonstrating that these cells utilize a temporally distinct regulatory program involving IFNγ and perforin production to suppress pathogenic PLP-specific CD4 T cell responses during protection against EAE. Going forward, elucidating the complex cellular interplay that occurs during CNS-CD8 adoptive transfer, as well as the autoregulatory functions and temporal mechanics involved, will be critical for interrogating these cells' effectiveness as a potential immunotherapeutic for MS patients.

DATA AVAILABILITY STATEMENT

All relevant datasets generated for this study are included in the manuscript and the supplementary files.

ETHICS STATEMENT

This study was carried out in accordance with the PHS Policy on Humane Care and Use of Laboratory Animals, the Guide for the Care and Use of Laboratory Animals, and the NIH Office of Laboratory Animal Welfare. The protocol was approved by the University of Iowa's Office of Institutional Animal Care and Use Committee.
6,080.2
2018-10-09T00:00:00.000
[ "Biology", "Medicine", "Psychology" ]
Electron impact ionisation cross sections of iron oxides

We report electron impact ionisation cross sections (EICSs) of iron oxide molecules, FeₓOₓ and FeₓOₓ₊₁ with x = 1, 2, 3, from the ionisation threshold to 10 keV, obtained with the Deutsch-Märk (DM) and binary-encounter-Bethe (BEB) methods. The maxima of the EICSs range from 3.10 to 9.96 × 10⁻¹⁶ cm², located at 59-72 eV, and from 5.06 to 14.32 × 10⁻¹⁶ cm², located at 85-108 eV, for the DM and BEB approaches, respectively. The orbital and kinetic energies required for the BEB method are obtained by employing effective core potentials for the inner core electrons in the quantum chemical calculations. The BEB cross sections are 1.4-1.7 times larger than the DM cross sections, which can be related to the decreasing population of the Fe 4s orbitals upon addition of oxygen atoms, together with the different methodological foundations of the two methods. Both the DM and BEB cross sections can be fitted excellently to a simple analytical expression used in modelling and simulation codes employed in the framework of nuclear fusion research.

Introduction

Plasma-wall interaction (PWI) is regarded as one of the key issues in nuclear fusion research. In nuclear fusion devices, such as the JET or the ITER tokamak (presently under construction), first-wall materials are those parts of the devices that will be directly exposed to plasma components. In ITER, the first wall is envisaged to be coated with beryllium and tungsten [1]. After ITER, in the fusion program DEMO and beyond it in industrial applications of nuclear fusion, it seems likely that the highly toxic and hence difficult to handle beryllium will be avoided. The use of special stainless steels (i.e. the Eurofer steel envisaged for DEMO [2,3]) for some portions of the main wall may then come into consideration. Erosion of first-wall materials is an inevitable consequence of the impact of hydrogen and its isotopes as main constituents of the hot plasma [4,5]. Besides the formation of gas-phase atomic species in various charge states, molecular species are also expected to be formed via PWI processes. Disturbance of the fusion plasma and unfavourable re-deposition of materials and composites in other areas of the vessel are expected to be some of the undesired consequences [6-9]. (Supplementary material in the form of one pdf file is available from the Journal web page at https://doi.org/10.1140/epjd/e2017-80308-2.) Hence, detailed knowledge and quantification of interactions between atoms, molecules and the plasma, as well as of the transport of impurities, is of considerable interest for modelling and simulation of fusion plasmas [10]. Collisions of atoms and molecules with plasma electrons are one important class of such processes. They are mainly characterised by the respective electron-impact ionisation cross sections (EICSs), and their knowledge is especially important for modelling the plasma energy balance. Apart from magnetic confinement fusion, EICS data are also quite valuable due to the role of electron-induced reactions in astrophysics and in a variety of other applications such as low-temperature processing plasmas, gas discharges, and chemical analysis [11]. During the past few decades, a number of semiempirical methods that typically use electronic structure information from quantum chemical calculations as input have been developed in order to derive absolute EICSs for various molecules.
Their accuracy is usually comparable to that of experimental data. Among those, the most widely used methods are the binary-encounter-Bethe (BEB) theory of Kim et al. [12,13] and the Deutsch-Märk (DM) formalism [14]. These methods have been successfully applied to atoms, molecules, clusters, ions and radicals [15]. Concerning fusion-relevant species, EICSs were reported earlier for beryllium [16,17], its hydrides [18], tungsten and its oxides [19,20], and beryllium-tungsten clusters [21]; iron hydrides have also been covered recently [22]. In this work we report calculated EICSs, using both the BEB and the DM methods, for neutral iron oxide molecules, in particular for FeₓOₓ and FeₓOₓ₊₁ compounds with x = 1, 2, 3. Small amounts of oxygen are inevitably present in fusion plasma, as are elements of similar atomic weight like nitrogen and argon. Moreover, such oxygen atoms will interact with surface iron or with sputtered iron atoms, since the formation of iron oxide is highly exothermic. EICSs for some of the considered molecules (FeO, Fe₂O₃ and Fe₃O₄) were estimated earlier [23] by applying the additivity rule, i.e. by simply summing the respective cross sections of the atoms constituting a molecule. These estimates can be seen as an upper limit for the EICSs calculated by us, which will be discussed further in Section 3.2. Photoionisation studies [24,25] suggest that the most prevalent neutral iron oxide clusters in the gas phase are of the form FeₓOₓ, FeₓOₓ₊₁ and FeₓOₓ₊₂, with the more oxygen-rich clusters being favoured for larger values of x. Especially for small values of x < 10, the most abundant iron oxide clusters are suggested to be of the stoichiometry FeₓOₓ and FeₓOₓ₊₁, which is why we focus on these clusters in the present work. Moreover, collision-induced dissociation studies of small iron oxide cluster cations [26] revealed that predominant decomposition pathways are related to the loss of neutral O₂ and of FeO, FeO₂, Fe₂O₂ and Fe₂O₃ fragments, which makes the latter especially interesting to study in the framework of PWI processes. Due to the unique properties of iron oxide nanoparticles and their applications [27-30], iron oxide clusters, as their building blocks, have been the subject of numerous theoretical studies focusing on energetic, geometrical and magnetic properties, see e.g. references [31-37]. While the structures reported by Jones et al. [32] were used by us as input for structural optimisation (Sect. 2.3), the mentioned studies also allowed us to cross-validate our results for the obtained structural parameters and the energetics (Sect. 3.1). Except for the study reporting electron impact cross sections obtained by applying the additivity rule [23] mentioned above, neither theoretical nor experimental EICSs for iron oxide clusters have been published, to the best of our knowledge. In addition to the EICSs, we also report parameters obtained by fitting the calculated cross sections to an expression commonly used in codes modelling impurity transport in fusion edge plasmas such as ERO [38-40].

The DM formalism

The DM formalism was originally developed as an easy-to-use semi-empirical approach for the calculation of EICSs of atoms in their electronic ground state from threshold to about 100 eV [14].
In its most recent variant [15,41], the total single EICS $\sigma$ of an atom is expressed as:

$$\sigma(E) = \sum_{n,l} g_{nl}\,\pi r_{nl}^{2}\,\xi_{nl}\,b_{nl}^{(q)}(u)\,\frac{\ln(c_{nl}\,u)}{u} \qquad (1)$$

where $r_{nl}$ is the radius of maximum radial density of the atomic sub-shell characterised by quantum numbers $n$ and $l$ (as listed in column 1 in the tables of Desclaux [42]) and $\xi_{nl}$ is the number of electrons in that sub-shell. The sum extends over all atomic sub-shells labelled by $n$ and $l$. The $g_{nl}$ are weighting factors, which were originally determined by a fitting procedure [43,44] using reliable experimental cross section data for a few selected atoms, for which the accuracy of the reported data is in the range of 7-15%. The reduced energy $u$ is given by $u = E/E_{nl}$, where $E$ refers to the incident energy of the electrons and $E_{nl}$ denotes the ionisation energy of the sub-shell characterised by $n$ and $l$. The energy-dependent quantities

$$b_{nl}^{(q)}(u) = \frac{A_1 - A_2}{1 + (u/A_3)^{p}} + A_2 \qquad (2)$$

were introduced in an effort to merge the high-energy region of the ionisation cross section, which follows the Born-Bethe approximation [45], with the DM formula for the cross sections in the regime of low impact energies. The four constants $A_1$, $A_2$, $A_3$ and $p$ were determined, together with $c_{nl}$, from reliably measured cross sections for the various values of $n$ and $l$. The superscript $q$ refers to the number of electrons in the $(n, l)$-th sub-shell and allows the possibility of using slightly different functions $b_{nl}^{(q)}$ depending on the number of electrons in the respective sub-shell. At high impact energies, as $u$ goes to infinity, the first term in equation (2) goes to zero and $b_{nl}^{(q)}(u)$ becomes a constant, ensuring the high-energy dependence of the cross sections predicted by the Born-Bethe theory [45,46]. The DM formalism has been extended to the calculation of EICSs of atoms in excited states, molecules and free radicals, atomic and molecular ions, and clusters [15]. For the calculation of the EICS of a molecule, a population analysis [47,48] must be carried out to obtain the weights with which the atomic orbitals of the constituent atoms contribute to each occupied molecular orbital. These weights are obtained from the coefficients of the occupied molecular orbitals after a transformation employing the overlap matrix in order to correct for the non-orthogonality of the atomic basis functions.

The BEB method

The BEB model [13] was derived from the binary-encounter-dipole model [12] by replacing the $df/dE$ term for the continuum dipole oscillator strengths by a simpler form. Thus, a modified form of the Mott cross section, together with the asymptotic form of the Bethe theory describing the electron-impact ionisation of an atom, was combined into an expression for the cross section of each molecular orbital:

$$\sigma_{\mathrm{BEB}}(t) = \frac{S}{t+u+1}\left[\frac{\ln t}{2}\left(1-\frac{1}{t^{2}}\right) + 1 - \frac{1}{t} - \frac{\ln t}{t+1}\right] \qquad (3)$$

where $t = T/B$, $u = U/B$, $S = 4\pi a_0^{2} N R^{2}/B^{2}$, $a_0$ denotes the Bohr radius (0.5292 Å), $R$ is the Rydberg energy (13.6057 eV), and $T$ denotes the incident electron energy. $N$, $B$ and $U$ are the electron occupation number, the binding energy (ionisation energy), and the average kinetic energy of the respective molecular orbital, respectively. In the BEB model, the total cross section, similarly to the DM method, is then obtained by summation over the cross sections of all molecular orbitals. The quantum chemical data needed to calculate EICSs are normally derived from all-electron calculations. For heavy elements and molecules containing them, valence-shell-only calculations using effective core potentials (ECPs) [49] can be used. This facilitates the quantum chemical calculations and allows the incorporation of relativistic effects.
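Before turning to the pseudo-valence-orbital subtleties below, a minimal Python sketch of the BEB recipe, equation (3), may be useful: it evaluates the per-orbital cross section and sums over orbitals. This is an illustration under the reconstruction above, not the production workflow of the paper; in practice the orbital tuples (B, U, N) would come from the HF/CEP-4G calculations described in the next section, and the numerical values used here are placeholders only.

```python
import math

A0_CM = 0.5292e-8        # Bohr radius in cm (0.5292 Å)
RYDBERG_EV = 13.6057     # Rydberg energy in eV

def beb_orbital_cs(T, B, U, N):
    """BEB cross section (cm^2) of one molecular orbital, equation (3):
    sigma = S/(t+u+1) * [ln(t)/2 * (1 - 1/t^2) + 1 - 1/t - ln(t)/(t+1)],
    with t = T/B, u = U/B, S = 4*pi*a0^2 * N * (R/B)^2."""
    if T <= B:
        return 0.0                       # below the ionisation threshold
    t, u = T / B, U / B
    S = 4.0 * math.pi * A0_CM**2 * N * (RYDBERG_EV / B)**2
    return S / (t + u + 1.0) * (
        math.log(t) / 2.0 * (1.0 - 1.0 / t**2)
        + 1.0 - 1.0 / t
        - math.log(t) / (t + 1.0)
    )

def beb_total_cs(T, orbitals):
    """Total EICS: sum of the per-orbital BEB cross sections."""
    return sum(beb_orbital_cs(T, B, U, N) for (B, U, N) in orbitals)

# Placeholder orbital data (B, U in eV; N electrons) -- not paper values.
orbitals = [(8.53, 25.0, 2), (12.0, 35.0, 4)]
print(beb_total_cs(100.0, orbitals))     # cross section in cm^2 at 100 eV
```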
Due to the lack of inner radial nodes of the pseudo-valence orbitals, their kinetic energies are lower than normal, and equation (3) can be used to determine the BEB cross section [50], avoiding the requirement of introducing an additional modification to equation (3) which became known as the "acceleration correction" [51]. This combination of methods has earlier been recommended over using all-electron calculations for molecules that contain heavy atoms (with atomic number Z > 10) [52]. In an earlier work on iron hydrogen clusters, we also compared BEB cross sections obtained from all-electron calculations and by employing ECPs; there as well, a better agreement of the latter with the DM cross sections was found [22].

Quantum chemical calculations

We used the structures obtained by Jones et al. [32] for FeₓOₓ and FeₓOₓ₊₁ compounds with x = 1, 2, 3 as starting geometries, which were further optimised employing the B3LYP [53] density functional in conjunction with the Def2-TZVP basis set [54,55]. The binding energies, E_BE, of the iron oxide clusters were determined according to equation (4), where E(A) denotes the energy of compound A including the zero-point vibrational energy. Occupation, binding energy and average kinetic energy for each molecular orbital, as required for the calculation of the BEB cross sections (see Sect. 2.2), were calculated at the HF/CEP-4G level of theory using the geometries obtained with B3LYP/Def2-TZVP. The orbital populations required for the DM formalism were derived from HF calculations in conjunction with the minimal CEP-4G basis set [56-58]. Orbital energies for the outermost valence electrons were calculated with the OVGF method and the Def2-TZVP basis set [59]. All calculations were performed with the Gaussian 09 software [60].

Analytical expression of the EICSs

We fitted the cross sections to an expression, equation (5), that resembles the one used in the ERO code [38-40], which is used for impurity transport simulations in fusion edge plasmas. In this expression, the cross section σ is given in 10⁻¹⁶ cm², the incident electron energy E and the threshold energy (first ionisation energy) E_t are both given in eV, and the fit parameter a₁ is given in 10⁻¹⁶ cm² eV. The fit parameters a₂, a₃ and a₄ are dimensionless.

Structures and energetics

The structures obtained for the considered iron oxide molecules are shown in Figure 1. They correspond to the spin configurations for which the lowest energy was obtained, i.e. the multiplicities 2S + 1 = 5, 5, 9, 9, 3 and 11 for FeO, FeO₂, Fe₂O₂, Fe₂O₃, Fe₃O₃ and Fe₃O₄, respectively. Several spin configurations yielding multiplicities 2S + 1 lower and higher than the indicated ones were used during optimisation, but we restrict our following analyses to the obtained lowest-energy configurations. The relative energies of the several spin configurations tested are supplied in the supporting information accompanying this article and provided online. It is known that the relative energies of spin configurations, and even their order, are rather sensitive to the employed method [61]; hence we refrain from discussing them further here. Note that all structures up to Fe₃O₃ yield (nearly) planar geometries. The complete set of structural parameters is supplied in the supporting information. For one of the larger clusters, the ground state reported in reference [32] was a spin-singlet, whereas in our case the ground state corresponds to 2S + 1 = 9; this is another indication of how sensitive the interplay is between the chosen method and the spin configuration.
For the larger molecules we note again good agreement in terms of bond lengths compared to the results of reference [32], as well as concerning the planarity of the obtained ground state structures up to Fe₃O₄ [32,36,37]. The binding energies determined using equation (4) and the atomisation energies for the considered ground state molecules are given in Table 1; both include the zero-point energy correction. We also include the atomisation energies obtained by Jones et al. [32] in Table 1 for comparison. It can be noted that the trend of increasing atomisation energy over the range of considered molecules, from the smallest to the largest, is conserved. The incremental binding energies of additional oxygen atoms, i.e. the energy difference between FeₓOₓ₊₁ and FeₓOₓ, are also included in Table 1.

Electron impact ionisation cross sections

In Table 2, we provide the maxima of the calculated cross sections and their locations with respect to electron impact energy, as well as the ionisation energies. The parameters obtained by fitting equation (5) to the respective cross sections are supplied in Table 3. In the supporting information, tabulated data for the DM and BEB cross sections are also included. Figure 2 shows the various cross sections and fitted functions. The ionisation energy obtained for FeO (8.53 eV) is in excellent agreement with the experimental value of 8.56 eV [63]. The ionisation energy obtained for FeO₂ (8.88 eV) is also in fair agreement with the experimental one, i.e. 9.5 ± 0.5 eV [64]. The maxima of the obtained EICSs range from 3.10 to 9.96 × 10⁻¹⁶ cm², located at 59-72 eV, and from 5.06 to 14.32 × 10⁻¹⁶ cm², located at 85-108 eV, for DM and BEB, respectively, increasing smoothly with increasing size of the considered molecules. We note that for both DM and BEB, the magnitude of the cross section maxima for FeₓOₓ with x = 1, 2, 3 varies roughly linearly with x (see Table 2), which is in line with the approximate validity of the additivity rule used earlier to estimate the EICSs of polyatomic molecules by summing up the atomic cross sections [23]. However, we also see that the resulting cross sections for FeₓOₓ with x = 1, 2, 3 are consistently smaller than what would be obtained by simply scaling the cross section of FeO with x. This indicates the decrease of the respective cross section due to the more compact electronic distribution upon chemical bonding as the molecules get bigger. This is also in line with the finding that the estimates for the cross sections of FeO, Fe₂O₃ and Fe₃O₄ obtained from applying the additivity rule in reference [23] are larger than the DM and BEB cross sections obtained in this work, see Figure 2. We note that as early as 1997, a method was suggested yielding a modified additivity rule that takes into account the reduction of the molecular ionisation cross section due to molecular binding [65]; it has, however, not yet been applied to iron oxides. In the higher energy region (beyond the maxima), the BEB EICSs and the cross sections determined via the additivity rule cross each other, which is actually an indication that the BEB cross sections are too large at elevated energies. The DM cross sections remain lower at all energies but, in contrast to BEB, appear to be distinctly too low, especially at energies far beyond the maxima, since in this region the discrepancies between the three approaches should actually become smaller.
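To illustrate why the additivity rule acts as an upper bound, the following minimal sketch simply sums atomic cross sections for a given stoichiometry, neglecting the compaction of the electron distribution upon bonding. The Fe value corresponds to the measured maximum from [69] discussed below; the O value here is a placeholder and is not taken from the paper.

```python
def additivity_estimate(sigma_atoms, stoichiometry):
    """Additivity-rule EICS estimate: sum of atomic cross sections.
    This neglects molecular binding and hence overestimates the EICS."""
    return sum(n * sigma_atoms[element] for element, n in stoichiometry.items())

# Fe: measured maximum from [69]; O: placeholder value for illustration.
sigma_max = {"Fe": 4.08e-16, "O": 1.3e-16}   # cm^2
estimate = additivity_estimate(sigma_max, {"Fe": 1, "O": 1})
print(estimate)   # exceeds the calculated DM maximum of 3.10e-16 cm^2 for FeO
```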
The BEB cross sections are generally significantly larger than the DM cross sections (and their maxima are shifted to higher electron impact energies), ranging from a factor of 1.7 in the case of Fe₂O₂ down to 1.4 in the case of Fe₃O₄. This is in line with a study on iron hydrogen cluster EICSs [22], in which discrepancies between those two methods were also obtained that were larger than previously assumed to be the norm. We note that there have been cases reported in which the DM method resulted in cross sections that were significantly smaller, and showed a faster decrease beyond the maximum, than cross sections obtained using other methods or experimentally [66-68]. In reference [22], the discrepancy between DM and BEB cross sections was related to the different methodological foundations of the methods, and especially to the explicit inclusion of geometric parameters, in terms of the radius of maximum radial density of atomic sub-shells (see also Sect. 2.1), in the DM approach. It was found that the 4s electrons made by far the most dominant contribution to the EICS of atomic Fe, and as the population of this atomic orbital decreased with increasing number of hydrogen atoms in the iron hydrogen cluster, the discrepancies between the resulting DM and BEB EICSs decreased as well [22]. Hence, we also investigated how the addition of oxygen affects the population of the Fe 4s orbital in the considered FeₓOₓ molecules. In Table 4, we supply the discrepancies between DM and BEB, determined as the ratios between the cross section maxima, and the populations of the Fe 4s orbital divided by the number of iron atoms contained in the molecule (for atomic iron this quantity would be 2). Indeed, the depopulation of the Fe 4s orbital observed for the FeₓOₓ₊₁ molecules when compared to FeₓOₓ correlates with the decreasing discrepancy between the methods when increasing the number of oxygen atoms in the respective molecule. The substantial depopulation of the Fe 4s orbital in the oxides compared to atomic iron may also underlie the fact that the DM cross sections for FeO and FeO₂ are smaller than the EICS of atomic iron [69], although the ionisation thresholds are not much different (7.92 eV for atomic iron [70], 8.53 eV and 8.88 eV for FeO and FeO₂, respectively). The EICSs of FeO and FeO₂ have maxima of 3.10 × 10⁻¹⁶ cm² at 59 eV and of 3.95 × 10⁻¹⁶ cm² at 72 eV, respectively, while the maximum of the Fe EICS was experimentally found to be 4.08 × 10⁻¹⁶ cm² at 35 eV [69]. Analogous findings have been obtained for small iron hydrogen clusters [22]. Overall, our results are in line with the interpretation given for this discrepancy in reference [22]. However, this does not explain why the discrepancies found for iron-containing compounds are actually that large, since discrepancies between the results of different numerical methods, as well as between calculations and experiments, have mostly turned out to be within 50% in the past [13,15,44]. In addition to the study on iron hydrogen clusters [22], an exception to this finding has also been noted for atomic tungsten, yielding a discrepancy between the methods of about a factor of two [71]. This could actually be an indication that discrepancies between these two methods are enhanced by the inclusion of heavy elements in the studied compounds.
In any case, this also calls for experimental studies of the EICSs of fusion-relevant compounds in order to clarify how well the DM and BEB methods work for these species and which of the two methods delivers the more accurate estimates. Most fusion-relevant molecular species are unusual compounds in the sense of conventional synthetic and analytical chemistry, which makes their experimental investigation a challenging task. On the other hand, this is exactly the reason why validation of the DM and BEB methods via comparison with experimental data, at least for some of the molecules, would be highly appreciable: it would yield an empirical measure of the utility of the methods for molecules that are difficult to investigate experimentally. In the absence of such experimental data we cannot safely judge either DM or BEB to be the more accurate method, but from overall experience we would rather expect experimental cross sections somewhere in between them. This argument is supported by the comparison of the BEB and DM EICSs with the cross sections obtained via the additivity rule [23], as in the discussion of Figure 2, with a slight favour towards BEB at least in the high-energy region.

Conclusion

We calculated EICSs of iron oxide molecules, FeₓOₓ and FeₓOₓ₊₁ with x = 1, 2, 3, from the ionisation threshold to 10 keV using the DM and the BEB methods, employing effective core potentials for the inner core electrons in the quantum chemical calculations necessary to obtain the orbital and kinetic energies required for the BEB approach. The maxima of the cross sections range from 3.10 to 9.96 × 10⁻¹⁶ cm², located at 59-72 eV, and from 5.06 to 14.32 × 10⁻¹⁶ cm², located at 85-108 eV, for DM and BEB, respectively. The BEB cross sections are 1.4-1.7 times larger than the DM cross sections, which could be related to the decreasing population of the Fe 4s orbitals upon addition of oxygen. However, experimental data on the EICSs of such molecular compounds are still missing; they would be highly appreciated in order to base the assessment of the calculated cross sections on empirical foundations. We assume that the results from both approaches at least give good estimates of the true cross sections. Both the DM and BEB EICSs were fitted against a simple analytical expression used in modelling and simulation codes in the framework of nuclear fusion research.
5,432
2017-12-01T00:00:00.000
[ "Physics" ]
A New Multi-Attribute Emergency Decision-Making Algorithm Based on Intuitionistic Fuzzy Cross-Entropy and Comprehensive Grey Correlation Analysis

Intuitionistic fuzzy distance measurement is an effective method for studying multi-attribute emergency decision-making (MAEDM) problems. Unfortunately, the traditional intuitionistic fuzzy distance measurement method cannot accurately reflect the difference between membership and non-membership data and easily causes information confusion. Therefore, starting from the intuitionistic fuzzy number (IFN), this paper constructs a decision-making model based on intuitionistic fuzzy cross-entropy and a comprehensive grey correlation analysis algorithm. For MAEDM problems with completely unknown and partially known attribute weights, this method establishes a grey correlation analysis algorithm based on the objective evaluation values and the subjective preference values of decision makers (DMs), which makes up for the information loss of traditional models and greatly improves the accuracy of MAEDM. Finally, taking the Wenchuan earthquake of May 12th, 2008 as a case study, this paper constructs and solves the ranking problem of shelters. Through a sensitivity comparison analysis, when the grey resolution coefficient increases from 0.4 to 1.0, the ranking result for building shelters remains stable. Compared to the traditional intuitionistic fuzzy distance, this method is shown to be more reliable.

Introduction

At present, earthquakes, fires, novel coronavirus infections, and other frequent disasters have caused great losses to human beings. Owing to the uncertainty and fuzziness of such emergency problems, it is difficult for decision makers (DMs) to characterise alternatives with real numbers so as to make quick decisions. The accurate processing of information has become an unavoidable problem in the development of the emergency decision-making field [1-3]. Under this urgent demand, fuzzy set theory, which can deal well with the uncertainty of decision-making problems, came into being [4]. Fuzzy sets [5,6] use membership as a single scale to reflect the support and opposition of DMs to objective things. However, with the development of decision theory, it became difficult to accurately describe the uncertainty of objective things by fuzzy sets alone. Based on this, Atanassov, a Bulgarian professor, put forward the concept of the intuitionistic fuzzy set (IFS) in the 1980s [7,8]. He used membership degree and non-membership degree to express the support, opposition, and hesitation in decision information. Compared to the fuzzy set, the IFS can describe the natural attributes of objective things more accurately [9-11]. The IFS is a new mathematical tool for dealing with uncertain and complex information efficiently, and it is widely used in the field of multi-attribute decision-making (MADM) [12-14]. In recent years, scholars have made great progress in the research of intuitionistic fuzzy multi-attribute decision-making (IFMADM). The similarity measure is one of the most important decision-making methods in IFMADM. Xu et al. [15] systematically analyzed similarity measurement formulas based on geometric distance, set theory, and intuitionistic fuzzy matching degree. In order to improve the measurement accuracy of the similarity of the IFS, Park et al. [16] and Hu et al.
[17] used similarity measurement formulas based on intuitionistic fuzzy entropy for the intuitionistic fuzzy number (IFN) and the interval IFN, respectively, and optimized the alternatives. The IFS can represent the uncertainty of decision information well, but presents some difficulties for data comparison. The score function and the precise function are effective means for data comparison and ranking in IFMADM. Chen et al. [18] were the first to study the score function of the IFN; they used the difference between membership and non-membership in the IFN to construct a function for comparing the size relationship of IFNs, which is the basis of IFMADM. On the basis of the score function, Hong et al. [19] proposed an intuitionistic fuzzy precise function, which greatly improved the efficiency of decision-making. Classical multi-attribute methods have seen wide development and application in the intuitionistic fuzzy field. Table 1 summarizes some main methods of IFMADM.

Table 1. A brief overview of preprocessing methods in intuitionistic fuzzy multi-attribute decision-making (IFMADM).
Literature | Methods
Xu [15], Park et al. [16] | Similarity measure
Hu et al. [17] | Similarity measure, fuzzy entropy
Chen et al. [18] | Score function
Hong et al. [19] | Intuitionistic fuzzy precise function
Wu et al. [20] | AHP, score judgment matrix
Keshavarzfarda et al. [21] | AHP, DEMATEL
Chatterjee et al. [22], Liao et al. [23] | TOPSIS, VIKOR
Wu et al. [24], Vahdani et al. [25], Yu et al. [26] | ELECTRE, PROMETHEE
Meng et al. [27] | Prospect theory
Luo et al. [28] | Regret theory

Unfortunately, natural disasters, such as fires and floods, often lead to unexpected and disastrous consequences, and a large number of emergency decision-making problems have evolved into MADM problems. Up to now, domestic and foreign scholars have conducted in-depth research in this field. Xu et al. [29] proposed a two-stage method to support the consensus-building process of large-scale MADM and applied it to earthquake shelter selection. Taking a fire and explosion accident as a case study, Xu et al. [30] defined a generalized asymmetric language D number and proposed the corresponding MADM fusion algorithm, verifying the effectiveness of the method. Li et al. [31] proposed a risk decision analysis method based on the TODIM (an acronym in Portuguese for interactive and multi-criteria decision-making) method to solve the emergency evacuation problem of tourist attractions, in which the attribute values and the probabilities of state occurrence are in interval number format. This method solves this kind of emergency decision-making problem well, showing that it is more effective than traditional methods. Based on an example of ship collision, Xiong et al. [32] used two intelligent algorithms, a multi-attribute differential evolution algorithm and a non-dominated sorting genetic algorithm, to verify the feasibility and effectiveness of their model. Based on a prediction model using the triple exponential smoothing method, Wang et al. [33] proposed an MADM additive weighting method, a weighted product method, and an elimination-and-choice-translating-reality (ELECTRE-type) method to rank recycled electric vehicles, which provided an effective solution for managers and researchers in the electric vehicle industry and improved its efficiency. For the multi-attribute group decision-making problem of community sustainable development emergency response, Wu et al.
[34] proposed a method based on subjective imprecise estimation of the reliability of binary linguistic terms, which greatly improved the efficiency of MADM. Karimi et al. [35] introduced a best-worst algorithm to solve the MADM problem in a fuzzy environment and applied this method to the evaluation of hospital maintenance, demonstrating the satisfactory performance of the approach. Based on the above analysis, MADM methods are widely used in the field of emergency decision-making and can handle the uncertainty of emergency situations well. Table 2 summarizes some applications of MADM methods in emergency situations.

Table 2. A brief literature list on the applications of multi-attribute decision-making (MADM) methods in emergency situations.
Literature | Method | Application
Xu et al. [29] | Two-stage theory | Earthquake shelter selection
Xu et al. [30] | Generalized asymmetric language | Fire and explosion accident
Li et al. [31] | Risk decision analysis | Tourist attraction evacuation
Xiong et al. [32] | Differential evolution and non-dominated sorting genetic algorithms | Ship collision
Wang et al. [33] | Additive weighting | Electric vehicle industry
Wu et al. [34] | Subjective imprecise estimation of binary language | Community development
Karimi et al. [35] | Best and worst algorithm | Hospital maintenance

The above methods are effective in solving multi-attribute emergency decision-making (MAEDM) problems in a fuzzy environment. However, they have limitations in the following aspects. (1) In emergencies, DMs often have a certain subjective preference for alternatives, which has rarely been studied. (2) The accuracy of traditional intuitionistic fuzzy distance measurement is not high; it easily happens that IFNs cannot be compared, which introduces errors into the decision result. (3) For MAEDM problems with unknown or partially unknown attribute weights, the research is not deep enough and needs further analysis. (4) There is no corresponding sensitivity analysis for the ranking results of alternatives, which fails to fully establish the reliability and stability of the evaluation mechanism.

According to the above limitations, the motivation of this paper is summarized as follows. (1) With the increasing complexity of the global environment, many scholars focus on the field of emergency decision-making; intuitionistic fuzzy multi-attribute emergency decision-making (IFMAEDM) is the focus of current research. (2) It is necessary to propose a distance measurement method based on the IFN that overcomes the shortcomings of traditional distance measures and improves the reliability of decision results. (3) Research on the uncertainty of attribute weights is a key problem in MAEDM; how to determine the weights is always the core of decision-making. (4) An evaluation mechanism for the ranking results of alternatives can make the decision results more reliable.

Therefore, based on intuitionistic fuzzy theory and grey correlation analysis, this paper proposes a method to solve MAEDM problems using the intuitionistic fuzzy cross-entropy distance. First, the average intuitionistic fuzzy information entropy is defined, and the measurement method for the intuitionistic fuzzy cross-entropy distance is given. On this basis, considering unknown and known attribute weights, an optimization model incorporating the subjective preference of the DMs is established and solved. Secondly, the intuitionistic fuzzy decision matrix is obtained according to the objective attribute evaluations of the DMs.
The intuitionistic fuzzy cross-entropy distance matrix is constructed by combining the objective evaluation values and the subjective preference values of the alternatives. Then, the attribute weights are determined according to the adjusted intuitionistic fuzzy average information entropy. Using grey correlation analysis, the comprehensive grey relational coefficient of each alternative is obtained, and the ranking of the alternatives is generated. In this way, a new method is proposed to solve the MAEDM problem using intuitionistic fuzzy cross-entropy and grey correlation analysis.

The important contributions of this paper are mainly reflected in six aspects. (1) The intuitionistic fuzzy cross-entropy distance is defined. (2) Multi-attribute emergency decisions with subjective preference are considered. (3) The uncertainty of attribute weights is discussed and solved by intuitionistic fuzzy information entropy. (4) The grey correlation analysis method is applied to MAEDM, making full use of decision-making information such as membership, non-membership, and hesitation. (5) A sensitivity analysis with respect to the grey resolution coefficient is carried out to verify the reliability and stability of the decision results. (6) Compared to the traditional intuitionistic fuzzy distance, this method is shown to be more stable.

The remainder of this paper is organized as follows. Section 2 defines some basic knowledge of intuitionistic fuzzy theory and introduces the concept of the intuitionistic fuzzy cross-entropy distance. In Section 3, a MAEDM model based on intuitionistic fuzzy cross-entropy and comprehensive grey correlation analysis is constructed. In Section 4, taking the ranking of earthquake shelters as an example, the practical application of this method is illustrated by comparison to the traditional intuitionistic fuzzy method. Lastly, Section 5 presents the conclusions of the proposed method and prospects for future research.

Preliminaries

This section first reviews some basic concepts and definitions of intuitionistic fuzzy theory. As the preference relationship in fuzzy theory is often assigned by the complementary 0.1-0.9 five-point scale, we believe that the distribution of the levels between opposition and support is uniform and symmetric. However, in actual situations, some problems require the use of a non-uniform and asymmetric distribution to evaluate variables, such as the rate of marginal utility decline in economics. Therefore, it is very popular to solve this kind of asymmetric problem by fuzzy set theory.

Definition 1 [4]. If the domain X is a non-empty set, a fuzzy set A is defined as:

$$A = \{\langle x, \mu_A(x) \rangle \mid x \in X\}$$

which is characterized by a membership function $\mu_A: X \to [0, 1]$, where $\mu_A(x)$ denotes the degree of membership of the element x to the set A. Ordinary fuzzy sets can only represent the membership function, which refers to the degree of support for an alternative, without non-membership information. Therefore, Atanassov [7,8] extended the fuzzy set to the IFS, as follows.

Definition 2 [7]. If the domain X is a non-empty set, then the intuitionistic fuzzy set A on X can be expressed as:

$$A = \{\langle x, \mu_A(x), \nu_A(x) \rangle \mid x \in X\}$$

where $\mu_A(x)$ and $\nu_A(x)$ are the membership degree and non-membership degree of the element x belonging to A in the domain X, respectively, with $0 \le \mu_A(x) + \nu_A(x) \le 1$. The quantity $\pi_A(x) = 1 - \mu_A(x) - \nu_A(x)$ denotes the degree of hesitation or uncertainty that the element x in X belongs to the IFS A; obviously, for any $x \in X$, the condition $0 \le \pi_A(x) \le 1$ holds.

Example 1. Take an example to illustrate the specific meaning of the IFS.
Suppose there is an IFS A = {⟨x, 0.7, 0.2⟩ | x ∈ X}, which indicates that the membership degree is 0.7, the non-membership degree is 0.2, and the hesitation degree is 0.1. If we use this set to represent a voting process with 10 participants, then 7 people support, 2 oppose, and 1 hesitates and remains neutral.

Definition 3 [36]. Let $\alpha_A = (\mu_A, \nu_A)$ and $\alpha_B = (\mu_B, \nu_B)$ be two intuitionistic fuzzy numbers. Then the normalized Hamming distance between $\alpha_A$ and $\alpha_B$ is defined as:

$$d(\alpha_A, \alpha_B) = \frac{1}{2}\left(|\mu_A - \mu_B| + |\nu_A - \nu_B| + |\pi_A - \pi_B|\right)$$

where the set of all intuitionistic fuzzy numbers is denoted by $\theta$. Obviously, the fuzzy number $a^+ = (1, 0)$ is the maximum value in the fuzzy set, and $a^- = (0, 1)$ is the minimum value in the set. Geometric distance alone is not well suited to processing fuzzy decision information. Based on the traditional distance model, Xu [15] proposed the following distance measure for intuitionistic fuzzy sets.

Definition 4 [15]. If A and B are intuitionistic fuzzy sets on $X = \{x_1, x_2, \ldots, x_n\}$, then the distance measure between the IFSs is

$$d_{Xu}(A, B) = \left[\frac{1}{2n}\sum_{j=1}^{n}\left(|\mu_A(x_j)-\mu_B(x_j)|^{\lambda} + |\nu_A(x_j)-\nu_B(x_j)|^{\lambda} + |\pi_A(x_j)-\pi_B(x_j)|^{\lambda}\right)\right]^{1/\lambda}$$

where $\lambda \ge 1$. When $\lambda = 1$, $d_{Xu}$ degenerates into the Hamming distance for IFSs; when $\lambda = 2$, $d_{Xu}$ degenerates into the Euclidean distance for IFSs. The Hamming and Euclidean distance formulas are thus extensions of the intuitionistic fuzzy distance. Considering an attribute weight vector $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ for $x_j (j = 1, 2, \ldots, n)$, satisfying $0 \le \omega_j \le 1$ and $\sum_{j=1}^{n}\omega_j = 1$, the two distance formulas $d_H$ and $d_E$ can be expressed as:

$$d_H(A, B) = \sum_{j=1}^{n}\frac{\omega_j}{2}\left(|\mu_A(x_j)-\mu_B(x_j)| + |\nu_A(x_j)-\nu_B(x_j)| + |\pi_A(x_j)-\pi_B(x_j)|\right) \qquad (6)$$

$$d_E(A, B) = \left[\sum_{j=1}^{n}\frac{\omega_j}{2}\left((\mu_A(x_j)-\mu_B(x_j))^2 + (\nu_A(x_j)-\nu_B(x_j))^2 + (\pi_A(x_j)-\pi_B(x_j))^2\right)\right]^{1/2} \qquad (7)$$

It is not difficult to see that all intuitionistic fuzzy distances satisfy non-negativity and symmetry and vanish exactly when the two sets coincide. In order to define the concept of intuitionistic fuzzy cross-entropy, the definition of information entropy is introduced. The average level of residual information after information redundancy is eliminated is called information entropy, which is used to measure the uncertainty of an information source in the communication process.

Definition 5. Let $X = \{x_1, x_2, \ldots, x_n\}$ be a discrete random variable with probabilities $p_j$ satisfying $0 \le p_j \le 1$ and $\sum_{j=1}^{n} p_j = 1$; then the information entropy of X can be expressed as

$$I(X) = -\eta\sum_{j=1}^{n} p_j \log_c p_j$$

The constant $\eta > 0$ sets the unit of measurement of the information entropy, and the base c of the logarithm can be any positive constant. In particular, when c = 2, the unit of information entropy is the bit; when c = e, the unit is the nat; when c = 10, the unit is the dit. In general calculations, $\eta = 1$ and $c = 2$. Burillo et al. [37] extended the basic idea of information entropy to the intuitionistic fuzzy field and creatively used it to describe the uncertainty of the IFS; the resulting intuitionistic fuzzy entropy of A is given by model (11).

Definition 7. An equivalent transformation of the intuitionistic fuzzy entropy, $E_{LH}$, is given by model (12); models (11) and (12) can be proven equivalent. Definition 7 is more concise in form and simpler in calculation: it eliminates the influence of hesitation and is a better expression of intuitionistic fuzzy entropy. For the MAEDM problem discussed in this paper, when the attribute weights are completely unknown, it is necessary to calculate the average information entropy of each attribute.
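To make these preliminaries concrete, the following Python sketch implements the weighted Hamming and Euclidean distances as reconstructed in models (6) and (7), together with the information entropy of Definition 5. It is a minimal illustration under those reconstructions; IFNs are given as (membership, non-membership) pairs and the hesitation degree is derived.

```python
import math

def _pi(mu, nu):
    """Hesitation degree of an IFN: pi = 1 - mu - nu (Definition 2)."""
    return 1.0 - mu - nu

def d_hamming(A, B, w):
    """Weighted intuitionistic fuzzy Hamming distance, model (6)."""
    return sum(wj / 2.0 * (abs(ma - mb) + abs(va - vb)
                           + abs(_pi(ma, va) - _pi(mb, vb)))
               for (ma, va), (mb, vb), wj in zip(A, B, w))

def d_euclidean(A, B, w):
    """Weighted intuitionistic fuzzy Euclidean distance, model (7)."""
    s = sum(wj / 2.0 * ((ma - mb) ** 2 + (va - vb) ** 2
                        + (_pi(ma, va) - _pi(mb, vb)) ** 2)
            for (ma, va), (mb, vb), wj in zip(A, B, w))
    return math.sqrt(s)

def info_entropy(p, eta=1.0, c=2.0):
    """Information entropy of Definition 5: I = -eta * sum p_j * log_c(p_j)."""
    return -eta * sum(pj * math.log(pj, c) for pj in p if pj > 0.0)

# Example 1 as an IFN, compared against an illustrative second IFN
A, B = [(0.7, 0.2)], [(0.5, 0.4)]
print(d_hamming(A, B, [1.0]), d_euclidean(A, B, [1.0]))
print(info_entropy([0.5, 0.5]))   # 1.0 bit
```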
Combining with the intuitionistic fuzzy entropy, the intuitionistic fuzzy cross-entropy distance is defined as follows.

Definition 8. Let $X = \{x_1, x_2, \ldots, x_n\}$, and let A and B be two IFSs on X. Then the intuitionistic fuzzy cross-entropy of A and B is [38]:

$$CE(A, B) = \sum_{i=1}^{n}\left[\mu_A(x_i)\ln\frac{2\mu_A(x_i)}{\mu_A(x_i)+\mu_B(x_i)} + \nu_A(x_i)\ln\frac{2\nu_A(x_i)}{\nu_A(x_i)+\nu_B(x_i)}\right] \qquad (13)$$

As the intuitionistic fuzzy cross-entropy $CE(A, B)$ does not satisfy symmetry, and considering the requirements of emergency decision-making problems, we define the intuitionistic fuzzy cross-entropy distance, combined with the characteristics of multiple attributes, as

$$CE^{*}(A, B) = CE(A, B) + CE(B, A) \qquad (14)$$

Theorem 1. Referring to the properties of the intuitionistic fuzzy geometric distance formulas, the intuitionistic fuzzy cross-entropy distance satisfies the following properties: (1) $CE^{*}(A, B) \ge 0$; (2) $CE^{*}(A, B) = 0$ when A = B; (3) $CE^{*}$ grows with the separation between A and B.

Proof of property (1). Since the logarithmic function appearing in model (13) is strictly convex, the corresponding convexity inequality gives $CE(A, B) \ge 0$, and the same argument yields $CE(B, A) \ge 0$. Combining models (13) and (14), we can prove that $CE^{*}(A, B) \ge 0$.

Proof of property (2). When A = B, we have $\mu_A(x_i) = \mu_B(x_i)$ and $\nu_A(x_i) = \nu_B(x_i)$ for all $x_i$; substituting into model (13) gives $CE(A, B) = 0$ and $CE(B, A) = 0$, and combining with model (14) proves that $CE^{*}(A, B) = 0$.

Proof of property (3). From the understanding of the geometric intuitionistic fuzzy distance formulas, it is not difficult to show that the cross-entropy of two IFSs is positively correlated with their distance; since $-\Delta CE^{*}$ is a strictly convex function, property (3), model (15), follows.

Property (1) states that the fuzzy cross-entropy distance is non-negative. Property (2) means that when two IFSs are completely equal, the intuitionistic fuzzy cross-entropy distance attains its minimum of 0; thus, cross-entropy can be used to measure the degree of difference, or distance, between two IFSs. Property (3) provides a sufficient basis for comparing intuitionistic fuzzy cross-entropy distances. Intuitionistic fuzzy cross-entropy extends the meaning of information entropy: it can measure the fuzzy degree and the unknown degree between IFSs while preserving the complete information of the original IFSs. The greater the distance between two IFSs, the greater the cross-entropy between the fuzzy numbers. However, the traditional intuitionistic fuzzy distance measurement method cannot accurately reflect the differences between such data. Based on this, a group of simple data can be used to compare the traditional intuitionistic fuzzy distance and the fuzzy cross-entropy distance, to show the reliability and stability of cross-entropy as a measure of the degree of fuzziness.

Example 2. Suppose there are three voting activities, each with a population of 10, which can be represented by three intuitionistic fuzzy numbers $\alpha_1$, $\alpha_2$ and $\alpha_3$. First, we use the traditional Hamming and Euclidean distances, models (6) and (7) respectively, to compute the pairwise distances. The calculation results show that the two traditional distance formulas cannot distinguish the distance between $\alpha_1$ and $\alpha_3$ from that between $\alpha_2$ and $\alpha_3$, which is the disadvantage of the classical intuitionistic fuzzy distance measurement method. Solving the same problem with the intuitionistic fuzzy cross-entropy distance method gives $CE^{*}(\alpha_1, \alpha_3) = 0.0037$ and $CE^{*}(\alpha_2, \alpha_3) = 0.0101$. The results show that $\alpha_1$ is closer to $\alpha_3$ than $\alpha_2$ is, a difference the traditional intuitionistic fuzzy distance fails to capture. Therefore, it is more effective to introduce intuitionistic fuzzy cross-entropy to deal with uncertain decision information.
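A minimal Python sketch of the symmetric cross-entropy distance, assuming the form reconstructed in models (13) and (14) above, follows. The convention 0·ln(0/x) = 0 is applied, and the example IFNs are illustrative only, since the voting data of Example 2 are not reproduced here.

```python
import math

def _ce_term(a, b):
    """a * ln(2a / (a + b)), with the convention 0 * ln(0/...) = 0."""
    if a == 0.0:
        return 0.0
    return a * math.log(2.0 * a / (a + b))

def cross_entropy(A, B):
    """Intuitionistic fuzzy cross-entropy CE(A, B), model (13).
    A, B: lists of (membership, non-membership) pairs."""
    return sum(_ce_term(ma, mb) + _ce_term(va, vb)
               for (ma, va), (mb, vb) in zip(A, B))

def ce_distance(A, B):
    """Symmetric cross-entropy distance CE*(A, B) = CE(A, B) + CE(B, A),
    model (14). Non-negative, and zero exactly when A equals B."""
    return cross_entropy(A, B) + cross_entropy(B, A)

# Illustrative IFNs (not the alpha_1..alpha_3 of Example 2)
alpha_x, alpha_y = [(0.7, 0.2)], [(0.6, 0.3)]
print(ce_distance(alpha_x, alpha_y))
print(ce_distance(alpha_x, alpha_x))   # 0.0, as property (2) requires
```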
A Multi-Attribute Emergency Decision Model Based on Intuitionistic Fuzzy Cross-Entropy and Grey Correlation Analysis

This section analyzes the IFMAEDM problem in which DMs have a certain subjective preference for the alternatives.

Problem Description

Taking the Wenchuan earthquake of May 12th, 2008 as a study case, the government needs to build a batch of temporary shelters to rescue the victims in the disaster area. Considering the impact of earthquakes, the government has a certain priority (subjective preference) for the construction of regional shelters. After determining the geographical location, disaster risk, rescue facilities, and feasibility, a number of rescue operations in the disaster-affected areas began in an orderly manner. The whole decision-making process aims to find the optimal solution through intuitionistic fuzzy cross-entropy and grey correlation analysis, determining the area where a shelter should be built first. It can be abstractly understood as follows: the decision maker (government) gives the IFN $(\mu_{ij}, \nu_{ij})$ representing the attribute value (agree, disagree, neutral) for each of a series of alternatives (disaster-affected areas) $A_i (i = 1, 2, \ldots, m)$ according to the objective evaluation attributes (specific factors of the disaster situation) $C_j (j = 1, 2, \ldots, n)$. This denotes that the decision maker's approval degree is $\mu_{ij}$, objection degree is $\nu_{ij}$, and neutrality degree is $\pi_{ij} = 1 - \mu_{ij} - \nu_{ij}$ for alternative $A_i$ under attribute $C_j$. The attribute weights are expressed as $\omega_j$ and satisfy $0 \le \omega_j \le 1 (j = 1, 2, \ldots, n)$ and $\sum_{j=1}^{n}\omega_j = 1$. The IFNs meet the condition $0 \le \mu_{ij}, \nu_{ij}, \pi_{ij} \le 1$. Using the fuzzy numbers, the multi-attribute intuitionistic fuzzy decision matrix $R_{m \times n}$ is constructed, in the form shown in Table 3.

Table 3. Intuitionistic fuzzy decision matrix.

Analyzing the Wenchuan earthquake, the DMs have a certain subjective preference for the alternatives, which needs to reflect the severity of each disaster area. The preference values are also IFNs, $c_i = (\sigma_i, \delta_i) (i = 1, 2, \ldots, m)$. In the following, the method of intuitionistic fuzzy cross-entropy and grey correlation analysis is used to build and solve the optimal decision model.

Steps of the Intuitionistic Fuzzy Cross-Entropy and Grey Correlation Analysis Algorithm

For the uncertain MAEDM problem with a certain subjective preference, taking the Wenchuan earthquake shelter ranking problem for analysis, the comprehensive algorithm of intuitionistic fuzzy cross-entropy and grey correlation analysis is used to solve it. The specific steps are as follows (see Figure 1 for the flow framework).

Figure 1. Algorithm framework of intuitionistic fuzzy cross-entropy and grey correlation analysis.

Step 1. According to the data given in the background of the Wenchuan earthquake case, determine the alternatives $A_i$, the objective evaluation attribute values $C_j$, the decision maker's subjective preference values $c_i$, and the intuitionistic fuzzy evaluation decision matrix $R_{m \times n}$.

Step 2.
Using the intuitionistic fuzzy cross-entropy distance, the grey relational coefficient between the objective evaluation value of each alternative and the subjective preference value of the DMs is computed as:

$$\theta_{ij} = \frac{\min_i \min_j CE^{*}_{ij} + \xi \max_i \max_j CE^{*}_{ij}}{CE^{*}_{ij} + \xi \max_i \max_j CE^{*}_{ij}} \qquad (16)$$

Here $\xi$ is called the grey resolution coefficient, with value range $0 \le \xi \le 1$; it is often set as $\xi = 0.5$. The coefficients satisfy $0 \le \theta_{ij} \le 1 (i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n)$. The larger the grey relational coefficient $\theta_{ij}$, the closer the objective evaluation value is to the subjective preference value. In model (16), $CE^{*}_{ij}$ is the intuitionistic fuzzy cross-entropy distance, obtained by applying model (14) to the objective evaluation value $(\mu_{ij}, \nu_{ij})$ and the subjective preference value $c_i = (\sigma_i, \delta_i)$; this constitutes model (17).

Step 3. On the basis of the solution method for the grey relational coefficient given in model (16), the weight of each attribute is calculated to determine the comprehensive relational coefficient $\theta_i$ of each alternative. Three cases are discussed: the attribute weights are completely unknown, completely known, or known only within a range of values.

Case 1. The attribute weights are completely unknown. In order to determine the attribute weights, the average information entropy of each attribute must be obtained. On the basis of intuitionistic fuzzy entropy, the average information entropy $E(C_j)$ of each attribute is calculated according to model (18), in which the natural logarithm is taken so that the entropy is normalised to at most 1 and remains bounded. Transforming the average information entropy yields the attribute weights:

$$\omega_j = \frac{1 - E(C_j)}{\sum_{k=1}^{n}\left(1 - E(C_k)\right)} \qquad (19)$$

With the weight parameters of each attribute determined and substituted into

$$\theta_i = \sum_{j=1}^{n}\omega_j\,\theta_{ij} \qquad (20)$$

the comprehensive relational coefficient $\theta_i$ of each alternative can be aggregated.

Case 2. The attribute weights are fully known. Under this condition, the grey relational coefficient $\theta_{ij}$ for each alternative and attribute is obtained using model (16), and the comprehensive relational degree $\theta_i$ of each alternative is obtained by combining model (20).

Case 3. The value range of the attribute weights is known. Based on maximising the closeness between the weights, whose ranges are known, and the subjective preference of the decision maker, a linear programming model with the attribute weights as variables is constructed (model (21)); in this way, the weight parameters of each attribute can be determined. The weights $\omega_j$ are calculated by establishing the optimization model that maximises the comprehensive grey relational coefficients $\theta_i$:

$$\max \sum_{i=1}^{m}\theta_i = \sum_{i=1}^{m}\sum_{j=1}^{n}\omega_j\,\theta_{ij}, \quad \text{s.t.}\ \omega_j \in [\omega_j^{-}, \omega_j^{+}],\ \sum_{j=1}^{n}\omega_j = 1 \qquad (22)$$

The corresponding linear programming model is solved with the programming software Matlab (R2017b), and the attribute weights for the alternatives are obtained. These are then substituted into model (20) to determine the comprehensive relational degree $\theta_i$.

Step 4. Based on the comprehensive relational coefficients obtained under the three different attribute-weight conditions in Step 3, the alternatives for the earthquake shelter are ranked according to their magnitude. The larger the $\theta_i$, the better the alternative and the higher it ranks.

Step 5. A sensitivity analysis is performed by setting different values of the grey resolution coefficient in the relational coefficient, and the differences in the ranking of alternatives under different resolution coefficients are compared and analyzed.
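Before turning to the numerical case, the two core numerical steps, models (16) and (20), can be expressed compactly in code. The NumPy sketch below is a minimal rendering under the reconstructions above: it turns a cross-entropy distance matrix into grey relational coefficients and aggregates them with attribute weights; the sample matrix is hypothetical, not the paper's data.

```python
import numpy as np

def grey_coefficients(ce, xi=0.5):
    """Grey relational coefficients, model (16):
    theta_ij = (min CE* + xi * max CE*) / (CE*_ij + xi * max CE*),
    with min/max taken over the whole distance matrix."""
    ce = np.asarray(ce, dtype=float)
    lo, hi = ce.min(), ce.max()
    return (lo + xi * hi) / (ce + xi * hi)

def comprehensive(theta, weights):
    """Comprehensive grey relational coefficients, model (20):
    theta_i = sum_j w_j * theta_ij."""
    return np.asarray(theta) @ np.asarray(weights)

# Hypothetical 3-alternative x 2-attribute distance matrix
ce = [[0.010, 0.020],
      [0.004, 0.015],
      [0.030, 0.008]]
theta = grey_coefficients(ce, xi=0.5)
print(comprehensive(theta, [0.6, 0.4]))   # one coefficient per alternative
```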
A Numerical Case Study on the Ranking of Wenchuan Earthquake Shelters In this section, the traditional intuitionistic fuzzy distance and the intuitionistic fuzzy cross-entropy distance are used to analyze and compare the ranking of earthquake shelters. Intuitionistic Fuzzy Cross-Entropy Distance and Grey Correlation Analysis The stability and reliability of the intuitionistic fuzzy cross-entropy and grey correlation coefficient method are analyzed through comparative experiments. Assume that the government carries out shelter assessment and optimization for the five areas with the largest disaster impact, denoted A, B, C, D, and E. The government analyzes and evaluates the geographical location C_1, disaster risk C_2, rescue facilities C_3, and feasibility C_4 of the five disaster areas. The decision maker adopts IFNs to express the objective evaluation values of the alternatives under the different attributes, and the resulting intuitionistic fuzzy decision matrix R_{5×4} is shown in Table 4. Table 4. Objective evaluation value of each alternative. In order to choose the best alternative for building a shelter in the earthquake disaster area, the government adopts the intuitionistic fuzzy cross-entropy and grey correlation analysis method to make the decision. Step 1. Determine the alternatives A, B, C, D, and E; the objective evaluation attributes C_1, C_2, C_3, C_4; the decision maker's objective evaluation matrix R_{5×4}; and the subjective preference values c_1, c_2, c_3, c_4, c_5. Step 2. According to model (17), the intuitionistic fuzzy cross-entropy distance between the objective evaluation value and the subjective preference value of each alternative is calculated to form the distance matrix. Step 3. Assuming the grey resolution coefficient ξ = 0.5, the grey correlation coefficient between the decision maker's subjective preference values and the objective evaluation values is calculated according to model (16), giving the coefficient matrix. Step 4. Calculate the attribute weights ω_j from the known information provided by the case. When the attribute weights are fully known, the model is relatively easy to solve; the following therefore focuses on two situations: the attribute weights completely unknown, and the attribute weight range known. Case 1. The attribute weights are completely unknown. Following the idea of intuitionistic fuzzy entropy, the average intuitionistic fuzzy entropy of each attribute is obtained by combining model (18): E(C_1) = 0.5424, E(C_2) = 0.7385, E(C_3) = 0.5837, E(C_4) = 0.6498. Then, according to model (19), the attribute weights are ω_1 = 0.3080, ω_2 = 0.1761, ω_3 = 0.2802 and ω_4 = 0.2357. Substituting these weights into model (20), the comprehensive grey correlation coefficients of the alternatives are θ_1 = 0.8544, θ_2 = 0.7133, θ_3 = 0.8730, θ_4 = 0.9575, and θ_5 = 0.7930. From these values, the result is θ_4 > θ_3 > θ_1 > θ_5 > θ_2, i.e., D ≻ C ≻ A ≻ E ≻ B. Therefore, alternative D is the best, and the government should give priority to building earthquake shelters in that region. To demonstrate the superiority and stability of the intuitionistic fuzzy cross-entropy and comprehensive grey correlation analysis algorithm proposed in this paper, different resolution coefficients ξ are set for sensitivity analysis, to check whether the above ranking fluctuates. Setting ξ = 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, the resulting comprehensive correlation coefficients are shown in Table 5; the ranking of the alternatives did not fluctuate with the change in resolution coefficient.
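The Case 1 numbers above can be checked with the usual entropy-weight rule; assuming ω_j = (1 − E_j)/Σ_k(1 − E_k) for model (19) reproduces the reported weights to rounding:

```python
import numpy as np

# Average intuitionistic fuzzy entropy per attribute, as reported in the case study.
E = np.array([0.5424, 0.7385, 0.5837, 0.6498])

# Assumed entropy-weight rule for model (19): less fuzzy attributes weigh more.
w = (1.0 - E) / (1.0 - E).sum()
print(w.round(4))          # [0.308  0.176  0.2802 0.2357], matching to rounding

# Model (20) is then the weighted aggregation theta_i = sum_j w_j * theta_ij,
# i.e. theta_comp = theta_matrix @ w for an m x n coefficient matrix.
```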
In order to verify the reliability and stability of the proposed method more intuitively, we use Python graphics to carry out simulation experiments on the ranking of each alternative against the grey resolution coefficient; the results are shown in Figure 2 (G denotes the grey resolution coefficient). Table 5. Results of the comprehensive correlation coefficient under different resolution coefficients. Across all tested values of G, the ordering of the alternatives is maintained. The simulation experiment shows that D is the best alternative for building a shelter in the earthquake disaster area, and the decision result does not fluctuate, which demonstrates strong stability. To further verify the stability and superiority of the intuitionistic fuzzy cross-entropy and comprehensive grey correlation analysis algorithm when the attribute weight range is known, different resolution coefficients are again set for sensitivity analysis, and the optimal alternative and decision results are compared. Taking ξ = 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, and 1.00, the attribute weights and the comprehensive grey correlation coefficients of each alternative are shown in Tables 6 and 7. From the table data, the change in the grey resolution coefficient affects neither the attribute weights nor the decision result for the alternatives, which is still D ≻ C ≻ A ≻ E ≻ B; building the seismic shelter in area D is always the best alternative. In addition, when the weights are completely unknown, the comprehensive grey correlation coefficients of the alternatives are higher than when only the range of the attribute weights is known. More importantly, when the grey resolution coefficient varies from 0.4 to 1.0, whether the weights are unknown or their range is known, the variation range of the comprehensive grey correlation coefficient of alternative D is the smallest, at 0.0300 and 0.0302, respectively (see Table 8). Alternative B is always the worst, and its fluctuation is also the largest, at 0.1438 and 0.1239, respectively. Based on this, the stability of the proposed method is supported.
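The sensitivity check itself is easy to script. A minimal sketch, again with hypothetical inputs and the grey-coefficient form assumed earlier, recomputes the ranking for each resolution coefficient and tests whether it ever changes:

```python
import numpy as np

def ranking(dist, w, xi):
    """Grey coefficients from a distance matrix, then the weighted comprehensive
    coefficient (model (20)); returns indices sorted from best to worst."""
    theta = (dist.min() + xi * dist.max()) / (dist + xi * dist.max())
    comp = theta @ w
    return tuple(np.argsort(-comp))

rng = np.random.default_rng(0)
dist = rng.random((5, 4))                  # hypothetical 5x4 distance matrix
w = np.full(4, 0.25)                       # hypothetical equal weights

orders = {ranking(dist, w, xi) for xi in (0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0)}
print("ranking stable across xi:", len(orders) == 1)
```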
Figure 3. Compared to Figure 2, the comprehensive grey correlation coefficients decrease, but the overall trend of each alternative does not change and the decision results remain the same. Whether the attribute weights are known or not, the optimal alternative and the ranking results are identical, which shows the superiority and stability of the method. Through the above comparative analysis, the intuitionistic fuzzy cross-entropy and grey correlation analysis method achieves good results in solving MAEDM problems, and the ranking results have strong stability and environmental adaptability. Traditional Intuitionistic Fuzzy Distance and Grey Correlation Analysis Based on the data of the earthquake-shelter ranking problem above, the traditional intuitionistic fuzzy distance and grey correlation degree are now used to derive the ranking results. The traditional intuitionistic fuzzy distance model (4) has already been given; the corresponding grey correlation coefficient ε_ij is then defined in the same way as model (16), with the cross-entropy distance replaced by the traditional distance between r_ij and c_i, where r_ij denotes the objective evaluation value, c_i denotes the subjective preference information, and the grey resolution coefficient is ξ = 0.50. Step 1. Calculate the grey correlation coefficient between the objective evaluation value and the subjective preference information of each alternative. Step 2. Determine the attribute weights. Since the range of the attribute weight values is known, model (21) is used to establish the corresponding single-objective programming model; solving it gives the attribute weights ω_1 = 0.30, ω_2 = 0.18, ω_3 = 0.28, and ω_4 = 0.24. Step 3. On the basis of model (20), the comprehensive grey correlation coefficient ε_i is calculated. Step 4. Determine the ranking of the alternatives according to the size of the comprehensive grey correlation coefficient ε_i. The resulting ranking is D ≻ E ≻ C ≻ A ≻ B.
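For the comparison baseline, the snippet below sketches the traditional intuitionistic fuzzy distance; the normalised Hamming form is an assumption standing in for model (4), whose printed formula is not reproduced above. Feeding these distances into the same grey-coefficient routine used earlier yields the ε_ij of this subsection.

```python
def if_hamming(a, b):
    """Assumed traditional normalised Hamming distance between IFNs (model (4)):
    d = (|mu_a - mu_b| + |nu_a - nu_b| + |pi_a - pi_b|) / 2."""
    (ma, na), (mb, nb) = a, b
    pa, pb = 1 - ma - na, 1 - mb - nb
    return (abs(ma - mb) + abs(na - nb) + abs(pa - pb)) / 2.0
```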
Comparative Analysis Based on the ranking problem of earthquake shelters, this paper makes a comparative analysis from two aspects. (1) The attribute weights completely unknown versus the attribute weight range known. For a more intuitive comparison, we build on Figures 2 and 3. Regardless of whether the attribute weights are known or unknown, the ranking results of the alternatives remain highly stable: the best alternative is always D, and the worst is always B. The comprehensive grey correlation coefficient of an alternative is positively correlated with the grey resolution coefficient, meaning that the larger the resolution coefficient, the larger the correlation coefficient of the corresponding alternative. Moreover, in the unknown-weight case the comprehensive grey correlation coefficient of each alternative is always higher than in the known-weight-range case, which also indirectly reflects the fact that attribute weights are uncertain in most decision problems (see Figures 4 and 5). In addition, the results obtained by using a reasonable method to determine the attribute weights are more practical. Meanwhile, based on the data in Table 8, we can further analyze the volatility of the comprehensive grey correlation coefficient in the two cases. From Figure 6 (deviation 1 represents unknown weights and deviation 2 represents the known weight range), the deviation curves of the comprehensive grey correlation coefficient in the two weight situations almost coincide; however, when the weights are unknown, the fluctuation amplitude of the comprehensive grey correlation coefficient is still smaller than when the attribute weight range is known. Through this comparative analysis, we can see that the ranking result with unknown weights is more reasonable and more consistent with the uncertainty of the decision environment in MAEDM problems. (2) The traditional intuitionistic fuzzy distance versus the intuitionistic fuzzy cross-entropy distance. From the above solution, the ranking result of the intuitionistic fuzzy cross-entropy method is D ≻ C ≻ A ≻ E ≻ B, and under a thorough sensitivity analysis the result maintains high stability. Using the traditional intuitionistic fuzzy distance method, however, the ranking becomes D ≻ E ≻ C ≻ A ≻ B.
Although the ranking changes slightly, the best alternative is still D and the worst is still B (see Table 9). This further confirms that the method based on intuitionistic fuzzy cross-entropy and grey correlation analysis proposed in this paper has strong stability.
Table 9. Ranking results under different methods.
Method | Ranking result
The traditional intuitionistic fuzzy distance | D ≻ E ≻ C ≻ A ≻ B
The intuitionistic fuzzy cross-entropy distance (unknown weight) | D ≻ C ≻ A ≻ E ≻ B
The intuitionistic fuzzy cross-entropy distance (weight range known) | D ≻ C ≻ A ≻ E ≻ B
According to the above two groups of comparative analyses, D is the best alternative from every angle; for the decision maker planning rescue measures, giving priority to establishing earthquake shelters in area D is the most reasonable decision. Conclusions This paper presents a new MAEDM method based on intuitionistic fuzzy cross-entropy and comprehensive grey correlation analysis. The main contributions are as follows: (1) It overcomes the limitations of the traditional intuitionistic fuzzy geometric distance algorithm and introduces the intuitionistic fuzzy cross-entropy distance measure, which not only retains the integrity of the decision information but also directly reflects the differences between intuitionistic fuzzy data. (2) It focuses on the weight problem in MAEDM and analyzes and compares the cases of known and unknown attribute weights, which greatly improves the reliability and stability of the decision results. (3) The grey correlation analysis fully accounts for the fit between the objective evaluation values and the decision maker's subjective preference values; on this basis, a sensitivity analysis of the grey resolution coefficient makes the ranking result more defensible. (4) The intuitionistic fuzzy cross-entropy and grey correlation analysis algorithm is applied to emergency decision-making problems such as the location ranking of shelters in earthquake disaster areas, which greatly reduces decision risk. (5) Comparing the traditional intuitionistic fuzzy distance with the intuitionistic fuzzy cross-entropy distance verifies the validity of the proposed method. A limitation is that the proposed method applies only to emergency decision-making problems with a certain subjective preference; for emergency problems in which the decision maker has no obvious preference, the method needs further study. In addition, considering more attribute indicators when ranking alternatives may yield more convincing results. The following aspects are likely to become research hotspots in the future: (1) In MAEDM, the attribute weight problem will remain a research focus; considering the time factor and developing the weights into a dynamic setting may be an interesting topic. (2) The decision maker's preference relation and the attribute weights often carry great uncertainty; discussing multi-attribute emergency decisions with more reliable robust optimization is an effective approach [39][40][41].
9,856
2020-07-01T00:00:00.000
[ "Computer Science" ]
Research on Multi Position and Parallel Detection Device for Universal Circuit Breaker Controller This paper introduces a design scheme for a performance test system, based on virtual instrument technology, for residual current electrical fire monitoring detectors. In order to test monitoring detectors of different current specifications compatibly, the system combines virtual instrument technology and computer control technology; it improves the output current accuracy of the current generator through voltage compensation and closed-loop control, and it provides two sets of pneumatic fixtures of the same type but different sizes to connect monitoring detectors of different specifications. The software design adopts a layered structure and a configuration-file-based method to better satisfy the test requirements of different products. Practical application shows that the system delivers accurate test results, high efficiency and convenient operation. Introduction The universal circuit breaker is a switching device used to protect and control the power system and its equipment against various faults such as overload, undervoltage, short circuit and ground fault. Its core is its internal controller unit: the protection and control functions of the universal circuit breaker are realised through this internal controller. The multi-station parallel test system described in this paper is designed to test the controller before it is assembled into the circuit breaker. It improves the efficiency of the test equipment, reduces the testing cost of the whole system, and addresses the problem of rapid product testing in mass production. System function analysis The universal circuit breaker controller mainly comprises a control part and an execution part. The control part measures the input signals, generates the protection signal required by the specified action according to the signal magnitude, and drives the execution part to act, thereby realising the various circuit breaker protections. Accordingly, the universal circuit breaker controller test system should have the following test functions. (1) Measurement function test. The universal circuit breaker controller can measure electrical parameters such as current, voltage, power and frequency with high accuracy, and its multi-characteristic protection function is built on this measurement function; the measurement test is therefore the foundation of the whole test. The test system can verify the measurement function of the controller against the error type and magnitude given in the product specification. (2) Multi-characteristic protection function test. The system can test the three-stage overcurrent protection of the circuit breaker, grounding protection, reverse power protection, over- and under-frequency protection, undervoltage protection, overvoltage protection, current imbalance protection, ZSI protection and other protection functions. The test system controls the signal source to output a given signal according to the specification and test item of the controller under test, and checks whether the protection function operates correctly and whether the action time is within the specified error range. The key to testing the multi-characteristic protection functions is to provide accurate, high-precision and high-voltage test signals.
(3) Communication function test. The "four remote" functions of the circuit breaker can be realised via the Modbus-RTU protocol on the RS485 interface. Comparison of test plans A traditional test bench has only one station: a single set of equipment and instruments tests one product at a time, so resource utilisation is low and production efficiency is unsatisfactory. For testing large quantities of products, multi-station testing has gained increasing recognition and adoption. But simply increasing the number of stations has its limits; beyond an appropriate number, other techniques must be adopted to further improve efficiency. In multi-station testing there are several options, such as serial testing, pipeline (assembly-line) testing and parallel testing. In this system we use parallel testing, in which the same test task is executed simultaneously at all five stations; this arrangement also reduces the difficulty and complexity of the software design. Multi-station parallel testing can significantly increase the throughput of the test system and further improve resource utilisation and production efficiency. Taking a five-station system with 3 test items as an example, the serial test (Table 1), the pipeline test (Table 2) and the parallel test (Table 3) can be compared. It is easy to conclude that, among the multi-station schemes, the serial test takes the longest time, the parallel test takes the shortest, and the parallel test is therefore the most efficient. From the resource point of view, however, not all tasks can be tested in parallel; tasks that use the same test resource can only be tested serially. It is therefore necessary to solve the resource allocation problem in the multi-station test system. Test task decomposition and resource allocation Test task decomposition is the precondition and foundation of parallel testing, and it affects the execution efficiency and execution time of the parallel test. Decomposition should follow the basic principles of normalisation and relative independence of strongly correlated sub-tasks, while also considering resource usage and the production practice of the enterprise; a decomposition that is too coarse or too fine is not conducive to project management and scheduling. A rough timing comparison of the three multi-station schemes is sketched below.
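A minimal sketch of that timing comparison, with hypothetical task durations; the pipeline estimate assumes equal-length slots of max(tasks), which is a simplification:

```python
# Hypothetical durations (minutes) of 3 test items on one controller, 5 stations.
tasks, stations = [2.0, 3.0, 1.5], 5

# Serial: one station finishes all items before the next station starts.
t_serial = stations * sum(tasks)

# Pipeline: stations enter the line one slot apart; with equal slots of
# max(tasks), the makespan is (stations + items - 1) * max(tasks).
t_pipeline = (stations + len(tasks) - 1) * max(tasks)

# Parallel: all five stations execute the same item at the same time.
t_parallel = sum(tasks)

print(t_serial, t_pipeline, t_parallel)   # 32.5 21.0 6.5 -> parallel is shortest
```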
Test system design The overall structure of the universal control unit inspection device is shown in Figure 1. The system mainly consists of an industrial control computer, an NI data acquisition card on the PCI bus, a three-phase standard signal source, single-station test interface circuits, current and voltage monitoring modules, a communication module and the test stand. The circuit breaker controller can be conveniently placed in a test station and connected through a variety of terminals. The power parameter measurement module WB1831B35 is used to measure the electrical parameters of the three-phase standard signal source. The test system software adopts a hierarchical structure and modular design: for ease of management and scheduling, each test item is implemented as a separate module, and all test items can be selected or skipped according to the actual situation, with the selection stored in the system configuration file, making the test system more flexible and convenient to use. The system software mainly includes the main test program, the system initialisation program, the programmable voltage source control program, the programmable current source control program, the test program modules and the report generation program. The main test program schedules and coordinates the whole system; the system workflow is shown in Figure 3. Conclusion This paper introduces an automatic test system for universal circuit breaker controllers that combines virtual instrument technology and computer control technology. Through the design of array-type three-phase current sources and three-phase voltage sources, it realises multi-position and parallel testing. Compared with traditional automatic test systems, this system offers high testing efficiency, high resource utilisation and convenient operation. Practice shows that the approach has high practical value and good prospects for adoption; it is a practical and feasible solution. Figure 1. The test system block diagram.
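Returning to the software design described above, the configuration-file-based selection of test modules can be sketched as follows; the module names and configuration keys are hypothetical, not the system's actual identifiers:

```python
import json

# Hypothetical configuration: which test modules to run for a given product.
CONFIG = json.loads("""
{
  "product": "controller-400A",
  "tests": ["measurement", "overcurrent_protection", "communication"]
}
""")

# Each test item is an independent module selected at run time from the config.
def test_measurement(station):   return f"station {station}: measurement OK"
def test_overcurrent(station):   return f"station {station}: overcurrent OK"
def test_communication(station): return f"station {station}: Modbus-RTU OK"

MODULES = {
    "measurement": test_measurement,
    "overcurrent_protection": test_overcurrent,
    "communication": test_communication,
}

for station in range(1, 6):      # five stations run the same selected items
    for name in CONFIG["tests"]:
        print(MODULES[name](station))
```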
1,691.2
2016-01-01T00:00:00.000
[ "Computer Science" ]
The attitude of risk taking Islamic junior high school (MTs) students in learning mathematics This study aims to determine the risk-taking attitude of students at Islamic Junior High Schools (MTs) in Bekasi towards learning mathematics. This is preliminary research to gather information about risk-taking attitudes in preparation for subsequent research. Data were obtained through a questionnaire of 20 items covering four indicators: being careful in acting, having peace of mind, being resolute in making decisions, and being confident in acting. The respondents were 97 seventh-grade MTs students, taken with random techniques from two MTs in the city of Bekasi. The research instrument was adopted from the DOSPERT scale and adapted to the abilities of seventh-grade MTs students. The attitude of risk taking is part of the student's attitude of responsibility towards learning mathematics, whether during preparation, during the learning process or afterwards. It is important to measure the attitude of risk taking so that it can be trained continuously, because a well-trained risk-taking attitude will help students succeed in learning and, later, in work. Introduction The process of realizing an intelligent and prosperous society requires knowledge, strategy, struggle and sacrifice. In this regard, education is the appropriate vehicle for beginning to "educate" the nation's children from an early age. The world of education has scientists and experts who know the strategies of this struggle and are willing to make sacrifices to improve the quality of graduates. The social theory of Pip Jones et al. states that a smart graduate must have high creativity [1]. People who are smart and creative will be successful and prosperous in the course of their lives. Such superior individuals, of course, make decisions by taking into account all the risks they may incur. The risks incurred tend towards harmful outcomes, but the losses can be minimized. This attitude is called risk taking. Theoretical Study 1.1.1. Risk Taking Attitude. To the researchers' knowledge, no studies have previously been conducted on risk-taking attitudes in mathematics learning. Knowingly or not, however, the process of completing mathematics problems always requires a risk-taking attitude. Therefore, in defining risk taking in this study, the definition is adopted from social studies. Basically, the human psyche is divided into two aspects, namely ability and personality. Djaali classifies the aspect of ability to include learning outcomes, academic achievement, intelligence and talent, whereas the personality aspect includes character, disposition, adjustment, interest, emotion, attitude and motivation [2]. Furthermore, Djaali states that ability and personality are revealed through behaviour [2]. Risk taking, the attitude to risk examined here, is part of a person's attitude of responsibility, which is reflected in their behaviour. The meaning of the word risk, according to the Indonesian Dictionary, is "a less favourable result"; it leans towards hurt and harm. This means that when a person must decide among many options it will not be a pleasant task: each choice requires weighing good against bad, profit against loss, danger against safety, and all the options lean towards harm and danger. A common example occurs for someone who has just graduated and faces two options: go to college or work?
If you go to college you must be prepared for considerable costs, the entrance tests to be faced, and possibly a long separation from your family if the studies take place in another city. If you want to work you must be prepared for the job application process, the likelihood of being rejected more often than accepted, start-up fees or capital, and a small salary with a heavy workload (as an inexperienced worker), and so forth. When facing such a choice, a decision must still be taken if you want to get on with life, with all its risks and consequences. The ability to take risks is part of character or personality. The process of character formation in students can be described as in Figure 1. As Figure 1 shows, a person's character can be formed through their way of thinking, and thinking ability can be trained by learning. Decision making (risk taking) is a most important attitude for an individual [4], with an impact on the person's social life. The impact of a decision has to be weighed not only as good or bad for the decision maker but also for their social relations. It is this factor that sometimes makes it difficult for a person to decide quickly: many things must be considered before a decisive choice is made. Sometimes, because the risk to be borne is too great, a person ends up adrift in doubt, even though many people are waiting for the decision (typically a management decision); the person's social life then becomes constrained and uncertain. Not infrequently, success is delayed because of doubt or a wrong decision. The theory of risk taking holds that people can use their personal competencies to affect the probability of success or failure in life [5]: they are motivated to choose the level of risk that matches their competence and that they believe is in their favour. The courage to take risks in decision making must be built up in each individual and trained continuously. This should begin early, when children can already understand and tolerate discomfort, for example when they learn to walk. They know that the first steps will surely end in falls and that it will feel uncomfortable, but they bear the risk of falling and the pain until they manage to walk. If they were afraid to try again, the process would be obstructed, or learning would take far longer. Taking risks can be positive or negative, in accordance with the opinion of Leigh, who states that risk taking involves behaviours that carry some potential for danger or harm while also providing an opportunity to obtain some form of reward [6]. Byrnes, Miller and Schafer, and Leigh, state similarly that risk taking encompasses a broad range of behaviours that fall along both positive and negative dimensions [4,6]: risk has two possible faces, positive or negative, beneficial or detrimental. 1.1.2. Correlation between learning mathematics and risk taking. In the process of learning mathematics, especially when solving problems, a decision is a matter of probability or chance: there is roughly a 50% chance of a correct answer and a 50% chance of a wrong one. If the decision has been taken with due consideration, based on observation, experience or knowledge, then even a wrong answer will be readily accepted and readily repaired.
Without the courage to take risks, however, a student has no courage to attempt the mathematical problems set by the teacher. They never feel free to start working on a solution, so in the end the problem is merely looked at and never answered. Decision making, or risk taking, is influenced by brain development as studied in neuroscience [7], that is, human behaviour viewed from the perspective of the activities occurring in the brain. According to Steinberg, changes in the brain system coincide with reproductive maturation, pushing risk taking as an evolutionary adaptation [7]. The brain plays an important role in behaviour, and it can be trained to its maximum through continuous and regular learning activities. Learning that maximizes brain function includes the learning of mathematics, and mathematics learning that is done continuously and regularly is the mathematics learning of formal schools. The courageous attitude in making decisions (risk taking) should therefore be trained from early adolescence (puberty), at around the ages of 10-18 years, through studying mathematics at school. Generally, children of this age are very bold in risk taking but careless and exposed to adverse risk. Steinberg describes the risk-taking ability of samples aged between 10 and 30 years as a bend or curve [7]: adolescence (puberty), roughly ages 7-29, is a developmental stage (using the Conners Impulsiveness Scale). Leshem & Glicksohn, by contrast, report a significant decline from ages 14-16 to ages 20-22 (using the Eysenck & Barratt impulsiveness measures) [8]. Usually, from adolescence into early adulthood, decision making and risk taking are too quick and too bold, somewhat ignoring consideration of the impact of the decision; adolescents tend to be careless because of emotional immaturity. However, as emotions and hormones mature at the beginning of adulthood, risk taking becomes more considered, with considerations based on understanding, on the experience of previous observations, and even on an attitude of responsibility. Therefore, the risk-taking ability of students of secondary school age should be built up with problem-solving exercises in mathematics lessons, so that it becomes better trained, less sloppy, and ready to face the risks of the decisions taken: if the answer or the chosen idea turns out to be incorrect, the student is ready to try again with other ideas and other approaches. This will further enhance students' attitude of responsibility towards their tasks. Based on the expert opinions described above, the indicators of risk taking in this study can be arranged as described in Figure 2. The Purpose of the study Every piece of research must have a purpose. The purpose of this research is to determine the risk-taking attitude of MTs students towards learning mathematics. In addition, the results serve as information for mathematics teachers, so that they can continuously train the attitude of risk taking, helping students become individuals who take responsibility for all decisions, with the risks calculated beforehand. A trained risk-taking attitude will make a person more responsible in their tasks, and a responsible person will of course be accepted and successful wherever they work. Participants The sample in this study consists of seventh-grade students from four A-accredited MTs, chosen purposively because they relate to the particular purpose of the study.
With the purposive technique, the sample is determined directly by the researchers and the school (the principal and the mathematics teacher). This is in accordance with the opinion of Nasution [9] that the subjects of a study sample serve only as sources that can provide information; samples can be things, events, people, or situations that are observed. Nasution's opinion is supported by Piaget's theory of cognitive development, which states that the average 12- to 13-year-old child is already in the formal operational stage; at this age they are still in transition from elementary school to secondary school [9]. This is, of course, the period most in need of guidance in all matters, including decision making. During elementary school age children merely imitate and follow the teacher's orders; in junior high school (MTs) they should begin to be independent. Seventh-grade students, still in this transition, have a sufficiently open and objective way of thinking to be helpful in this research. Therefore, in keeping with the researchers' purpose, seventh-grade MTs students were selected as the sample in this study: 97 students taken randomly from four MTs located in Bekasi. Instruments The instrument used to obtain data in this study was a questionnaire adopted from the DOSPERT (Domain Specific Risk Taking) risk-taking behaviour tool of Weber et al. [10]. It was, however, adapted to a sample of 12-13 year olds who are still undergoing a transition period, the developmental change (Steinberg) from childhood into adolescence (puberty). The answer choices were also simplified from 7 to 4, adapted to the students' cognitive abilities. The questionnaire answer choices offered to the sample are: ALW (Always), if never omitted; OFT (Often), if omitted no more than once; SMT (Sometimes), if not done more often than done; NVR (Never), if never done even once. The research instrument was first validated by 3 mathematicians (1 lecturer), 1 evaluation expert, 2 mathematics teachers and 2 Indonesian-language teachers, and 1 psychologist (a psychological testing institution). The validation measured the readability of the instrument, conformity with the indicators, and construction aspects (20 items). Furthermore, the data were analyzed statistically with the Friedman test; the statistical calculation was performed with the help of the SPSS 20 program, with the following results (an open-source sketch of this test step is given below, after the results tables). Result of Study The scoring technique uses a score range of 1-4 (continuum data). The reason for not being more extreme in scoring is that with scores of 0 and 1 (a dichotomy) an item is simply rejected or accepted outright, whereas attitude assessment should not be too rigid, especially at a transitional age. The validated questionnaire was given to 97 MTs students, and the data in Table 2 were obtained. When the research data are grouped by the aspects of students' "readiness" and "confidence", they are presented in Table 3.
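As referenced above, the Friedman-test step can be reproduced outside SPSS. A minimal sketch with hypothetical 1-4 scores (the real data set has 97 respondents), using scipy:

```python
import numpy as np
from scipy import stats

# Hypothetical 1-4 scores of five respondents on the four indicators
# (careful, calm, resolute, confident); rows are respondents.
scores = np.array([
    [4, 3, 2, 3],
    [3, 3, 2, 4],
    [4, 2, 1, 3],
    [2, 3, 2, 2],
    [3, 4, 2, 3],
])

# Friedman test: do the four related indicators differ in their score
# distributions across the same respondents?
stat, p = stats.friedmanchisquare(*scores.T)
print(f"chi2 = {stat:.3f}, p = {p:.3f}")
```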
Discussion Considering the data obtained from the research results, for the risk-taking aspect of "readiness" on the indicator of carefulness in acting, 59.8% of the sampled seventh-grade MTs students chose the options ALW and OFT; this means that nearly 60% of the students already have a cautious attitude when carrying out actions related to learning mathematics. This careful attitude is shown by the students' readiness in preparing school supplies, doing assignments or homework, and making other preparations. It is one of the attitudes that reduce errors carrying negative risks (such as the teacher's punishment for negligence in assigned tasks). However, 40.20% of students, those choosing the options SMT and NVR, are still not careful, or insufficiently careful, in acting and have no readiness for learning activities. In class such students borrow many items (stationery, even books) from friends who are prepared; if nothing is lent, they disturb their friends by taking things by force, or they do not study but chat loudly. Students like this do not think about the negative risks they will face, or simply do not care about the negative risks their negligence will bring. This lack of readiness is what hinders the teacher in achieving mastery learning, the so-called minimum mastery criteria (KKM), and it also disturbs other friends who are ready to learn. Without guidance from the teacher, such a student will fail in mathematics lessons. On the questionnaire aspect of "readiness" in risk taking, for the indicator of peace of mind, 62.68% of students chose ALW and OFT, showing they are able to think calmly in learning mathematics. Steps to find information, or to ask for information on the problems faced, have been taken, though it must still be examined in more detail whether the steps taken in thinking through the solution are correct. The 37.32% of students who chose SMT and NVR have not been able to think calmly: such students are always hesitant in doing the tasks or solving the problems assigned by the teacher, make no attempt to find information or solution steps, and in the end do not do the work or copy the work of friends. On the questionnaire aspect of "confidence" in risk taking, for the indicator of firmness in making decisions, 43.30% of the students answered ALW and OFT. This means that 43.30% of the students are able to decide what to do with all its consequences; the decisions they take will not be regretted even if they ultimately prove wrong or earn a low score. A sense of self-confidence and independence has arisen in these students, though it still needs close observation to establish whether the decisions they take arise from themselves or merely follow friends. Meanwhile, 56.70% of the students answered SMT and NVR, showing that the share of students who cannot yet decide what to do remains high. Such students are always hesitant in deciding what to do, are less independent, and lack confidence; they generally rely on their closest friends. In the end, a student who lacks the courage to decide will only be an imitator: when the friend is wrong, he will be wrong too, and even when he happens to be right, he does not understand why. On the questionnaire aspect of "confidence" in risk taking, for the indicator of firmness in acting, 56.90% of the students answered ALW and OFT. This means that the students' steadfastness in the solution steps is high, even when their work differs from a friend's. It is this conviction that will lead the students to success in learning, though it still needs closer examination whether the firmness they show is based on weighing good and bad, or is merely a stubborn and careless attitude. Meanwhile, 43.10% of the students answered SMT and NVR, showing that the share of students who are not yet sure or steady in completing tasks in mathematics lessons is still quite high.
They always hesitate and change the results of their work if these differ from their friends'. Students like this will always vacillate with the environment and circumstances. Conclusion From the results of this study it can be concluded that there are still many seventh-grade MTs students who have not been able to determine the steps that minimize the risks incurred, and the risks incurred tend to be negative. Overall, students who chose "not ready" answers still outnumber those who answered often ready and sometimes ready; the difference between students who are always ready and those who are never ready is only 9.07%. On this basis, teachers and parents should always remind and guide students to be ready for mathematics lessons, from preparing school supplies the night before until the teaching and learning process in the classroom. Besides readiness, there is the aspect of confidence, chiefly the self-confidence and independence of the seventh-grade MTs students: although the "always" answers reach almost 53%, this is still lower than the "sometimes" answers, and the overall difference from the "never" answers is 21.09%. This shows that the risk-taking attitude on the readiness aspect is higher than on the confidence aspect. It means that most students actually already understand and make preparations for facing the next day's mathematics lesson, but their confidence is comparatively lower. They must therefore be reminded, guided and supervised by teachers and parents: they are not yet independent and are still full of doubt in deciding how to solve mathematics problems. This risk-taking attitude can be trained by habituation, both at home and at school, with direction, guidance and supervision from teachers and parents; then the process of building and improving the attitude of risk taking will be more successful. Starting with parents at home, who should regularly remind their children and supervise their preparation and the completeness of their school equipment from the earliest school age (playgroup), this habit will make children more independent and more confident at each subsequent level. Parents at home should work together with teachers at school. Teachers should regularly train students to be sure of their chosen way of solving mathematics problems, without hesitating even if it differs from a friend's work, and should cultivate the conviction never to fear being wrong in doing something already believed in. These habits will build the character of a young generation that is well prepared, confident and independent in whatever it does, and that never hesitates to decide something, because the good and bad risks to be faced have been considered. If they become leaders, they will be confident leaders, not easily swayed by incitement and empty promises, and fully responsible for their tasks, rights and obligations. This young generation is expected to carry on the nation's ideals and to safeguard and advance the life of the nation.
4,676
2018-05-01T00:00:00.000
[ "Mathematics", "Education" ]
Towards Understanding Neurodegenerative Diseases: Insights from Caenorhabditis elegans The elevated occurrence of debilitating neurodegenerative disorders, such as amyotrophic lateral sclerosis (ALS), Huntington's disease (HD), Alzheimer's disease (AD), Parkinson's disease (PD) and Machado-Joseph disease (MJD), demands urgent disease-modifying therapeutics. Owing to its molecular signalling pathways evolutionarily conserved with mammalian species and its facile genetic manipulation, the nematode Caenorhabditis elegans (C. elegans) emerges as a powerful and easily manipulable model system for mechanistic insights into neurodegenerative diseases. Herein, we review several representative C. elegans models established for five common neurodegenerative diseases, which closely simulate disease phenotypes specifically in the gain-of-function aspect. We exemplify applications of high-throughput genetic and drug screenings to illustrate the potential of C. elegans to probe novel therapeutic targets. This review highlights the utility of C. elegans as a comprehensive and versatile platform for the dissection of neurodegenerative diseases at the molecular level. Introduction The prolonged average human lifespan is accompanied by an increased incidence of ageing-associated neurodegenerative disorders, including amyotrophic lateral sclerosis (ALS), Huntington's disease (HD), Alzheimer's disease (AD), Parkinson's disease (PD), Machado-Joseph disease (MJD) and other neurological diseases. The growing economic and social burdens imposed by these diseases on global healthcare systems necessitate an urgent solution to diminish their impact. Unfortunately, there have not yet been any effective treatments to unequivocally stop or slow down the disease progression. The ambiguity in current knowledge about disease-causing molecular mechanisms remains an obstacle in developing novel drugs for the diseases. Since its inception as an experimental organism in the 1970s [1], Caenorhabditis elegans (C. elegans) has rapidly emerged as a simple and cost-effective model system for human diseases. The worm is a small (~1 mm), free-living and self-fertilising nematode feeding on a bacterial diet of different species [1]. It has been widely utilised as a paradigm for studies of neurodegenerative disorders, owing to its short life cycle of around 2 to 3 weeks, simple laboratory handling and transparent nature, facilitating the live observation of fluorescence-tagged neurons [2]. Its explicitly mapped network of 302 neurons provides a direct and reliable approach for precise neuronal tracking and analyses [3]. The high genetic and functional conservation between the C. elegans genome and that of mammals [4] enables comparative studies of specific cellular mechanisms and molecular pathways. From a genetic point of view, C. elegans is amenable to high-throughput genetic and drug screens, which provides a unique opportunity to explore molecular mechanisms and therapeutic options for neurodegenerative diseases. In this review, we provide an up-to-date outline of studies that utilise C. elegans as a model organism to investigate the cellular and molecular basis of neurodegenerative diseases. We mainly focus on the currently existing "gain-of-function" models, in the context of five common neurodegenerative diseases, amyotrophic lateral sclerosis (ALS), Huntington's disease (HD), Alzheimer's disease (AD), Parkinson's disease (PD) and Machado-Joseph disease (MJD), for which
C. elegans models have been well established. A graphic illustration of the transgene expression in relation to different disease models is depicted in Figure 1. Figure 1. A simplified anatomical sketch of C. elegans denoting tissues of transgene expression applied in the reviewed disease models. Regions of expression are separated into two organ systems with 4 sub-divisions of specific neurodegenerative diseases. Green: nervous system; grey: muscular system. Amyotrophic Lateral Sclerosis (ALS) ALS is a lethal motor neuron disease characterised by the selective and gradual loss of motor neurons in the spinal, bulbar and cortical regions [5]. The vast majority of ALS cases are sporadic, while 5-10% of patients exhibit apparent autosomal dominant inheritance [5,6]. Several causative genes have been linked to familial ALS, including Cu/Zn-binding superoxide dismutase (SOD1), TAR DNA-binding protein (TDP-43), fused in sarcoma (FUS) and the chromosome 9 opening reading frame 72 (C9ORF72) [7].
2.1. Cu/Zn-Binding Superoxide Dismutase (SOD1) Models SOD1 was first identified as a causative gene of ALS in 1993 [8]. It functions as an antioxidant catalyst for the conversion of superoxide radicals into dioxygen and hydrogen peroxide, essentially preventing superoxide from damaging the cell [9]. To date, over 170 missense point mutations in SOD1 have been discovered, accounting for 10-20% of familial ALS cases [9,10]. Although the exact molecular mechanism of SOD1 protein-related toxicity has not yet been delineated, increasing evidence supports that mutated SOD1 exerts its cytotoxic effects in a gain-of-function manner, causing aggregation, mitochondrial dysfunction, oxidative stress elevation and proteostasis disruption [11,12]. The gain-of-toxicity effects have been observed in transgenic C. elegans by introducing human SOD1 mutants. The overexpression of human SOD1 (G93A) in C. elegans motor neurons led to prominent SOD1 aggregates, axon guidance failure and motor defects [13,14]. Similarly, worms with the pan-neuronal expression of human SOD1 (G85R) displayed insoluble SOD1 aggregates, a reduced axonal size and number and significant locomotory impairment [15,16]. Overexpressing disease-associated SOD1 mutations (A4V, G73R and G93A) in C. elegans body wall muscles yielded similar gain-of-toxicity phenotypes, manifesting as the presence of SOD1 aggregates and severe appearance and locomotion anomalies upon exposure to paraquat-induced oxidative stress compared to a control strain [17]. In general, these studies have managed to recapitulate some of the characteristic clinical phenotypes of ALS, such as the progressive loss of motor capabilities, presence of toxic protein aggregates and axonal abnormalities [18]. The above models in different tissues have greatly facilitated genetic and drug screenings related to SOD1 toxicity. In the mammalian system, SOD1 neurotoxicity has been linked to the proteostasis network. The upregulation of the ubiquitin-proteasome pathway or autophagy activities effectively mitigates SOD1 toxicity [19]. A genome-wide RNA interference (RNAi) screen using the SOD1 (G93A) model in C. elegans corroborated the protective role of the proteostasis network in suppressing SOD1 toxicity and identified 63 genetic modifiers that were efficient in alleviating SOD1 aggregation. These modifiers incorporated different aspects of the proteostasis network, from the chaperone system and ubiquitin-proteasome pathway to autophagy [20]. In another model, TorsinA, an ER protein acting in a chaperone-like fashion, attenuated SOD1 (G85R)-induced ER stress, promoted the proteasomal degradation of mutant SOD1 protein and rescued behavioural defects [21]. Interestingly, genes regulating ageing have also been identified to modulate SOD1 toxicity. The overexpression of daf-16 alleviated aggregate formation and reversed the paralytic phenotype elicited by SOD1 mutations. Consistently, metformin, a lifespan-extension drug, showed protective effects against SOD1-induced cytotoxicity. It significantly increased the lifespan and mitigated SOD1-induced locomotor dysfunctions, partially relying on a daf-16-dependent pathway [22]. Subsequently, metformin has recently entered a phase 2 clinical trial to examine its safety and efficacy in ALS patients [23].
TAR DNA-Binding Protein (TDP-43) Models Mutations in TDP-43 account for approximately 3% of familial ALS cases [24]. TDP-43 is a ubiquitously expressed DNA- and RNA-binding protein of 43 kDa that regulates transcription, alternative mRNA splicing and RNA stability [25]. In ALS patients, the sequestration and redistribution of phosphorylated TDP-43 proteins into intracytoplasmic ubiquitinated inclusions, accompanied by a significant depletion in natural nuclear TDP-43, were discovered in their brain samples [25][26][27]. A gain-of-toxicity from nuclear TDP-43 mislocalisation to cytosolic inclusions has been reported to contribute to TDP-43 proteinopathy [25,28]. The pan-neuronal expression of ALS-linked human TDP-43 mutants (G290A, A315T, Q331K, M337V) elicited neurotoxicity in C. elegans. Worms exhibited distinct neurotoxic features including motor dysfunction, compromised longevity and solid inclusions with phosphorylated protein aggregates, analogous to the hallmarks of TDP-43 proteinopathy in humans [28,29]. When expressed in motor neurons alone, TDP-43 (A315T) caused the progressive deterioration of locomotor function, cytoplasmic insoluble aggregates and motor neuron degeneration, which resembled the cellular phenotypes of human ALS [30]. Phosphorylation has been identified to play an important role in TDP-43 toxicity in C. elegans. Using a C. elegans model, Liachko et al. [29] located the phosphorylation site at serine residues 409/410 (s409/410) as a main driving factor for the higher toxicity of mutant TDP-43 (G290A, M337V). In addition, a potent phosphatase, calcineurin, was recognised for its precise dephosphorylation at the s409/410 sites. The genetic inhibition of this phosphatase in C. elegans profoundly promotes phosphorylated TDP-43 accumulation and aggravates motor deficits [31]. Another drug screen has revealed an alternative potent drug candidate for its neuroprotective effects in treating TDP-43 mutant-caused neurotoxicity that resembles familial ALS characteristics in C. elegans. α-Methyl-α-phenylsuccinimide (MPS), an active metabolite of a widely used anti-epileptic drug, ethosuximide, rescued the locomotor deficits and extended the lifespan in the TDP-43 (A315T) model [32]. This effect was mainly mediated through the DAF-16-dependent insulin-like pathway, indicating the importance of the ageing pathway in relation to treating TDP-43 neurotoxicity [32]. These studies exemplify the practicability and robustness of the C. elegans model system for the high-throughput discovery of new drug candidates. Fused in Sarcoma (FUS) Models About 4% of familial ALS cases are attributed to mutations in FUS, a gene encoding DNA- and RNA-binding proteins that regulate DNA damage, RNA transcription, splicing and transport [33,34]. Similar to TDP-43, the proteinopathy of mutant FUS proteins is characterised by the cytosolic accumulation of toxic FUS aggregates alongside a loss of wild-type [35] proteins, dysfunctional mRNA metabolism and motor neuron degeneration [36,37]. The overexpression of human FUS mutations (R514G, R521G, R522G, R524S and P525L) pan-neuronally in C. elegans showed characteristic neuropathological changes, such as cytosolic aggregates, a gradual decline in locomotor activities and a reduced lifespan. The severity of each mutant corresponded to the level of clinical severity of each one in humans and failed to be restored by the WT FUS protein, indicating gain-of-function toxicity [35]. A consistent phenotype was observed in another study conducted by Vaccaro et al.
[30], where they introduced the full-length FUS variant S57∆ into C. elegans motor neurons. Labarre et al. [38] engineered a single-copy human FUS mutant model in motor neurons, which provoked a similar gain-of-toxicity phenotype, manifesting as progressive locomotory defects and disrupted neuromuscular junctions. Prior studies have suggested a link between FUS toxicity and autophagy. To investigate this further, Baskoylu et al. [39] introduced disease-causing mutations (R524S, P525L) into the C. elegans FUS orthologue fust-1. The study revealed that the neurotoxicity was partially due to a disturbance in autophagy following the loss of fust-1, highlighting possible cellular mechanisms of FUS proteinopathy [39]. Taken together, these models closely mimic the clinical features of mutant FUS-related ALS cases and provide valuable insights into the cellular mechanisms and pathogenesis of the disease.

Chromosome 9 Open Reading Frame 72 (C9ORF72) Models

Hexanucleotide (GGGGCC) repeat expansions within a non-coding region of the C9ORF72 gene are responsible for 10-40% of familial cases, making this the most frequent ALS-causing gene identified to date [40][41][42]. The C9ORF72 protein plays a role in the regulation of intracellular endolysosome trafficking in the autophagy-lysosome pathway [43]. Typically, more than 30 hexanucleotide repeats are considered etiopathogenic, although, in some ALS cases, the repeat counts can reach hundreds to thousands [42,44].

The overexpression of human C9ORF72 containing 29 hexanucleotide repeats in C. elegans, either globally or pan-neuronally, causes a severe age-dependent decline in motility in parallel with a shortened lifespan [45]. This finding was further corroborated by a separate study, in which worms expressing 75 GGGGCC repeats pan-neuronally developed a shortened lifespan, locomotor defects and distinct dipeptide repeat (DPR) protein aggregates [46].

Although exactly how C9ORF72 confers toxicity remains enigmatic, a combination of loss-of-function and gain-of-function mechanisms has been proposed [41,47]. Loss-of-function toxicity results from the perturbed regulation of normal gene expression, which ultimately leads to C9ORF72 haploinsufficiency [47]. For gain-of-function toxicity, the leading theory is based on repeat-associated non-AUG (RAN) translation, in which sense and antisense transcripts containing GGGGCC repeats are translated into five toxic dipeptide repeat (DPR) proteins with a propensity to aggregate intracellularly [48]. The five DPRs translated from GGGGCC repeats are poly-glycine-alanine (GA), poly-glycine-proline (GP) and poly-glycine-arginine (GR) in the sense direction and poly-proline-arginine (PR) and poly-proline-alanine (PA) in the antisense direction [47]. Several studies have reported that the arginine-containing dipeptides PR and GR possess the highest toxicity. Worms expressing 50 repeats of PR or GR in either muscle or motor neurons developed an age-dependent paralytic pattern and stunted growth [49]. Notably, the nuclear localisation of the peptide is required to exert toxic effects [49].
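The RAN-translation scheme just described can be made concrete with a toy script. The sketch below (illustrative only; the helper names are not from the reviewed studies) translates all three reading frames of the sense and antisense strands of a (GGGGCC)n repeat and prints the repeating dipeptide of each frame, recovering the DPR species named above:

```python
# Toy sketch of RAN translation of a (GGGGCC)n repeat (names illustrative).
# Only the six codons that can occur in the repeat and its reverse
# complement are needed.
CODONS = {"GGG": "G", "GGC": "G", "GCC": "A",
          "CCG": "P", "CCC": "P", "CGG": "R"}

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def dipeptide_unit(strand: str, frame: int) -> str:
    """Translate one reading frame and return its repeating dipeptide."""
    codons = [strand[i:i + 3] for i in range(frame, len(strand) - 2, 3)]
    return "".join(CODONS[c] for c in codons)[:2]

sense = "GGGGCC" * 10
for label, strand in (("sense", sense), ("antisense", revcomp(sense))):
    for frame in range(3):
        print(label, frame, dipeptide_unit(strand, frame))
# Sense frames give GA, GP and GR; antisense frames give GP, PA and PR
# (the PA frame prints as "AP" because translation starts mid-dipeptide).
```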
On this basis, Snoznik et al. [50] performed a forward genetic screen and identified spop-1, an orthologue of human SPOP (a conserved nuclear E3 ubiquitin ligase adaptor protein), as responsible for the neurotoxicity of PR50 and GR50. The inhibition of spop-1 significantly improved the abnormal behavioural phenotypes in worms, presenting a potential druggable target for alleviating the neurotoxicity of arginine-containing dipeptides [50].

Polyglutamine (polyQ) Repeat Diseases

The abnormal expansion of CAG trinucleotide repeats encoding polyglutamine (polyQ) tracts in the coding regions of otherwise unrelated genes is the genetic cause of at least nine neurodegenerative disorders, among which Huntington's disease (HD) and the spinocerebellar ataxias (SCAs) represent the two most frequent forms [51]. Although the affected genes differ across polyQ disorders, all of these diseases share a slow, progressive course, with a pathological threshold of polyQ length ranging from around 21 to over 100 for complete penetrance [51]. Proteins containing polyQ expansions are prone to misfolding and aggregation, and it is widely accepted in the current literature that polyQ aggregation involves a gain of toxicity from the expanded polyglutamine repeats [52][53][54]. Intriguingly, a recent study uncovered the aberrant accumulation of novel repeat peptides produced through RAN translation of the CAG repeats (polyalanine, polyserine, polyleucine and polycysteine) in HD human brains, implying an uncharacterised pathogenic pathway contributing to the neurotoxicity of CAG-repeat-related diseases [55].

Huntington's Disease (HD) Models

HD is a dominantly inherited disorder that is monogenic, rare and fatal, with no disease-modifying treatment currently available. It is caused by an elongated CAG repeat in exon 1 of the Huntingtin (HTT) gene that encodes an expanded polyQ stretch [56]. In normal populations, the number of CAG repeats is equal to or below 35, while in patients with HD, the disease is fully penetrant when the repeat length exceeds 40 [57]. Clinical manifestations of HD include the progressive loss of motor control, such as chorea and incoordination, cognitive impairment and neuropsychiatric disorders [57]. A prominent reduction in striatal volume and atrophy of the caudate nucleus and putamen are the core neuropathological changes associated with HD [58]. Even though HTT is widely expressed in human brains, the GABAergic medium spiny neurons of the striatum are strikingly selectively vulnerable, subjecting them to neuronal dysfunction and cell death [59]. A hallmark pathological feature of HD is the deposition of intranuclear and cytoplasmic aggregates, as evidenced in post-mortem human HD brains, transgenic mouse models and in vitro cell culture models [60,61]. The exact physiological role of misfolded HTT is unclear; however, it is hypothesised that the expanded polyQ strand confers a toxic gain-of-function that results in neurodegeneration and the development of HD symptoms [58].

The absence of an HTT orthologue in C. elegans does not prevent it from being a suitable model organism for investigating the mechanisms of polyQ-driven neurotoxicity. Several transgenic C. elegans models have been established to express polyQ of varying lengths fused to fluorescent marker proteins in different groups of neurons; for example, in ASH sensory neurons of C.
elegans under the control of the osm-10 promoter [62]. The results of this study are consistent with findings in human HD, indicating that the age of onset and disease severity are polyQ-length-dependent, and revealing a threshold of polyQ expansion for the appearance of mutant HTT aggregates [62]. The overexpression of HTT171-Q150 in ASH sensory neurons led to nose touch defects before the occurrence of major aggregation, indicating that cellular dysfunction mediated by mutant HTT might precede protein aggregation [62]. Another C. elegans model used the mec-3 promoter to express mutant HTT in touch receptor neurons, where perinuclear aggregates and axonal abnormalities were identified in both young and old adult animals [63]. No cell death was observed in this study, which might be attributed to the lack of intranuclear aggregate formation [63]. Furthermore, the pan-neuronal expression of polyQ in C. elegans was examined using the rgef-1 promoter [64]. Behavioural assays revealed a significant correlation between polyQ repeat size and neuronal dysfunction, and a pathogenic threshold of more than 40 glutamines was required for the formation of insoluble aggregates [64].

In addition to expressing mutant HTT or polyQ in the nervous system, there are muscle-specific C. elegans models, in which polyQ expression is confined to body wall muscle cells. Disease-length polyQ expressed in these cells under the control of the unc-54 promoter caused reduced motility and a shortened lifespan compared to WT animals, and polyQ aggregation and toxicity increased with age [65,66]. Based on the fluorescence distribution in muscle cells expressing polyQ, 35-40 glutamine residues was considered the threshold for aggregation and cellular dysfunction [66]. Mutations in age-1, which prolong the lifespan of C. elegans via an insulin-like pathway, delayed the onset of motility defects and polyQ aggregation [66]. Moreover, the overexpression of ubiquitin was found to alleviate the toxic effects associated with HTT-Q55 [67].

Owing to the facile genetics of C. elegans, forward and reverse genetic screens have been widely employed in C. elegans models to identify gene mutations of interest for polyQ toxicity. Using a previously described C. elegans model [62], genetic screens were conducted to identify proteins protecting against the toxic effects of polyQ, giving rise to the discovery of the polyQ enhancer-1 (pqe-1) gene [68]. Mutations in pqe-1 enhanced neurotoxicity in ASH sensory neurons, and neurodegeneration was exacerbated as the animals aged [68]. Other studies performed genome-wide RNAi screens in transgenic C. elegans models, identifying 88 genetic suppressors of polyQ aggregation and 23 of toxicity [20], as well as 49 modifiers of polyQ-mediated neuronal dysfunction that had previously been found in HD mouse models [69]. Moreover, a mutagenesis screen in a C. elegans model expressing Q40 identified a novel modifier of aggregation, moag-4 [70]. The inactivation of moag-4 suppressed polyQ aggregation in transgenic animals. Notably, MOAG-4 is highly conserved: its human orthologues SERF1A and SERF2 have also been shown to modulate polyQ aggregation and toxicity [70]. These results further confirm the genetic overlap between the nematode and mammals and therefore the feasibility of using C. elegans models to interpret human HD.
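As a small worked illustration of the repeat-length thresholds discussed above (the thresholds come from the text; the function names and toy sequence are hypothetical), the sketch below finds the longest uninterrupted CAG run in a coding sequence and flags it against the roughly 35-40 glutamine boundary:

```python
import re

NORMAL_MAX = 35       # <= 35 repeats: typical of unaffected individuals
PATHOGENIC_MIN = 40   # >= 40 glutamines: aggregation-prone, fully penetrant

def longest_cag_run(cds: str) -> int:
    """Length, in repeat units, of the longest uninterrupted CAG tract."""
    runs = re.findall(r"(?:CAG)+", cds.upper())
    return max((len(r) // 3 for r in runs), default=0)

def classify(n_repeats: int) -> str:
    if n_repeats <= NORMAL_MAX:
        return "within normal range"
    if n_repeats >= PATHOGENIC_MIN:
        return "aggregation-prone / fully penetrant range"
    return "intermediate range"

toy_exon1 = "ATG" + "CAG" * 48 + "CCGCCA"   # hypothetical HTT-like sequence
n = longest_cag_run(toy_exon1)
print(n, classify(n))                        # 48 -> fully penetrant range
```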
A more recent study examined the toxic effects of all six repeat peptide products of CAG-related RAN translation, including polyglutamine, in C. elegans and reported that polyleucine conveys the strongest toxicity, causing the most penetrant phenotype of stunted growth and defective motility in worms [71]. This result corroborates the earlier finding in HD human brains that a mechanism other than the conventionally assumed polyQ repeats might be responsible for the neurotoxicity of CAG-repeat-related neurodegenerative diseases [71].

Machado-Joseph Disease (MJD) Models

Spinocerebellar ataxia type 3 (SCA3), also referred to as Machado-Joseph disease (MJD), is a dominantly inherited neurodegenerative disorder that represents the most frequent form of SCA worldwide [72][73][74]. MJD is caused by an abnormally expanded CAG repeat in exon 10 of the ATXN3 gene [75]. The CAG expansion in individuals affected by MJD usually ranges from 60 to 87 repeats, while in healthy populations it does not exceed 44 [76]. The age of onset of MJD is inversely proportional to the size of the trinucleotide repeat, and disease severity increases with repeat length [72].

Clinically, MJD leads to progressive ataxia and pyramidal signs, accompanied by a wide array of symptoms such as amyotrophy, gait imbalance, ophthalmoplegia, speech difficulties and dysphagia [77]. Neuropathological findings in MJD are highlighted by prominent neuronal loss and atrophy of brain structures including the cerebellum, pons and basal ganglia [78]. Similar to HD, the accumulation of intranuclear and cytoplasmic aggregates is a common feature of MJD, as evidenced in human brain, transgenic animal and cell line studies [79,80].

In C. elegans, full-length and truncated ATXN3 with varying glutamine lengths was expressed pan-neuronally under the control of the unc-119 promoter, causing motility deficits and neuronal dysfunction, including an impaired ubiquitin-proteasome system (UPS), disrupted synaptic transmission and compromised neuronal processes [81]. Another model reported that intranuclear and cytoplasmic mutant ATXN3 aggregates accumulated in a polyQ-length-dependent manner in vivo, and that protein aggregation followed a cell-type-specific pattern in the C. elegans nervous system, with immobile aggregates detected in ventral and dorsal nerve cord neurons but rarely in lateral interneurons [82]. Significantly reduced motility was also observed in animals bearing aggregates compared to the control group, suggesting a direct correlation between mutant ATXN3 aggregation and neuronal dysfunction [82]. Moreover, the ageing-related transcription factors DAF-16 and heat-shock factor 1 (HSF-1) were found to play a protective role against mutant ATXN3 pathogenesis [82]. A more recent muscle-specific C. elegans MJD model presented similar results, showing that the aggregation and neurotoxicity driven by a C-terminal fragment of ATXN3 depend on the polyQ length [83]. Interestingly, this study found that ageing is not necessarily involved in the exacerbation of polyQ aggregation and toxicity.

A large-scale RNAi screen performed in a transgenic C. elegans model expressing the mutant ATXN3 gene identified the transcription-factor-coding gene fkh-2/FOXG1, whose overexpression rescued the mutant ATXN3-induced motility defect, shortened lifespan and neurodegeneration [84]. In another C.
elegans MJD model, the efficacy of befiradol, an agonist specifically targeting the serotonin 5-HT1A receptor, was tested; both acute and chronic treatment resulted in a reduction in mutant ATXN3 aggregation [85].

Alzheimer's Disease (AD)

Alzheimer's disease (AD) is a chronic neurological disorder and the classic manifestation of dementia, with an incidence that increases with age [86]. AD affects diverse regions of the brain, including the hippocampus, temporal lobe, frontal lobe and limbic system [87]. The physiological consequences of AD encompass a broad spectrum of dysfunctions, such as memory loss, cognitive impairment and disturbances in consciousness [88]. Protein aggregation is a hallmark of AD that is believed to cause neural dysfunction and, ultimately, neuronal death [89].

Numerous genes are associated with AD pathology, namely the Apolipoprotein E (APOE), Microtubule-Associated Protein Tau (MAPT) and Amyloid-β Precursor Protein (APP) genes [89]. Mutations in the APP and APOE genes result in the elevated accumulation of amyloid-β plaques between neurons, which disrupt neuronal function and are recognised as an established hallmark of AD pathogenesis [89,90]. Additionally, AD is characterised by the presence of intracellular hyperphosphorylated tau aggregates, which form neurofibrillary tangles that hinder communication between brain cells [89]. The amyloid plaques and fibrillary tangles increase the production of toxic reactive oxygen species (ROS) and impair normal cellular machinery such as autophagy and mitochondrial function, eventually contributing to cell death [91,92].

Amyloid-β (Aβ) Models

Gain-of-toxicity models of Aβ have been applied to different tissues in C. elegans. Intriguingly, when human Aβ1-42 was expressed in body wall muscles, mass spectrometry analysis detected the presence of truncated Aβ3-42 rather than the intended full-length Aβ1-42 [93]. Nevertheless, the worms accumulated toxic Aβ aggregates and developed progressive paralysis [93,94]. McColl et al. [95] successfully achieved the expression of full-length Aβ1-42 in C. elegans muscle cells by inserting an additional Asp-Ala (DA) at the N-terminus of the human Aβ sequence. These worms showed characteristic degenerative features, including soluble Aβ oligomers and behavioural deficits, leading to severe paralysis [95]. Other similar studies confirmed the same observations and additionally noted an increased ROS level and a decreased lifespan [96,97].

Models introducing Aβ into the nervous system of C. elegans have also been constructed. The overexpression of Aβ1-42 in glutamatergic neurons or pan-neuronally caused Aβ deposits, neuronal degeneration, behavioural defects and a shortened lifespan. Fluorescence lifetime imaging revealed that Aβ aggregation starts in a subset of neurons and spreads to other tissues during ageing; the RNAi-mediated depletion of Aβ specifically in these neurons effectively delayed Aβ aggregation and pathology [98]. Other molecules, such as the transcription factor SPR-4, have also been reported to mitigate Aβ-related toxicity [99]. Additionally, an inducible model of the global secretion of Aβ1-42 was constructed to study time-lapse changes in protein aggregation: Aβ proteins spread from neurons and formed distinct immobile aggregates extracellularly [100]. Based on this model, a disintegrin and metalloprotease 2 (ADM-2) was identified as capable of removing extracellular Aβ aggregates [100]. Various genetic and drug screenings have been conducted in C.
elegans Aβ models. An RNAi screen in a muscle expression model revealed that the inhibition of mitoferrin-1 diminished mitochondrial ROS levels, resulting in a reduced paralysis rate and a prolonged lifespan [101]. Natural products such as Holothuria scabra extracts, Radix Stellariae extracts and D-pinitol have been found to reduce Aβ aggregation and decrease ROS levels in Aβ disease models [96,102,103].

Tau Models

A C. elegans homologue of human tau, ptl-1, is involved in the maintenance of neural health during ageing [104]. As the loss of its function cannot be fully rescued by human tau, the majority of tau models in C. elegans opt for the direct expression of human tau and its disease-related variants [104]. The overexpression of disease-associated tau (P301L, V337M and R406W) in the C. elegans nervous system caused insoluble tau accumulation and defects in sensory and motor neuronal function [105,106]. These worms also developed age-dependent breaks in nerve cords with substantial neuronal loss, indicating possible neurodegeneration [105,106]. The overexpression of human HSP70 managed to alleviate the neural dysfunction in these models [107]. A genome-wide RNAi screen employing a pan-neuronal tau (V337M) expression model identified 75 genes that aggravated tau (V337M)-induced toxicity. Forty-six of them have sequence similarity to human genes, including chaperones and proteases that are part of the proteostasis network [107].

Tau aggregation-mediated toxicity was further supported by introducing pro-aggregation and anti-aggregation mutations into C. elegans models [108]. A pro-aggregation mutation with a K280 deletion enhanced the aggregation propensity of tau, while anti-aggregation mutations combining I277P and I308P prevented β-sheet formation and subsequent aggregation [108]. Worms with the pro-aggregation mutation showed impaired mitochondrial transport, severely compromised motility and obvious neuronal dysfunction compared to the anti-aggregation combination [108]. The overexpression of another aggregation-prone tau variant (3PO) also caused the formation of insoluble aggregates and a shortened lifespan [109]. The pan-neuronal overexpression of the disease-associated mutations tau (V363A) and tau (V363I) further differentiated the toxicity of insoluble tau and soluble oligomers: tau (V363A) formed soluble oligomeric assemblies, while tau (V363I) accumulated as highly phosphorylated, insoluble tau assemblies. Interestingly, tau (V363A) impaired presynaptic function in both motor and pharyngeal neurons, whereas tau (V363I) only affected postsynaptic function in motor neurons [106].

Consistent tau-induced neurotoxicity has been demonstrated in a single-copy gene insertion model. Two strains were constructed to mimic common post-translational modifications contributing to tauopathy: tau (T231E) for phosphorylation and tau (K274/281Q) for lysine acetylation [110]. Both strains exhibited reduced touch sensation and an abnormal neuronal morphology, while tau (K274/281Q) additionally hampered neuronal mitophagy under mitochondrial stress [110].
Recently, studies have indicated a novel, aggregation-independent mechanism of tau toxicity. The overexpression of tau (A152T) in the nervous system led to severe locomotor defects and gaps in nerve cords, implying motor neuron degeneration [111]. Close inspection of the touch sensory neurons revealed morphological abnormalities such as convoluted neuronal processes and nonspecific outbranching, resembling common characteristics of aged neurons [111]. These worms also showed aberrant localisation of presynaptic components and neurotransmission defects, as well as abnormal mitochondrial distribution and trafficking [111]. Strikingly, no insoluble tau aggregates were detected, and the addition of anti-aggregation compounds failed to rescue tau (A152T)-related toxicity [111]. Further evidence comes from a pseudo-hyperphosphorylation (PHP) tau model, in which mutated tau (ten serine/threonine residues changed to glutamic acid) was overexpressed in C. elegans to mimic a hyperphosphorylated state [112]. These worms showed defects in motor neuron development and ageing-related neurodegeneration but, surprisingly, lacked apparent tau aggregates [112]. Similarly, a model with tau (R406W) expressed in all neurons showed aberrantly phosphorylated tau but no detergent-insoluble aggregates [113]. Drug screening using the same model identified curcumin, a major phytochemical compound in turmeric, as reducing tau-induced toxicity [113].

Parkinson's Disease (PD)

Parkinson's disease (PD) is a neurodegenerative disease characterised by the loss of dopaminergic neurons in the substantia nigra. Patients with PD develop motor symptoms, including muscle stiffness, slowness of movement and postural instability, and non-motor symptoms such as sleep disorders, cognitive impairment and neuronal dysfunction [114,115]. At the cellular level, the aggregation of α-synuclein (αSyn, encoded by the SNCA gene) is considered the pathological hallmark of PD. In addition, increased ROS levels and impaired autophagy and mitochondrial function together contribute to PD pathology [116][117][118]. A number of genes have been associated with familial forms of PD, including SNCA, LRRK2, PINK1 and PARK7 [119]. Mutations in these genes either directly lead to abnormal αSyn amyloid fibrils or interfere with the physiological pathways involved in mitochondrial function and autophagy [115,[119][120][121][122].

Despite the lack of a functional orthologue of human αSyn in the C. elegans genome, the overexpression of disease-associated mutant SNCA (A53T/A56P/A30P) proteins in C. elegans dopamine neurons leads to αSyn accumulation and locomotory defects, phenocopying the cellular and physiological defects described in mammalian PD models [123][124][125][126][127].

Worms overexpressing a WT human αSyn:Venus fusion in dopamine neurons developed inclusions in the axons and pathological blebbing in the dendrites [128]. The rounded cell bodies and dendritic disorganisation indicated that the process of neurodegeneration was associated with ageing [128]. These worms showed defects in foraging behaviour and in the crawling-to-swimming switch, similar to those induced by dopamine deficiency [128]. Based on this model, reverse genetic screening of >100 PD susceptibility genes identified in a preliminary genome-wide association study (GWAS) yielded 28 genetic modifiers participating in pathways such as calcium signalling and vesicular trafficking [128]. The inactivation of these genes altered the pathological phenotype and alleviated αSyn toxicity [128].
Other studies have introduced human αSyn into different C. elegans tissues. The overexpression of WT human SNCA, either restricted to motor or mechanosensory neurons or broadly in all neurons or the musculature, resulted in the formation of mobile and immobile aggregates and movement defects, indicating an apparent gain of toxicity [123,125,129]. RNAi screens using these models have uncovered different cellular pathways that can suppress αSyn-mediated inclusions and modulate neurotoxicity, including histone modification, choline phosphorylation, cytoskeletal components and vesicular endocytosis [125,129]. Other suppressors, sir-2.1/SIRT1 and lagr-1/LASS2, participate in an ageing-associated cellular pathway, suggesting a potential link between αSyn inclusion formation and cellular ageing [130]. A recent high-throughput kinetic screen identified a small molecule, SynuClean-D, as an αSyn aggregation inhibitor in vitro [131]. Treatment with SynuClean-D in C. elegans expressing αSyn in both dopaminergic neurons and muscle cells substantially reduced proteotoxicity [131]. Furthermore, natural products such as squalamine and chrysin have also been found to suppress αSyn aggregation and alleviate locomotory defects [124,132].

Conclusions

C. elegans has established itself as a favoured model organism in the field of ageing-related disease research. Abundant analyses of gene mutations pertinent to neurodegenerative diseases have been undertaken using this small and simple nematode, recapitulating critical phenotypic features of the diseases. Through these models, C. elegans acts as an informative intermediary, providing mammalian studies with novel candidates for probing the complexities of neurodegenerative diseases. Another compelling advantage is the practicability of conducting large-scale, high-throughput in vivo drug screenings in C. elegans models, in which several compounds have been tested for efficacy against neurotoxicity.

The simplicity of C. elegans does, however, have its drawbacks. The complex and heterogeneous nature of neurodegenerative diseases is difficult to mimic in the simple architecture of the C. elegans nervous system. At the level of neuronal populations, the intricately interconnected structures most affected in HD patients, the caudate and putamen, are absent in C. elegans. At the level of neuronal processes, C. elegans axons lack myelin sheaths, so these models cannot recapitulate the myelin dysfunction that plays an imperative role in the pathogenesis of neurodegenerative diseases [133]. Moreover, C. elegans lacks an adaptive immune system and fails to resemble the comorbidities, such as neuroinflammation, that underlie the pathology of these diseases [2]. In addition, to what extent the drug candidates discovered in C.
elegans models can retain their high efficacy in the human system, or whether they are relevant to human pathology, remains unknown. Nevertheless, these models present novel therapeutic candidates as promising alternatives to the limited effective therapies currently available, and increasing research is being conducted to validate the potency of these drugs in mammalian systems. The worm itself still serves as a robust preclinical tool for enhancing our understanding of the fundamental pathophysiology of neurodegenerative diseases at the molecular and genetic levels. More research is warranted to accelerate this process, potentially by focusing on conserved signalling pathways or molecules involved in disease pathogenesis, which may shed more light on promising disease intervention strategies.

Figure 1. A simplified anatomical sketch of C. elegans denoting tissues of transgene expression applied in the reviewed disease models. Regions of expression are separated into two organ systems with 4 sub-divisions of specific neurodegenerative diseases. Green: nervous system; grey: muscular system.
7,897.4
2023-12-28T00:00:00.000
[ "Biology", "Medicine" ]
Analysis of Gas Recirculation Influencing Factors of a Double Reheat 1000 MW Unit with the Reheat Steam Temperature under Control

In this paper, the simulation software EBSILON is used to simulate double reheat units, and the reheat temperature control mode is explored in depth. In the benchmark system, the influence of the intermediate point temperature on flue gas recirculation (FGR) is analyzed at different loads. Then, the effects of load, coal quality, excess air factor, and feed water temperature on FGR are studied with the intermediate point temperature held at its design value, and the causes of FGR changes are analyzed by comparison with the cutoff bypass flue (CBF) system. The results show that under any load, the FGR decreases with an increase of the intermediate point temperature, while under low load, a change of the intermediate point temperature has a greater impact on the FGR rate. When the intermediate point temperature remains constant, the FGR rate drops with an increase of load at low load and is almost unchanged at high load; the FGR rate of coal with a low calorific value and high moisture content is low, and coal with low volatile matter and high ash content has a great influence on the reheat steam temperature; and the excess air factor and feed water temperature are inversely proportional to the flue gas recirculation rate. In the CBF system, the change trend is similar to the reference system, but under the same working condition, the FGR rate is higher than in the latter.

Introduction

By the end of 2019, coal-fired units still accounted for 55% of China's installed power capacity [1]. Given the situation of abundant coal and tight electric power in China, thermal power generation remains the most important power generation mode in the country, and improving boiler efficiency is an inevitable choice for China's coal-fired power plants [2]. After decades of development, unit parameters have been raised from subcritical to supercritical, and efficiency has increased from 30% to 47% [3]. The double reheat technology has therefore become one of the most effective ways to improve the efficiency of coal-fired power plants [4], and many scholars have made great achievements regarding its economy and key technologies [5][6][7]. By increasing the pressure of superheated steam and the temperature of reheated steam, double reheat technology increases the cycle efficiency and the thermal efficiency of the unit. It reduces the coal consumption of the unit, improves the thermal economy, and reduces the emission of pollutants. Similarly, it can improve the operating conditions of the last stage blade and make the unit safer. However, the use of double reheat technology makes the composition of the thermal system more complex, increases the investment cost, and places higher requirements on operation [8]. Since each temperature parameter is close to the limit value of the pipeline, a small temperature deviation can lead to serious consequences [9]. Therefore, steam temperature regulation is the key to the safe and efficient operation of a double reheat unit.

On the steam side, the main regulation mode is spray desuperheating, but this approach reduces the work share of the high-pressure cylinder and the overall cycle thermal efficiency of the unit, so it is not suitable for routine temperature regulation. Because this method is very responsive, it is generally used as a fine adjustment measure [10].
Flue gas side temperature regulation mainly includes FGR, burner swing, and flue gas dampers. The swing of the burner has a certain effect on steam temperature regulation, but its long-term operation at a non-zero position is affected by the flue gas vortex, so burner swing is not used in normal operation. The main method of adjusting the outlet steam temperature deviation is the flue gas damper. Because the FGR system is relatively simple and offers a wide range of reheat steam temperature regulation, flue gas recirculation remains the principal temperature regulation method for double reheat units. However, FGR leads to a mismatch between the flue gas volume and the air volume, reducing unit efficiency, so waste heat utilization technology must be adopted to offset this adverse effect. A low-temperature economizer can be arranged in the tail flue, but its economic benefit is limited by the flue gas temperature and heat transfer requirements. Further studies show that increasing the air supply temperature of the air preheater (APH) or arranging a bypass flue (BPF) beside the APH to heat condensate can effectively reduce the exhaust gas temperature and improve system efficiency [11][12][13][14]. Ma et al. took a 660 MW thermal power plant as an example and analyzed the thermal performance and technical economy of three typical waste heat utilization processes, namely, a low-temperature economizer, upwind section preheating, and a BPF thermal system; the results showed that the BPF thermal system performed best [15]. Ma Guoqian et al. [16] added and analyzed, on the basis of the BPF, a system in which flue gas waste heat raises the APH inlet air temperature. The results showed that the thermal economy of the system was significantly improved through this advanced flue gas waste heat utilization scheme. At present, many studies confirm that the two kinds of flue gas waste heat utilization systems, the BPF and flue-gas-heated APH inlet air, have high economy and will be widely used in China [17,18].

FGR reintroduces cold flue gas from the tail flue into the furnace, reducing radiation in the furnace, changing the temperature distribution in the boiler, and thereby adjusting the heat transfer ratio [19]. Many scholars have studied the influence of FGR on the boiler system in depth; the results show that FGR can effectively reduce the generation of nitrogen oxides and improve boiler efficiency [20,21]. So far, research on FGR can be roughly divided into two categories. The first explores changes in the boiler system under different FGR rates. Hu, Pei et al. [22,23] applied FGR to oxy-coal combustion and found that the nitrogen oxide content in the flue gas decreased significantly. Byeonghun et al. [24] conducted experiments with laboratory-scale gas-fired boilers and found that when FGR increased at the same air equivalence ratio, the red-heat phenomenon on the burner surface decreased significantly, confirming the advantages of FGR in improving the safe operation of the unit. Liu et al. [25] conducted a quantitative experimental analysis of the performance of an incinerator HRSG under different FGR rates; the results showed that with increasing FGR, boiler efficiency slightly increased and NOx emissions greatly decreased. V.T. Sidorkin et al. [26] reached the same conclusion after introducing recirculated flue gas into the burner.
In addition, simulation calculations combined with experimental results can demonstrate the impact of FGR on the boiler system more persuasively and can eliminate some uncontrollable factors, making the theoretical results more accurate. Wang et al. [27] explored the impact of FGR on denitrification and the coal consumption rate through numerical research, indicating that FGR has obvious environmental and economic benefits. Li et al. [28] used Aspen Plus simulation software to simulate the combustion process of a coal-biomass boiler and concluded that an FGR rate of more than 10% can basically offset NOx emissions. Zhang et al. [29] determined, based on simulation, the optimal FGR rate by thermal calculation and proposed a high FGR rate under low load. The second category analyzes the influence of different FGR introduction positions on the unit. Liu et al. [9,30] carried out numerical analysis of the combustion process of a 1000 MW SUC boiler; they also compared the NOx emissions and the temperature distributions on the steam-water side and flue gas side for different FGR introduction positions. The results show that introducing FGR at the top of the burner is an effective method for controlling NOx generation and steam temperature and also ensures the safe operation of the unit. Ehsan Houshfar et al. [31] studied the influence of the FGR introduction position on NOx production using a laboratory-scale grate combustion reactor and concluded that FGR can further reduce NOx emissions on the basis of staged combustion. However, Ling et al. [32] studied the influence of three different FGR introduction positions on industrial furnaces and found that FGR reduces NOx emissions but slightly increases CO emissions. In fact, flue gas recirculation technology is not only used in power plants, but also plays a positive role in other thermal cycle processes [33][34][35][36].

On the basis of the above literature review, we found that most previous researchers used the FGR as the research variable, adjusting either its recirculation position or its recirculation flow, to explore the resulting trends in different boiler system parameters. However, few papers explore the change of the FGR rate under different working conditions on the premise of ensuring the safe operation of the unit. In this paper, simulation software is used to model operation under different working conditions to obtain the change trend of the FGR rate, which fills this gap. Under safe unit operation, a change of the intermediate point temperature leads to new temperature characteristics of the steam temperature parameters in the boiler. Therefore, this paper first discusses the influence of intermediate point temperature changes on FGR under main steam temperature control. Then the load, coal quality, excess air factor, and feed water temperature are varied under intermediate point temperature control, and the FGR rate is adjusted to keep the main steam temperature unchanged, so as to study the change trend of the FGR rate after varying the above parameters. Through qualitative analysis of the steam control equation, as well as quantitative analysis of the simulation, the causes of the inflection points in the FGR rate curves are explored in depth.
Finally, the change trend of the FGR rate in the flue gas waste heat utilization system is analyzed. The purpose of this paper is to analyze and understand how the FGR rate changes with the working conditions, to provide a practical basis for the adjustment and optimization of boiler parameters, and to provide a reference for studies of system economy.

Equipment Description

In this paper, an ultra-supercritical double reheat system is taken as the research object. EBSILON software is used to model the boiler system, the steam turbine system, and the coupled boiler-turbine whole-plant thermal system. The boiler is a 1000 MW, single-furnace, tower-type, ultra-supercritical once-through boiler with two-stage air supply and four-corner tangential firing. The furnace is composed of a spiral-coil water wall and a vertical membrane water wall, and the pulverizing system adopts a positive-pressure direct-blowing type. The superheated steam temperature is regulated by the water-coal ratio and spray desuperheating; the reheated steam temperature is regulated by flue gas dampers and flue gas recirculation. The design coal of the boiler is bituminous coal, and the common coal quality is shown in Table 1. The heating surfaces at all levels in the boiler are arranged in the upper part of the once-through tower furnace, surrounded by vertical water walls, with spiral water walls in the lower part. The layout of the heating surfaces is shown in Figure 1.

Figure 1. EBSILON simulation boiler model. 1-primary superheater; 2-tertiary superheater; 3-high-pressure and high-temperature reheater; 4-low-pressure and high-temperature reheater; 5-secondary superheater; 6-low-pressure and low-temperature reheater; 7-high-pressure and low-temperature reheater; 8-economizer; 9-burner; 10-denitration device; 11-APH; 12-APH bypass economizer; 13-PA; 14-SA; 15-induced draft fan.

Steam water side process: The feed water first enters the two-stage economizer for heating.
After heating, it enters the lower part of the boiler through the downcomer, then enters the mixing header of the water wall through the ash hopper and the spiral water wall, and then enters the vertical water wall of the upper part of the boiler. Finally, it enters the steam separator after passing through the lead-in pipe. The water separated by the steam-water separator enters the water storage tank, and at the start-up stage this water is pumped back into the economizer through the recirculation pump. The steam separated by the steam-water separator enters the primary superheater arranged at the bottom through the hanging pipes of the heating surface, then enters the secondary superheater after passing through the primary desuperheater, and then enters the tertiary superheater after passing through the secondary desuperheater. Finally, the main steam is delivered to the turbine system through the main steam channel. The exhaust steam from the ultra-high-pressure cylinder (UHP) of the steam turbine enters the high-pressure low-temperature reheater and the high-pressure high-temperature reheater of the boiler in turn for heating, and is then delivered back to the high-pressure cylinder of the steam turbine, completing the primary reheating process. The exhaust from the high-pressure cylinder then returns to the low-pressure low-temperature reheater and the low-pressure high-temperature reheater of the boiler in turn for heating, after which it is delivered through the reheat pipeline to the intermediate-pressure cylinder to continue to work, completing the second reheat process.

Flue gas side process: The blower sends the primary and secondary cold air to the four-compartment APH for heating. The heated air is then mixed with the pulverized coal sent to the furnace for combustion, generating hot flue gas. The flue gas passes through the primary superheater, the tertiary superheater, the high-pressure high-temperature reheater, the low-pressure high-temperature reheater, and the secondary superheater in turn, and then enters the low-pressure low-temperature reheater and the economizer, respectively, through the partition wall in the flue. After the radiation-convection heat exchange across the low-pressure low-temperature reheater, high-pressure low-temperature reheater, and economizer is completed, the flue gas enters the four-compartment rotary APH from the outlet flue of the economizer, together with the two-stage low-temperature economizer in the BPF of the APH, and finally the flue gas is discharged to the electric dust removal system and induced draft fan after mixing. After electric dust removal, some flue gas is led back to the furnace through the FGR flue to control the temperature of the reheaters.

Software Simulation Modeling

EBSILON is a simulation platform developed by STEAG GmbH in Germany. It can be used to design, check, and optimize different types of power stations and to calculate mass and heat balances; it can also monitor a power station dynamically. The software has three advantages: (1) the interface is intuitive; (2) the calculation is efficient and reliable; and (3) it offers abundant material data and an extensive component library. In this paper, EBSILON (Version 14.03) is used to simulate the boiler.
According to the boiler structure diagram, the layout of the heating surfaces, and the equipment, the boiler model is built on the basis of the flue gas system and steam-water system processes. In EBSILON, control logic is used to control and adjust the reheat steam temperature of the unit, and FGR is the main means of control. The controller components are mainly used to adjust the APH, BPF, and reheat temperature. The flue gas temperatures at the outlet of the APH and at the outlet of the BPF are both set to 110 °C, which is used to adjust the proportion of flue gas entering the APH; the bypass primary air (PA) volume is controlled by setting the PA temperature to 195 °C. The primary and secondary reheat steam temperatures are set to the rated value of 623 °C. According to a pre-calculation, for every 10 t/h change of the recirculated flue gas volume, the temperatures of the primary and secondary reheat steam change by 0.41 °C and 0.2 °C, respectively; the primary reheat steam temperature is thus more sensitive to the recirculated flue gas volume than the secondary reheat steam temperature. Two controllers are used to control the primary and secondary reheat steam temperatures. At the same time, the secondary air (SA) temperature is affected by the recirculation flue gas volume controller, and the SA temperature is then brought to its design value through fine adjustment of the flue gas damper.
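As a rough sketch of this trim logic (assuming a simple proportional step; the gain is the sensitivity quoted above, about 0.41 °C of primary reheat temperature per 10 t/h of recirculated flue gas, and all other names and numbers are illustrative):

```python
K_RH1 = 0.41 / 10.0   # °C per t/h of recirculated flue gas (from the pre-calculation)
T_SET = 623.0         # rated reheat steam temperature, °C

def trim_fgr_flow(fgr_flow_th: float, t_rh1_c: float) -> float:
    """One proportional correction of the FGR mass flow (t/h): more
    recirculated gas raises convective reheater absorption, so the flow
    is increased when the primary reheat steam runs cold."""
    error_c = T_SET - t_rh1_c          # positive -> steam too cold
    return fgr_flow_th + error_c / K_RH1

# Example: steam 1.5 °C below setpoint -> add about 37 t/h of FGR.
print(trim_fgr_flow(fgr_flow_th=480.0, t_rh1_c=621.5))
```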
Figure 2 shows the comparison between the design values and simulated values of the steam temperature at the outlet of each stage heat exchanger and the steam flows of the primary and secondary reheat in the boiler system under four working conditions. Abscissae 1-8 correspond, respectively, to the steam temperature at the outlet of the primary superheater, the outlet of the secondary superheater, the outlet of the high-pressure low-temperature reheater, the outlet of the economizer at the side of the high-pressure low-temperature reheater, the outlet of the low-pressure low-temperature reheater, the outlet of the economizer at the side of the low-pressure low-temperature reheater, the primary reheat flow, and the secondary reheat flow. The maximum error is 3.68% and the minimum is 0.01%; the errors at all levels are within 5%. The steam-water side model is therefore built accurately.

Figure 3 shows the comparison of the design values and simulated values of the flue gas temperature and total flue gas volume at the outlet of the heat exchangers at all levels in the boiler system under four working conditions. Abscissae 1-9 correspond to the panel outlet flue gas temperature, the flue gas temperatures at the outlets of the first-stage superheater, the third-stage superheater, the high-pressure final reheater, the low-pressure final reheater, the second-stage superheater, the high-pressure low-temperature reheater, and the economizer at the side of the high-pressure low-temperature reheater, and the total flue gas volume. The maximum error is 3.65% and the minimum error is 0.11%; all errors are within 5%. The flue gas side model is therefore built accurately.

Figure 4 shows the error comparison between the design values and simulated values of some APH parameters under four working conditions. Abscissae 1-4 correspond, respectively, to the PA temperature, the SA temperature, the bypass PA flow, and the flue gas flow at the inlet of the APH. The maximum error is 4.79% and the minimum error is 0.95%, both below 5%. The APH model is therefore built accurately.
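The acceptance criterion used in Figures 2-4 can be expressed as a one-line check; the helper below is a minimal sketch (the data values are illustrative, not the paper's):

```python
def max_rel_error_pct(design: list[float], simulated: list[float]) -> float:
    """Largest relative error, in percent, across design/simulation pairs."""
    return max(abs(s - d) / d * 100.0 for d, s in zip(design, simulated))

design = [623.0, 435.2, 352.8]      # e.g. design-point temperatures, °C
simulated = [622.1, 437.0, 349.9]   # corresponding EBSILON results, °C

err = max_rel_error_pct(design, simulated)
print(f"max relative error: {err:.2f}%", "-> model OK" if err < 5.0 else "-> revisit model")
```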
Experimental Verification

In order to further verify the accuracy of the simulation model, field experiments were carried out on a 1000 MW double reheat unit in China. According to the "performance test code for utility boiler" (GB/T10184-2015), the "instructions for the 2X1000MW ultra-supercritical double reheat once-through boiler of Huaneng Laiwu power plant" (F0310BT001A121), and other standards, the tests were carried out under 75% and 100% load. The main instruments used in the boiler performance test are listed in Table 2.

During the experiment, interfering operations such as soot blowing and coking were avoided in order to reduce the test error. The coal quality was kept as close as possible to the design coal quality. The power plant adjusted the operation of the boiler so that the steam admission parameters of the steam turbine met the test requirements, and the deviation and fluctuation of the parameters met the requirements of the test regulations.

According to the grid method in GB/T10184-2015, 10 measuring points were arranged at the outlet of each APH and the bypass mixed flue. The flue gas from each measuring point enters the mixing drum through the sampler and rubber tube for mixing and is then introduced into the flue gas analyzer for detection. The data are collected and saved by a computer at an interval of 10 s; the connections are shown in Figure 5.

The flue gas temperature measurement and flue gas composition measurement are carried out simultaneously. The thermocouples and the samplers are integrated. Each thermocouple is connected to a digital temperature inspection instrument through a compensation wire, and the data are collected and saved by the computer.
The layout is shown in Figure 6.

The outlet of the mixed flue gas duct of the APH and bypass is the outlet boundary of the boiler flue gas, and the inlet of the induced draft fan is the inlet boundary for the primary and secondary air. The heat loss of the tail flue, the electric dust removal process after the APH, and the heat introduced by the induced draft fan are ignored. From the measured flue gas composition and temperature, the flue gas heat loss is calculated, and the boiler efficiency is obtained by the reverse balance method in GB/T10184-2015. Table 3 shows the comparison between the experimental and simulated values of the main boiler operating parameters. The results show that at 100% load, the measured boiler efficiency is 95.70%, the corrected efficiency is 95.49%, and the simulated efficiency is 94.84%, a relative error of only about 1%; at 75% load, the measured efficiency is 95.89%, the corrected efficiency is 95.67%, and the simulated efficiency is 94.54%, an error of about 1.1%. The errors are within the allowable range, which shows that the boiler system model is accurate.

Theoretical Analysis Basis

In order to analyze the influence of various factors on the recirculation rate, it is necessary to establish the steam temperature control equation and carry out a basic analysis of the intermediate point temperature. Since a change of the intermediate point temperature leads to new steam temperature characteristics during unit operation, the influence of the intermediate point temperature on FGR is analyzed first. Then, the influence of other factors on FGR is analyzed with the steam temperature characteristics held constant.

Intermediate Point Temperature

The intermediate point temperature of a supercritical boiler refers to the working medium temperature in the steam-water separator at the outlet of the water wall. A change of the intermediate point temperature directly reflects the flow rate cooling the water wall and the position to which the large specific heat capacity region of the working medium extends into the furnace under supercritical pressure. If the intermediate point temperature is too high, the water wall flow is reduced and the cooling capacity is poor; if it is too low, the separator operates with water and water enters the superheater, which affects the safe operation of the superheater. In actual operation, controlling the intermediate point temperature and keeping a slight superheat, i.e., 10-30 °C above the saturation temperature or quasi-critical temperature, is of great significance to the efficient and stable operation of the unit.
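The micro-superheat criterion above translates directly into a margin check. The sketch below is illustrative (the band limits come from the text; the function names and example temperatures are hypothetical):

```python
def micro_superheat_margin(t_mid_c: float, t_ref_c: float) -> float:
    """Margin (°C) of the intermediate point temperature over the
    saturation (subcritical) or quasi-critical (supercritical) reference."""
    return t_mid_c - t_ref_c

def in_safe_band(t_mid_c: float, t_ref_c: float,
                 low: float = 10.0, high: float = 30.0) -> bool:
    """True when the intermediate point keeps 10-30 °C of superheat:
    too low risks carrying water into the superheater, too high starves
    the water wall of cooling flow."""
    return low <= micro_superheat_margin(t_mid_c, t_ref_c) <= high

print(in_safe_band(t_mid_c=425.0, t_ref_c=402.0))  # True: 23 °C margin
```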
In supercritical units, the phase-change temperature at the separator pressure is defined as the quasi-critical temperature, which can be calculated according to the formula given in [37].

Steam Control Equation
The steam temperature control equation is the energy balance that reflects how the heat transfer share of each component changes when the boiler operating conditions vary. In actual boiler operation, the steam control equation makes it simple and convenient to qualitatively analyze and quantitatively estimate the influence of the operating conditions on the steam temperature without complete coal quality data. For simplicity, the reheater parameters in the following relations apply to both the primary and the secondary reheat steam temperature. An energy balance is written for the reheater and another for the water wall (including the economizer); the heat of vaporization r_1, the desuperheating spray ratio ∅ of each stage, and the reheater flow coefficient d are then defined, and combining these relations with the mass balance between the superheated steam flow and the water-wall flow yields the control equation (a compact reconstruction is sketched below). In the nomenclature: r is the FGR rate; B_j is the calculated coal consumption, kg/s; Q_zr and Q_s are the heat transferred from the flue gas produced by 1 kg of coal to the reheater and to the water wall (including the economizer), respectively, kJ/kg; D_zr is the mass flow rate of reheat steam and G the mass flow rate of the water wall, kg/s; q_zr is the heat absorbed by 1 kg of reheated steam, kJ/kg; r_1 is the enthalpy rise at the intermediate point; h_1 is the specific enthalpy at the separator outlet and h_gs the specific enthalpy of the feed water, kJ/kg; D_gr is the flow of superheated steam. In the reheater system there is no desuperheating water to reduce the reheat steam temperature, so q_zr equals ∆h_zr, the total enthalpy rise of the reheater, kJ/kg.

The heat transfer in the boiler is a complex process: changing the operating conditions, such as load, coal quality, excess air factor, and feed water temperature, affects the heat transfer in the boiler. The parameters in the equation are first analyzed qualitatively under variable operating conditions using the steam temperature control equation; the inflection point of each parameter is then analyzed quantitatively by simulation; finally, the resulting trend of the FGR is summarized.

Simulation Results and Analysis
In this paper, the reheat steam temperature of the double-reheat 1000 MW boiler is regulated on the flue gas side by means of FGR and a flue gas damper. FGR extracts part of the low-temperature flue gas from the boiler tail and feeds it back into the furnace through the recirculation fan, reducing the furnace temperature. The reheat steam temperature can then be adjusted to its design value by changing the ratio of heat absorbed by the convective and radiant heating surfaces. In addition, a flue gas damper is installed in the tail flue, dividing the flue gas into two streams; by opening and closing the damper, the share of flue gas in the two ducts is changed, and the resulting change of flue gas quantity changes the heat absorption of the reheater, again regulating the reheat steam temperature.
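For reference in the analysis that follows, the steam temperature control equation can be restated compactly. The block below is a sketch assembled from the nomenclature defined above; the individual energy balances follow directly from the definitions, while the placement of the spray ratio ∅ in the mass balance is an assumption:

```latex
% Sketch of the steam temperature control equation from the nomenclature above.
% The spray-ratio placement in the mass balance is an assumption.
\begin{align}
  B_j\,Q_{zr} &= D_{zr}\,q_{zr}, \qquad q_{zr} = \Delta h_{zr}
      && \text{(reheater energy balance)} \\
  B_j\,Q_{s}  &= G\,r_1, \qquad r_1 = h_1 - h_{gs}
      && \text{(water wall and economizer)} \\
  D_{gr}      &= (1+\varnothing)\,G, \qquad d = D_{zr}/D_{gr}
      && \text{(mass balance and flow coefficient)}
\end{align}
Eliminating $B_j$ and $G$ gives
\begin{equation}
  \Delta h_{zr} \;=\; \frac{Q_{zr}}{Q_{s}}\cdot\frac{r_1}{(1+\varnothing)\,d},
\end{equation}
so the reheat enthalpy rise grows with $Q_{zr}/Q_{s}$ and falls as $d/r_1$
grows, which is the qualitative behaviour invoked throughout the analysis below.
```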
Through the above system, the EBSILON model is built, and the variation of FGR with the different parameters is obtained while the reheat steam temperature is held at its design value.

The Influence of the Intermediate Point Temperature on FGR
In the reference system, the change of FGR produced by changing the intermediate point temperature under different loads is discussed. Because the system is studied in simulation software, controlling the intermediate point temperature is simpler than in actual operation: it is set with the temperature components in the software, and the simulated conditions are listed in Table 4. As shown in Figure 7, under different loads the FGR rate decreases as the intermediate point temperature increases. Table 5 gives the parameters of the reference system under different loads: with the coal consumption held constant, as the intermediate point temperature rises, the desuperheating water flow of the superheater increases, the coal-water ratio increases, and the flue gas temperature at the furnace outlet increases. In this process, coal consumption remains unchanged, and an increase in the coal-water ratio means a decrease in water flow. At the same time, the main steam temperature remains the same, so the heat passed from the coal to the water through the water wall decreases and the furnace outlet temperature rises. The radiant heat per unit fuel in the furnace decreases, the total radiant heat decreases, and the heat transfer of the convective pass increases accordingly. The reheater arrangement is purely convective, so the heat absorption of the reheater also increases and the reheat temperature rises. That is, as the intermediate point temperature and the reheat temperature increase, once the reheat temperature reaches 623 °C there is no need for much FGR, and the FGR rate decreases. At 30% load, because the design value of the intermediate point temperature is too high and 30% load is an abnormal condition, fuel injection and combustion-supporting measures are needed to raise the intermediate point temperature, after which little FGR is required, so an inflection point appears. Figure 8 shows the flowchart of the influence of the intermediate point temperature on FGR. It can be seen from Figure 8 that, for an intermediate point temperature change within a 20 °C range, the change of the FGR rate is smallest at 50% load, 4.88%, and largest at 100% load, 8.52%. The intermediate point temperature therefore has a strong influence at high load.

Influencing Factors of the FGR Rate at Fixed Intermediate Point Temperature
With an increase of FGR, the heat absorption ratio of the heating surfaces can be adjusted so that the reheat steam temperature is controlled within the safe operating range, while the exhaust gas temperature, the exhaust heat loss, and the nitrogen oxide emission concentration are all reduced.
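In the simulation, holding the reheat outlet at its design value is a simple one-dimensional search: fix the intermediate point temperature, then trim the recirculated flue gas until the reheat steam temperature reaches 623 °C. A schematic sketch of that trimming loop follows; the `reheat_outlet_temp` function is a hypothetical monotone stand-in for the EBSILON component chain, not part of the model:

```python
# Bisection search for the FGR rate r that hits the design reheat temperature.
# `reheat_outlet_temp` is a toy stand-in for the simulated boiler: more
# recirculated (cold) tail-end gas shifts heat to the convective reheater,
# so the outlet temperature rises monotonically with r in this sketch.
T_DESIGN = 623.0  # degC, design reheat steam temperature

def reheat_outlet_temp(r: float, load: float) -> float:
    """Toy monotone response standing in for the EBSILON model."""
    return 560.0 + 40.0 * load + 180.0 * r

def solve_fgr(load: float, lo: float = 0.0, hi: float = 0.5, tol: float = 1e-4) -> float:
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reheat_outlet_temp(mid, load) < T_DESIGN:
            lo = mid          # too cold: recirculate more flue gas
        else:
            hi = mid          # too hot: recirculate less
    return 0.5 * (lo + hi)

for load in (0.30, 0.50, 0.75, 1.00):
    print(f"load {load:.0%}: FGR rate ~ {solve_fgr(load):.3f}")
```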
However, a higher FGR is not always better, and the change of the FGR is related to many factors. This section discusses the influence of load, excess air coefficient, coal quality, and feed water temperature on the FGR rate with the intermediate point temperature under control. In actual operation, the temperature at the outlet of the water wall must be controlled during regulation so that the medium there is slightly superheated steam under any working condition; the specific enthalpy at the outlet of the water wall can then be controlled through the coal-water ratio.

Load
The change of the FGR with load is shown in Figure 9. The overall trend is that flue gas recirculation decreases as the load increases. When the load is higher than 75%, the steam temperature changes only gently and the FGR rate is almost unchanged; when the load is lower than 75%, the FGR rate decreases with increasing load. When the load is lower than 75%, the fuel quantity increases with the load, the flue gas temperature at the furnace outlet increases, and the radiant heat per unit fuel Q_s in the furnace decreases. The reheaters are all purely convective heat exchangers, so the heat absorption of the convective heating surface Q_zr increases. At the same time, the enthalpy rise of the intermediate point r_1 decreases with increasing load. By the steam temperature control equation, Q_zr/Q_s and d/r_1 both increase with the load. From Figure 9, the slope of the former ratio is always greater than that of the latter, so the reheat enthalpy rise ∆h_zr increases with the load. At low load, the reheat enthalpy rise ∆h_zr is lower, and more FGR is needed to shift the heat absorption ratio of the heat exchangers in the boiler so that the reheat steam temperature reaches 623 °C. As the load increases, ∆h_zr rises, and the FGR rate is reduced to keep the primary reheat steam temperature unchanged.
At 50% load, the system gradually changes from low-load to high-load operation; the trend of the reheater heat absorption flattens, the trend of Q_zr/Q_s flattens, and an inflection point appears. In the same way, the trend of the intermediate point enthalpy rise slows, so d/r_1 also flattens and shows a turning point. In the steam temperature control equation, these two ratios act together on the reheat enthalpy rise, so ∆h_zr flattens near 50% load and a turning point appears. When the system operates above 75% load, the intermediate point enthalpy decreases, which makes d/r_1 increase. According to the steam control equation, the change of ∆h_zr becomes gentle. In actual operation the steam temperature trend in the boiler is stable from high load to full load, the change of ∆h_zr is likewise slow, and the FGR is basically constant.

Coal Quality Change
In actual operation, the quality of the coal fed to the boiler varies. Two coals differing from the design coal are fed to the unit; their compositions are listed in Table 1. Under each working condition, the premise of the coal quality analysis is that the boiler evaporation remains unchanged. The influence of the two coals on the FGR rate is studied by comparison with the design coal. The variation of the FGR with load for the three coals is shown in Figure 9. At the same load, the FGR of coal 1 and coal 2 is lower than that of the design coal, which indicates that their reheat temperatures are higher than that of the design coal; coal 1 has the lowest flue gas recirculation rate. The moisture, ash, and volatile matter contents of the three coals differ considerably, while the difference in calorific value is relatively small. At the same load, with only the coal quality changed, the intermediate point temperature is unchanged, the intermediate point enthalpy rise r_1 is almost constant, and the reheater flow coefficient d remains fixed; it can also be seen from Figure 9 that the ratio d/r_1 is unchanged. Among the three coals, coal 1 has the highest moisture content. With high moisture, the theoretical combustion temperature decreases, the radiant heat per unit fuel Q_s decreases, the flue gas quantity increases, the furnace outlet flue gas temperature increases, and the heat transfer of the convective reheater Q_zr increases. From the steam control equation, the reheat enthalpy rise ∆h_zr of coal 1 is the largest, resulting in the lowest FGR for coal 1. Compared with the design coal, coal 2 is a low-volatile, high-ash coal. This characteristic makes it difficult to ignite; the highest-temperature region in the furnace moves upward, raising the furnace outlet flue gas temperature; the radiant heat per unit fuel Q_s is smaller; and the convective heat transfer is relatively larger. From the steam control equation, the reheat temperature and enthalpy increase somewhat, so the FGR of coal 2 is lower than that of the design coal. Compared with coal 2, coal 1 is a high-volatile, low-ash coal. However, from Figure 9 it can be seen that the radiant heat per unit fuel of coal 2 lies between those of the design coal and coal 1, as does the reheat enthalpy rise ∆h_zr.
However, the reheat enthalpy rise ∆h_zr of coal 1 is the largest, and its flue gas recirculation rate is the smallest. It can be concluded that, at the same load, the moisture content of the coal has the largest influence on flue gas recirculation.

Excess Air Factor
The influence of the excess air factor on the FGR rate is studied for a given coal quantity and PA flow. The excess air factor at the burner is varied by changing the SA flow. To keep the reheat steam temperature unchanged, the controller adjusts it by regulating the amount of recirculated flue gas, from which the influence on the FGR is obtained. The influence of the excess air factor on the FGR rate is shown in Figure 10, together with the design values of both under the four working conditions. Under any load, the FGR rate decreases as the excess air factor increases. As noted above, as the load increases the system gradually stabilizes and the FGR rate tends to a constant from high load to full load, so in Figure 7 the distance between the four curves under the four working conditions gradually becomes smaller. Under any load, because the intermediate point temperature is unchanged, the intermediate point enthalpy rise r_1 remains unchanged, so the ratio d/r_1 is constant. As shown in Figure 11, when the excess air factor increases, the flue gas temperature at the furnace outlet remains essentially the same, the theoretical combustion temperature decreases, and the radiation in the furnace weakens, so the radiant heat Q_s to the water wall decreases; at the same time, the flue gas quantity increases with the excess air coefficient, the flue gas temperature at the outlet of each convective heating surface increases, the specific heat capacity and heat transfer coefficient of the flue gas increase, and the heat absorption Q_zr of the convective heat exchange surfaces also increases. The ratio Q_zr/Q_s is therefore positively correlated with the excess air factor. From the steam control equation, the reheat enthalpy rise ∆h_zr likewise increases with the excess air coefficient, and the FGR behaves oppositely.
Figure 11. Influence of excess air factor on ∆h_zr and Q_zr/Q_s under different loads.

Feed Water Temperature
After the high-pressure heater is cut off and the feed water temperature is reduced at constant load, the boiler feed water needs more heat to reach the same evaporation capacity; allowing for the change of feed water temperature, the coal consumption increases slightly. Figure 12 shows the influence of the feed water temperature on the FGR under the various loads.
It can be seen from Figure 12 that, at the same load, the FGR decreases as the feed water temperature decreases. After the high-pressure heater is cut off, the feed water temperature decreases; the variation with load of the coefficients in the steam control equation and of the FGR is broadly similar to the case with the high-pressure heater in service, the turning-point condition is similar, and the reasons parallel those of the previous section, so they are not repeated here. This section mainly discusses the parameter changes caused by the change of feed water temperature at the same load. Under any load, the feed water temperature decreases after the high-pressure heater is cut off, and the intermediate point enthalpy rise r_1 increases. At the same time, the working fluid entering the reheater increases because extraction steam is squeezed out, which raises the reheater flow coefficient d, but not by enough to offset the increase in r_1. Therefore, as the feed water temperature decreases, d/r_1 decreases. After the removal of the high-pressure heater, the system load remains constant, the steam quantity is reduced, and the work done per unit of steam inevitably increases. The enthalpy at the reheat steam inlet is therefore reduced, and the heat Q_zr transferred from the flue gas of unit coal to the primary reheater increases. However, the increase of the water-wall radiant heat per unit fuel Q_s is larger than that of Q_zr, so the ratio Q_zr/Q_s decreases. For the reheater, the steam pressure is low and the specific heat capacity is small, so the sensitivity of the medium temperature rise in the reheater to changes on the flue gas side is much greater than that of the superheater, and the reheat enthalpy rise ∆h_zr increases after the removal of the high-pressure heater. Through the steam control equation, at the same load it is easy to see from Figure 12 that the change of Q_zr/Q_s before and after the high-pressure heater cut-off is smaller than that of d/r_1, so ∆h_zr increases after the cut-off and the required amount of FGR decreases, which is consistent with the theoretical analysis. It can also be seen from Figure 9 that the feed water temperature difference before and after the high-pressure heater cut-off increases with the load, and the reheat enthalpy rise ∆h_zr after the cut-off increases; in other words, no extra FGR is required to raise the reheat temperature, so the difference in FGR before and after the high-pressure heater removal also increases.
From high-load to full-load operation, the temperature in the furnace tends to flatten, and the difference between ∆h_zr and FGR remains approximately the same.

CBF System
The CBF system is based on the reference system, with the BPF used for heating part of the condensate and feed water removed. By comparing the changes of flue gas recirculation in the two systems under the same conditions, the causes of the observed behaviour are analyzed, which provides the basis for the economic analysis of systems with and without utilization of the residual heat of the flue gas. In order to study the influence of flue gas waste heat utilization on FGR, this section explores the change of FGR of the CBF system under off-design conditions, following the research process above. The results show that the trends of the FGR of the reference system and of the CBF system with the influencing parameters are approximately the same; the main purpose of this section is therefore to compare the FGR of the two systems in the same state. It can be seen from Figure 13 that, at the same load, the FGR rate and ∆h_zr of the CBF system are higher than those of the reference system. The reasons are as follows: compared with the CBF system, the reference system uses flue gas to heat part of the feed water, so extraction steam is squeezed out, the working medium entering the reheater increases, and the work done per unit of working medium is reduced, resulting in an increase of the enthalpy at the reheater inlet. If the outlet state of the reheater working medium remains unchanged, ∆h_zr then decreases, which is consistent with the simulation results. After removing the BPF, in order to keep the reheater outlet temperature unchanged, the amount of FGR must be increased to ensure efficient operation of the unit.
Figure 13. FGR and ∆h_zr change of the reference system and the cutoff bypass system.

Conclusions
In this paper, the system parameters of the boiler side are simulated. First, the influence of the intermediate point temperature on the FGR is explored. Second, with the intermediate point temperature under control, the FGR rate is adjusted under changes of load, coal quality, excess air factor, and feed water temperature to keep the reheat steam temperature stable. The conclusions are as follows:
(1) The simulation results show that the FGR of the reference system is reduced by 4.9% from low-load to high-load operation (i.e., from 30% load to 75% load). As the load continues to increase to full load (i.e., 100% load), the FGR remains basically unchanged. For the BPF system, the FGR decreases by about 14% from 30% load to 100% load.
(2) The calorific value, volatile matter, moisture, and ash content of the coal affect the reheat temperature. High moisture makes the steam temperature rise. The flue gas recirculation rates of coal 1 and coal 2 are lower than that of the design coal; coal 1 has the highest moisture content, a high reheat steam temperature, and the lowest FGR. The influence of the volatile matter and ash of coal 1 is smaller than that of its moisture.
(3) With the increase of the excess air coefficient, the FGR rate decreases. Compared with the reference system, the change of the excess air factor in the bypass system has the greatest impact on the FGR rate, about 7%.
(4) The feed water temperature is in direct proportion to the FGR: as the feed water temperature decreases, the reheat temperature increases and the FGR rate decreases. The FGR difference before and after removal of the high-pressure heater increases with the load, from about 0.38% at low load (30% load) to 2.2% at high load (75% load), and then flattens. In addition, compared with the reference system, the FGR of the CBF system has the same trends with the influencing parameters, and the FGR of the reference system is smaller than that of the CBF system under any working condition.
Conflicts of Interest: The authors declare no conflict of interest.
HSF1 Alleviates Brain Injury by Inhibiting NLRP3-Induced Pyroptosis in a Sepsis Model
Background: Sepsis, which can cause a systemic inflammatory response, is a life-threatening disease with high morbidity and mortality. There is evidence that brain injury may be related to the severe systemic infection induced by sepsis. Sepsis-induced brain injury increases the risk of mortality in septic patients and seriously worsens their survival prognosis. Although sepsis remains a research focus, clinical measures to prevent and treat brain injury in sepsis are not yet available, and the high mortality rate is still a major health burden. Therefore, it is necessary to investigate new molecules or regulatory pathways that can effectively inhibit the progression of sepsis. Objective: NLR family pyrin domain-containing 3 (NLRP3) increases during sepsis and functions as the key regulator of pyroptosis. Heat shock factor 1 (HSF1) can protect organs from the multiorgan dysfunction syndrome induced by lipopolysaccharides in mice, and NLRP3 can be inhibited by HSF1 in many organs. However, whether HSF1 regulates NLRP3 in sepsis-induced brain injury, as well as the detailed mechanism of HSF1 in brain injury, remains unknown in the sepsis model. In this research, we explore the relationship between HSF1 and NLRP3 in a sepsis model and seek to reveal the mechanism by which HSF1 inhibits the process of brain injury. Methods: In this study, we used wild-type mice and hsf1−/− mice for in vivo research and PC12 cells for in vitro research. Real-time PCR and Western blot were used to analyze the expression of HSF1, NLRP3, cytokines, and pyroptotic proteins. EthD-III staining was chosen to detect pyroptosis in the hippocampus and in PC12 cells. Results: The results showed that HSF1 is negatively related to pyroptosis. Pyroptosis in cells of brain tissue was significantly increased in the hsf1−/− mouse model compared to hsf1+/+ mice. In PC12 cells, hsf1 siRNA upregulated pyroptosis, while an HSF1-expressing plasmid inhibited it. HSF1 negatively regulated the NLRP3 pathway in PC12 cells, and the pyroptosis enhanced by hsf1 siRNA could be reversed by nlrp3 siRNA. Conclusion: These results imply that HSF1 can alleviate sepsis-induced brain injury by inhibiting pyroptosis through the NLRP3-dependent pathway in brain tissue and PC12 cells, suggesting HSF1 as a potential molecular target for treating brain injury in sepsis clinical studies.

Introduction
Sepsis is a life-threatening disease with high morbidity and mortality [1,2]. The pathogenesis of sepsis is extremely complicated owing to the dysregulation of the body's response to infection [3][4][5]. In sepsis, immune dysfunction, coagulation dysfunction, and a systemic inflammatory network effect induce damage to tissues and organs [6][7][8][9]. The mechanism of brain impairment in patients with sepsis is still unclear, and brain injury increases the risk of mortality in septic patients. There is evidence that brain injury may be related to the severe systemic infection induced by sepsis, which can cause a systemic inflammatory response [10][11][12][13].
However, the exact pathophysiology of brain injury in sepsis is complex; the possible processes causing brain injury include increased expression of proinflammatory cytokines (the interleukin (IL) family), oxidative stress, and damage to the blood-brain barrier (BBB) structure [14][15][16]. Therefore, it is necessary to investigate new molecules or regulatory pathways that can effectively inhibit the progression of sepsis toward brain injury. In recent years, heat shock proteins (HSPs) have become an interesting topic in the development of new treatments for sepsis [17]. The induction of HSPs, which are responsible for protein maturation, antioxidative protection, adiposis, etc., can be significantly affected by harmful stresses such as reactive oxygen species (ROS) and inflammation [18][19][20]. Interacting with other signaling pathways, HSPs can produce a collective response against harmful stresses like sepsis [21][22][23][24][25]. HSPs are evolutionarily conserved and involved in the cellular protective mechanism of the heat shock response (HSR), maintaining protein homeostasis in almost all eukaryotic cells. Heat shock factor 1 (HSF1) plays a key role in the HSR, and thus HSP expression is highly dependent on HSF1 regulation [26][27][28]. HSF1 can spontaneously interact with a complex of the chaperone proteins HSP90, HSP70, and HSP40, which prevents it from binding to DNA. It has been shown that HSF1 can protect organs from the multiorgan dysfunction syndrome induced by lipopolysaccharides (LPS) [29]. Furthermore, inhibiting HSF1 in mice prevents HSP induction and makes cells vulnerable to proteotoxic stress [23]. It is evident that HSF1 decreases the production of inflammatory mediators to attenuate the inflammatory responses caused by LPS, and HSF1 exerts protective effects against brain dysfunction in sepsis [30]. However, the exact mechanism needs to be clarified. The NLRP3 inflammasome mainly exists in immune and inflammatory cells following inflammatory activation [31][32][33]. NLRP3 can respond to various stimuli such as viral RNA, lysosomal damage, and ROS [34][35][36]. The NLRP3 inflammasome increases during sepsis and functions as the key regulator of pyroptosis [37][38][39]. Pyroptosis, which plays a protective role in host defense during infection, is an inflammatory form of regulated cell death (RCD) [40][41][42][43][44][45]. Mature IL-1β is cleaved by the activated NLRP3/caspase1 pathway, and the cleaved IL-1β is released through pores in the cell membrane and induces pyroptosis [46][47][48]. Abnormal pyroptosis is harmful to normal cells and organs, so it is necessary to keep pyroptosis within the limits of homeostasis. Downregulating NLRP3 can suppress pyroptosis and protect the endothelium in early sepsis, and inhibition of the NLRP3 inflammasome can prevent sepsis [38,49]. Moreover, HSF1 can regulate the innate immunity of the NLRP3 inflammasome, leading to protection against sepsis [50]. HSF1 can protect many organs, such as the liver, lung, and kidney, in sepsis animal models; however, whether HSF1 can alleviate brain injury in sepsis remains to be revealed [51]. Clinical measures to prevent and treat brain injury induced by sepsis are not yet available, which is still a major burden in treating sepsis [12][13][14][15]. It is therefore important to find a molecular pathway that alleviates NLRP3-mediated pyroptosis in brain tissue and nerve cell lines, which can shed light on the mechanism protecting brain tissue in sepsis.

Materials and Methods
2.1. Cell Culture. PC12 pheochromocytoma cells were cultured in DMEM (Life Technologies, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (FBS; Life Technologies) at 37 °C and 5% CO2. When the PC12 cells had grown to approximately 80% density, they were treated with LPS for different durations (12 h was chosen for further research) plus adenosine triphosphate (ATP) for 1 h, with small interfering RNA (siRNA) or plasmid transfection as required.

Real-Time Reverse Transcription-Polymerase Chain Reaction (PCR). Total RNA of the PC12 cells (n = 5 per group) or hippocampus (n = 3 per group) was isolated with TRIzol reagent (Life Technologies), and reverse transcription was performed using oligo-dT primers with Superior III RT Supermix (Innogene Biotech, Beijing, China). Real-time PCR was performed on an Eppendorf realplex with UltraSYBR Mixture (Toyobo Co., Osaka, Japan), and mRNA expression was quantified relative to the housekeeping gene β-actin. The primer sequences used for qPCR are listed in Table 1.

Western Blot Assay. Total protein of the PC12 cells (n = 5 per group) or hippocampus (n = 3 per group) was extracted and separated by 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) at a constant voltage of 75 V. The electrophoresed proteins were transferred to nitrocellulose membranes with a transfer apparatus (Bio-Rad, Hercules, CA, USA). The membranes were blocked with 5% nonfat milk in Tris-buffered saline (pH 7.5) with 0.05% Tween 20 for 2 hours at room temperature. Primary antibodies against HSF1, NLRP3, IL-1, caspase1, and GAPDH (Proteintech, Chicago, IL, USA), diluted 1:1000 in TBS buffer, were incubated overnight at 4 °C. The blots were washed 3 times in TBS buffer for 10 min each and then immersed for 2 h in the secondary antibody solution containing a goat anti-rabbit IgG polyclonal antibody (1:5000 in TBS buffer, Proteintech). The membranes were washed 3 times for 10 min in TBS buffer. The immunoblotted proteins were visualized using ECL Western blot luminol reagent (Advansta, Menlo Park, CA, USA) and quantified by imaging scans with a Universal Hood II chemiluminescence detection system (Bio-Rad).

2.5. Brain Tissue of Animals. The brain tissues of hsf1 knockout (hsf1−/−) and wild-type (hsf1+/+) mice were provided by Prof. Xiao Xian-zhong (Department of Pathophysiology, School of Basic Medical Science, Central South University, Changsha, Hunan, China) and have been described previously [30,52]. Protocols for animal breeding and experiments were approved by the Institutional Animal Care and Use Committee of Central South University under license number 2018sydw0378 (approval date: 25 Nov. 2018). Mice aged 16-20 weeks (weight 20-25 g) were subjected to CLP or sham operation.

2.6. Cecal Ligation and Puncture Model. Cecal ligation and puncture (CLP) was performed as previously described in Xiao's lab [30,52]. Briefly, hsf1−/− and hsf1+/+ mice (n = 3 per group) were anesthetized with 2% isoflurane. Under aseptic conditions, a midline laparotomy incision was performed to expose the cecum. One-third of the cecum was tightly ligated distal to the ileocecal valve and perforated twice with a 22-gauge needle. A small amount of feces was extruded from the puncture holes to ensure patency. The laparotomy was then sutured, followed by fluid resuscitation (normal saline, 50 mL/kg).
The mice were euthanized after 12 h, and the hippocampus was removed and fixed with paraformaldehyde or stored at −80 °C for further research.

2.7. Immunofluorescence Staining. The brain tissues were collected, fixed in 4% paraformaldehyde, and embedded in paraffin for preparation of 4 μm thick tissue sections. The dewaxed paraffin sections of hippocampal tissue from the differently treated mice were detergent-extracted with 0.1% Triton X-100 for 10 min at 4 °C before being incubated overnight at 4 °C with the HSF1 (sc-13516, Santa Cruz, Dallas, TX, USA) and NLRP3 (19771-1-AP, Proteintech) polyclonal antibodies (1:100). The slides were washed with phosphate-buffered saline (PBS) and incubated with 150 μL of the FITC (SA00003-11)- or Cy3 (SA00009-2)-tagged secondary antibody (1:200) at room temperature for 2 hours. The slides were then incubated with an immunofluorescence polyclonal antibody for 1 h. The nuclei were stained with DAPI (Sigma-Aldrich, St. Louis, MO, USA). Images were captured with a laser scanning microscope (Servicebio, Wuhan, China). The proportion of red-positive/green-positive cells relative to blue-positive cells was calculated using ImageJ (NIH, Baltimore, MD, USA).

2.9. Statistical Analysis. SPSS 19.0 was used for statistical analysis. The results are presented as mean ± standard deviation (SD) and were analyzed using an independent t-test for two groups. One-way analysis of variance (one-way ANOVA) was used for comparing multiple groups, followed by multiple comparison tests (Bonferroni post hoc tests). Statistical significance was defined as p < 0.05.

Increased Expression of HSF1 in Hippocampal Tissue of Septic Mice. Previous results [52] proved the increased expression of HSF1 in the lung, kidney, and liver of septic mice constructed by CLP, but the expression of HSF1 in the hippocampus of CLP septic mice remained unknown. In this research, we collected the hippocampal tissue of HSF1 wild-type (hsf1+/+) control mice and of septic mice constructed by CLP. The expression of HSF1 in hippocampal tissue was detected by Western blot. HSF1 was markedly increased in the CLP sepsis model (Figure 1). Because the exact mechanism of HSF1 in the process of sepsis had not been demonstrated, the expression of inflammatory factors was further assessed in the hippocampal tissue of hsf1+/+ mice. The results revealed that the expression of inflammatory factors such as caspase1 and NLRP3 was significantly elevated along with HSF1, and the cleavage of IL-1β was also increased in CLP septic mice (Figure 1).

Pyroptosis Is Elevated in Brain Tissue of hsf1−/− Septic Mice. Inflammatory molecules such as IL-1β, caspase1, and NLRP3 play important roles in pyroptosis [46,55]. As the expression of HSF1 in the CLP sepsis model was related to IL-1β, caspase1, and NLRP3 (Figure 1), pyroptosis in the brain tissue of the differently treated mice was examined. The EthD-III staining experiments found EthD-III-positive cells in the dentate gyrus and among pyramidal cells in hippocampal and cortex tissues. The pyroptotic cells looked like cabbages or fried eggs, with the nucleus located in the center [25]. The results showed that the level of pyroptosis was elevated in the hippocampal and cortex tissues of both the hsf1−/− and the hsf1+/+ CLP models compared with the respective sham controls.
Interestingly, pyroptosis in hippocampal and cortex tissues increased in the hsf1−/− CLP model compared to the hsf1+/+ CLP model, and hsf1−/− mice also showed more pyroptosis in hippocampal and cortex tissues than hsf1+/+ mice (Figure 2).

hsf1−/− Septic Mice Displayed Enhanced NLRP3 Expression in the Brain. The CLP sepsis model and the sham model were established in hsf1+/+ and hsf1−/− mice, and the mice were euthanized after 12 h. Brain tissue was collected and divided into two parts: one hemisphere containing hippocampal tissue was used for Western blot and real-time PCR, while the other hemisphere was used for immunofluorescence staining. Figure 3 shows that the mRNA and protein levels of NLRP3 were elevated in the hippocampus of both hsf1−/− and hsf1+/+ CLP mice, and that NLRP3 protein expression was higher in hsf1−/− CLP mice than in hsf1+/+ CLP mice. The immunofluorescence experiments also showed significantly increased NLRP3 in the brain tissue of hsf1−/− septic mice compared with hsf1+/+ septic mice (Figure 4). HSF1 can protect organs from the multiorgan dysfunction syndrome induced by LPS [56]. Combined with the decreased level of pyroptosis in hsf1+/+ CLP mice compared with hsf1−/− CLP mice (Figure 2), we supposed that HSF1 might alleviate brain injury by inhibiting NLRP3-dependent pyroptosis in the brain of the sepsis model.

HSF1 Negatively Regulated NLRP3 and Pyroptosis in PC12 Cells In Vitro. To reveal the mechanism by which HSF1 alleviates sepsis in the CNS, the PC12 cell model was used for cell experiments. LPS is commonly used to induce inflammation, and LPS+ATP-treated cells show obvious pyroptosis, constituting the classic pyroptosis model [57,58]. Here, PC12 cells were treated with LPS for different durations and with ATP for 1 hour to generate an inflammation-related pyroptosis model, which was used to assess the expression of HSF1 and NLRP3 in the classic pyroptosis model (Figure 5(a)). LPS treatment for 12 hours was chosen for further research. qPCR results revealed that LPS+ATP stimulation induced more NLRP3 and IL-1β in PC12 cells transfected with hsf1 siRNA than in PC12 cells without hsf1 siRNA interference (Figure 5(b)). Similar results were obtained in PC12 cells transfected with hsf1 siRNA or plasmid: the mRNA expression of NLRP3 and IL-1β was inhibited by the hsf1 plasmid but enhanced by hsf1 siRNA (Figure 6(a)). Western blot also proved that HSF1 negatively regulates NLRP3 protein expression (Figures 6(b) and 6(d)). Similarly, hsf1 siRNA enhanced pyroptosis in PC12 cells, which could be reversed by nlrp3 siRNA (Figures 6(c) and 6(e)).

HSF1 Regulates Pyroptosis Dependent on NLRP3 in PC12 Cells In Vitro. The nlrp3 siRNA was cotransfected into PC12 cells with hsf1 siRNA. The real-time PCR and Western blot results showed that the upregulation of IL-1β and caspase1 in hsf1-silenced PC12 cells was reversed by transfection with nlrp3 siRNA (Figures 7(a) and 7(c)), and hsf1 siRNA enhanced pyroptosis in PC12 cells, which could be reversed by nlrp3 siRNA (Figures 7(b) and 7(d)). This proved that HSF1 can inhibit NLRP3-dependent pyroptosis.

Discussion
The major characteristics of sepsis are systemic inflammatory responses accompanied by multiple organ dysfunction syndrome. It has been shown that hsf1−/− mice exhibit more neutrophil infiltration in the lungs and kidneys [56].
The hsf1−/− septic mice exhibited a greater degree of lung, liver, and kidney tissue damage, with increased fibrin/fibrinogen deposition, compared with septic wild-type mice [52]. However, the degree of brain impairment in hsf1−/− septic mice had not been verified. Brain injury in the acute phase of sepsis leads to serious sequelae, which are a key factor in the prognosis of patients with sepsis. The brain can be impaired by increased inflammatory cytokines in sepsis. Unfortunately, there is no effective way to prevent or treat this grievous disease, and the pathogenesis of brain injury in sepsis remains to be clarified [59,60]. The occurrence of brain injury in sepsis is closely related to inflammation, and inhibiting the expression of inflammatory factors in brain tissue can alleviate sepsis encephalopathy [13]. Sepsis induces significant brain disorders and dysfunction of neurons and of the synaptic plasticity of the cerebral cortex and hippocampus [61,62]. The hippocampus and cortex are vulnerable to cell death; it is therefore necessary to protect the hippocampus from cell death in septic patients [61]. Cell death is controlled partially by RCD pathways, which comprise apoptosis, necroptosis, ferroptosis, pyroptosis, etc. [53,63-72]. Pyroptosis has been proved to be closely related to sepsis-induced organ damage, but whether inhibiting pyroptosis could be a therapeutic tactic for the brain in sepsis still requires more evidence [73,74]. In this study, we focused on exploring candidate molecules or pathways that could regulate pyroptosis. The NLRP3-dependent caspase1/IL-1β pathway is the key regulator of pyroptosis [75], and upregulation of NLRP3 expression can promote inflammation and pyroptosis [38]. As the inflammatory form of RCD, pyroptosis releases IL-1β and IL-18 in the early stages to initiate the events of sepsis [76]. HSF1, the key regulator of the heat shock response, protects against harmful stimuli and can inhibit the expression of NLRP3 [77,78]. HSF1 can protect the lung, liver, and kidney from the multiorgan dysfunction syndrome induced by sepsis, but its function and expression in the brain had not been demonstrated [52,56]. As a result, the relationship between HSF1 and NLRP3 in the CLP septic mouse model was investigated. Unexpectedly, we found that HSF1 is intrinsically increased in the hippocampus of CLP septic mice (Figure 1) and positively related to the changes of the inflammatory factors NLRP3, caspase1, and cleaved IL-1β, which differs significantly from some previous reports claiming that HSF1 downregulates NLRP3 [50,77,79]. Although the expression of HSF1 increased in the septic mouse model, we remain confident that the elevated HSF1 antagonizes sepsis in the CLP model. To certify the exact roles of HSF1 in the brain during sepsis, wild-type and hsf1−/− mice were used to build the CLP model, and the brain tissue collected from the differently treated mice was used for functional exploration. The EthD-III experiments showed that HSF1 is also negatively related to pyroptosis: pyroptosis in brain tissue cells was significantly increased in the hsf1−/− CLP model compared with the hsf1+/+ CLP model, and hsf1−/− mice also showed more pyroptosis in cells of brain tissue than hsf1+/+ mice (Figure 2). In further research, PC12 cells were utilized to simulate the neuron model, which revealed that hsf1 siRNA upregulates pyroptosis while the HSF1 plasmid inhibits it (Figure 6).
It has been confirmed that pyroptosis can initiate sepsis [38,76,80], implying that HSF1 could alleviate sepsis-induced brain dysfunction by blocking this initiating event through the inhibition of pyroptosis. Interestingly, the levels of NLRP3/caspase1/cleaved IL-1β were significantly elevated in hsf1−/− CLP mice compared with hsf1+/+ CLP mice (Figures 3 and 4). These results indicate that the elevated expression of HSF1 in the CLP model protects mice from inflammatory factors (Figures 1, 3, and 4), and that the lack of HSF1 makes the inflammation more aggressive in CLP mice (Figures 3 and 4). The same results were found in the in vitro model: nlrp3 mRNA increased in PC12 cells transfected with hsf1 siRNA (Figure 5). Further research revealed that nlrp3 siRNA can reverse the hsf1 siRNA-enhanced pyroptosis in PC12 cells (Figure 7). Taken together, these data suggest that HSF1 may protect against pyroptosis by inhibiting NLRP3. The mechanism by which HSF1 regulates NLRP3 has been reported in a few articles: HSF1 can indirectly control the expression of NLRP3 by activating Snail and regulating the TRX1/TXNIP and TRX1/ASK1 complexes, or by promoting β-catenin translocation from the cytoplasm to the nucleus, which inhibits XBP1 activation in response to TLR/TRAF6 stimulation in macrophages by enhancing β-catenin transcriptional activity [50,77]. Our recent research proved that HSF1 is involved in the activation of the NLRP3 inflammasome in septic acute lung injury (ALI) [81]: HSF1 can suppress NLRP3 inflammasome activation at the transcriptional and posttranslational modification levels and can inhibit caspase1 activation and IL-1β maturation via inhibition of the NLRP3 pathway [81]. This work supports those previous results and indicates that HSF1 prevents brain injury in sepsis by inhibiting sepsis-induced pyroptosis through the NLRP3-dependent caspase1/IL-1β pathway.

Conclusion
Our work showed that HSF1 plays an important role in sepsis-induced brain injury by regulating NLRP3. Although the exact mechanism by which HSF1 inhibits the NLRP3/caspase1/cleaved IL-1β pathway was not examined in this study, we conclude that HSF1 prevents brain injury in sepsis by inhibiting sepsis-induced pyroptosis through the NLRP3-dependent caspase1/IL-1β pathway, suggesting HSF1 as a potential molecular target for treating brain injury in sepsis clinical studies.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Ethical Approval
Protocols for animal breeding and experiments were approved by the Institutional Animal Care and Use Committee of Central South University under license number 2018sydw0378 (approval date: 25 Nov. 2018).

Conflicts of Interest
The authors declare no conflict of interest.
Long-range Dependencies Learning Based on Non-Local 1D-Convolutional Neural Network for Rolling Bearing Fault Diagnosis
In the field of data-driven bearing fault diagnosis, the convolutional neural network (CNN) has been widely researched and applied due to its superior feature extraction and classification ability. However, the convolutional operation can only process a local neighborhood at a time and thus lacks the ability to capture long-range dependencies. Building an efficient learning method for long-range dependencies is therefore crucial to comprehending and expressing signal features, considering that vibration signals obtained in real industrial environments always have strong instability, periodicity, and temporal correlation. This paper introduces the non-local mean into the CNN and presents a 1D non-local block (1D-NLB) to extract long-range dependencies. The 1D-NLB computes the response at a position as a weighted average of the features at all positions. Based on it, we propose a non-local 1D convolutional neural network (NL-1DCNN) aimed at rolling bearing fault diagnosis. Furthermore, the 1D-NLB can simply be plugged into most existing deep learning architectures to improve their fault diagnosis ability. Under multiple noise conditions, the 1D-NLB improves the performance of the CNN on the wheelset bearing dataset of a high-speed train and on the Case Western Reserve University bearing dataset. The experimental results show that the NL-1DCNN exhibits superior results compared with six state-of-the-art fault diagnosis methods.

Introduction
Rolling bearings are pivotal components of rotating machinery; their damage directly degrades the performance of the mechanical system and can cause safety problems as well as enormous economic losses. Long-term operation under adverse conditions can easily cause different kinds of damage such as cracks, abrasion, and gaps. Therefore, health condition monitoring of rolling bearings is crucial to protect the machinery system from safety problems [1]. With the development of the internet of things and the demand for long-term condition monitoring, companies have accumulated enormous amounts of industrial data. Since data-driven machine learning methods can extract features of the machinery system from historical data automatically, they have been widely applied in the field of rolling bearing fault diagnosis. In general, traditional diagnosis methods [2][3][4][5] mainly include two steps: (1) feature extraction and (2) fault recognition. Feature extraction [2,6] obtains features that reflect the state of the machine through a feature extraction algorithm; fault recognition [3,7] uses a classifier algorithm to identify and classify the obtained features. However, manually extracted statistical features can hardly characterize the complex dynamic features of vibration signals. Moreover, most of these classifier algorithms are shallow models that cannot learn complex non-linear relationships effectively, so they easily make wrong judgments. In recent years, deep learning has attracted more and more attention in the field of fault diagnosis [8][9][10][11]. Compared with traditional methods, deep learning can extract features from lower to higher levels automatically through multiple nonlinear operations, and thus it can diagnose with higher intelligence.
In particular, the convolutional neural network (CNN) has achieved remarkable success in fault diagnosis tasks due to its unique feature learning mechanism [12][13][14]. For example, Ince et al. [15] proposed a new one-dimensional CNN (1DCNN) for the real-time fault diagnosis of motors. Peng et al. [16] used a 1D deep residual CNN to diagnose the fault status of train wheelset bearings. Chen et al. [17] combined the CNN with an extreme learning machine to improve the fault diagnosis performance of the network. These methods are based on the 1DCNN [15][16][17][18][19][20][21], which mainly takes signals as input and automatically extracts fault features and diagnoses fault types through 1D convolution. In addition, Xia et al. [22] proposed a multi-sensor-based CNN fault diagnosis method to learn spatial and temporal information from multiple sensors simultaneously to obtain better results. Wen et al. [23] used a two-dimensional CNN (2DCNN) to diagnose the health status of various mechanical components. These methods are based on the 2DCNN [22][23][24], which recombines the 1D signal into a 2D image or time spectrum, and then uses a 2D network architecture to get the final diagnosis results. However, compared with the 1DCNN, the network structure and operation process required by the 2DCNN are more sophisticated. Therefore, in this paper, we use the 1DCNN to address the fault diagnosis of rolling bearings. Even though the CNN has been successfully applied in bearing fault diagnosis, it was initially introduced to solve computer vision problems such as image segmentation [25] and face recognition [26]. To accomplish these tasks, the CNN concentrates on the relevant information within a local neighborhood and therefore pays insufficient attention to long-distance correlations. Nevertheless, the vibration signal of rotating machinery is significantly different from an image: it is a temporal signal with strong periodicity. In addition, because of complicated operating conditions, these signals always exhibit strong nonlinearity and instability. Therefore, there is a strong correlation among different time points, and a large quantity of valuable information may be hidden within these periodicities and correlations. For example, as shown in Fig. 1, when a bearing has a local fault, the faulty part and other components produce periodic short-duration impacts that excite the bearing system into high-frequency free-decay vibration at its resonance frequency. Therefore, if we only consider the signal within a local region, diagnosis is more likely to be interfered with by random factors [27]. Apart from this, comparing the amplitudes of impulse points across different periods and positions is a practical way to fully understand the information in the signal. The non-local mean (NLM) algorithm was first introduced by Buades et al. [28] in the field of image de-noising. This algorithm first breaks the image into patches of the same size. Then, it replaces the value at one pixel with a weighted average based on the similarity between the patch to which the pixel belongs and other patches. In that way, the NLM can use the dependencies between one pixel and all other pixels. Therefore, this method has a strong ability to capture long-range dependencies and has shown extraordinary performance on image de-noising. Besides, the NLM is also widely used for de-noising 1D time-series signals and has achieved impressive results.
The contributions of this paper are summarized as follows: 1) Inspired by the NLM algorithm in the field of signal de-noising, this paper proposes a non-local module based on the 1DCNN for capturing long-term dependencies of signals. 2) The proposed 1D-NLB can be integrated into any 1DCNN as an efficient, simple, and universal component, thereby improving the diagnosis performance of the network. 3) This paper proposes a 1DCNN based on the 1D-NLB to diagnose the health status of rolling bearings. 4) The NL-1DCNN has been extensively verified on the wheelset bearing dataset and the Case Western Reserve University (CWRU) bearing dataset [32], where it achieves better diagnostic results than six state-of-the-art fault diagnosis methods. The rest of this paper is organized as follows. In Section II, the realization of the NLM algorithm on signals is described. In Section III, the proposed NL-1DCNN is described in detail. Section IV verifies the effectiveness and superiority of the NL-1DCNN. Section V summarizes the whole paper. Realization of NLM on Vibration Signal The NLM algorithm for signal de-noising is mainly based on the following procedure. First, a neighborhood block is constructed with each vibration signal point as its center, and then structural information similar to the neighborhood block is searched for in the global range of the signal. Finally, the information is weighted and averaged to eliminate the noise in the vibration signal. Suppose the vibration signal of a faulty rolling bearing is expressed as: y(t) = x(t) + n(t), where x(t) is the fault impulse signal, n(t) is the noise generated by other factors such as resonance, and y(t) is the observed signal. The mission of de-noising is to eliminate n(t) from the observed vibration signal y(t) so that the original fault impulse signal x(t) can be recovered. For any position t, the estimate \hat{K}(t), which is the weighted average of signal values within a predefined search neighborhood N(t), is given by: \hat{K}(t) = \frac{1}{Z(t)} \sum_{s \in N(t)} \omega(t,s)\, y(s), where \omega(t,s) is the weight associated with the s-th searched point and the t-th desired point in N(t), which represents the search window centered on position t, and Z(t) = \sum_{s \in N(t)} \omega(t,s) is the normalizing factor. The weight, as described in [33], is given by: \omega(t,s) = \exp\left(-\frac{d^2(t,s)}{\lambda^2}\right), where λ is the bandwidth parameter and ∆ represents the local patch of L∆ points surrounding position t; the patch surrounding position s also contains L∆ points; d^2(t,s) represents the sum of the squares of the Euclidean distances between the local patches centered on the signal points t and s. The novelty of NLM is that the weight between two local patches relies on their similarity rather than their physical distance [34]. Therefore, the de-noising process of NLM is non-local. [Fig. 2: The universal architecture of the 1D-NLB; "×" denotes batch matrix multiplication and "+" denotes element-wise addition. This module captures the long-range dependencies of the input signal.] The Proposed NL-1DCNN Fault Diagnosis Method In this section, the generic definition of the non-local operation in the CNN is first introduced. Then we give an instance based on this definition. Finally, the NL-1DCNN for rolling bearing fault diagnosis is introduced in detail. Definition of 1D Non-Local Different from the implementation of the NLM algorithm in the field of vibration signal de-noising, the non-local operation in the 1DCNN takes feature signals as input, and then outputs feature signals containing global feature information.
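To make the estimator above concrete, the following is a minimal NumPy sketch of 1D non-local means de-noising under the formulation just given; the function name, default parameter values, and the simplified boundary handling are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def nlm_1d(y, patch_half=5, search_half=50, bandwidth=0.1):
    """Minimal 1D non-local means: each point is replaced by a
    similarity-weighted average of points in its search window."""
    n = len(y)
    out = y.copy()
    for t in range(patch_half, n - patch_half):
        patch_t = y[t - patch_half:t + patch_half + 1]
        lo = max(patch_half, t - search_half)
        hi = min(n - patch_half, t + search_half + 1)
        weights, values = [], []
        for s in range(lo, hi):
            patch_s = y[s - patch_half:s + patch_half + 1]
            d2 = np.sum((patch_t - patch_s) ** 2)      # squared patch distance
            weights.append(np.exp(-d2 / bandwidth ** 2))
            values.append(y[s])
        w = np.asarray(weights)
        out[t] = np.dot(w, values) / w.sum()           # normalized weighted average
    return out
```

Note that the weight depends only on patch similarity, not on the distance between t and s, which is what makes the estimator non-local.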
Therefore, we define a generic non-local operation in the 1DCNN as: m_i = \frac{1}{\kappa(n)} \sum_{\forall j} f(n_i, n_j)\, g(n_j), where i is the index of a position on the output feature signal, and the response at that position is the value obtained after the non-local operation. j is the index that enumerates all possible positions. n is the input feature signal and m is the output, which has the same length as n. The function f is responsible for calculating the dependency between index i and all indexes j of the signal. The function g computes the response of the input signal at position j. The response is normalized by a factor κ(n). This operation takes the relationship between position i and every position j into consideration and regards the weighted average of the responses as the output. Therefore, it can make the network perceive long-range dependencies among different regions of the input feature signal at one time. By comparison, the convolutional operation can only learn features within a local neighborhood whose size equals that of the convolution kernel. Likewise, a recurrent neural network (RNN) can only capture dependencies among neighboring time steps. The 1D non-local operation is very simple. The basic idea is to calculate the long-range correlation between the current position and other positions in the input signal, so that the algorithm can quickly capture both the detailed local information and the global information of the input signal. In addition, this operation can be easily implemented in the CNN with only a small increase in parameters. 1D Non-Local Block According to the above definition, the pivots of the 1D-NLB operation are the function f, which calculates similarity, and the function g, which computes the response. Thus, the realization of these two functions is highly related to the performance of the 1D-NLB. In this paper, for simplicity, we only consider g as a linear transformation, which means g(n_j) = W_g n_j, where W_g is a weight matrix to be learned. According to the implementation of non-local operations in [28,31], a natural choice of f is the Gaussian function. For the convenience of capturing the dependencies among different regions in the signal, we define f as: f(n_i, n_j) = e^{n_i^T n_j}, where n_i^T n_j represents dot-product similarity, which is much easier to realize on various neural network platforms and does not add any training parameters. Thus, the normalizing factor is defined as: \kappa(n) = \sum_{\forall j} f(n_i, n_j). Fig. 2 illustrates the realization of the 1D-NLB in the 1DCNN. n is the input feature signal, n ∈ R^{B×W×C}, where B is the batch size, W is the length of the signal and C is the number of channels. At the very beginning, n is multiplied by n^T to get the matrix v, v ∈ R^{B×W×W}. Then, v is fed into a softmax layer to obtain the dependencies between one position of n and all other positions. The result can be expressed as: \hat{v} = \mathrm{softmax}(n n^T). Meanwhile, n goes through a 1×1 convolutional layer to halve its channels. After that, it is multiplied by \hat{v} and passes through another 1×1 convolutional layer so that the number of channels recovers to C. Thus, the output m is calculated by: m = W_2 (\hat{v} (W_1 n)), where W_1 and W_2 denote the channel-halving and channel-restoring 1×1 convolutions, respectively. At last, in order to optimize the feature signal while retaining the original information, we introduce a residual connection on this basis to form a complete 1D-NLB. As a result, the output is rewritten as: m = W_2 (\hat{v} (W_1 n)) + n. The method we propose computes the dependencies between one local region of the input signal and the entire signal. Besides, this information can be extracted with only a very small increase in training parameters. The 1D-NLB can thus be plugged into most existing 1DCNNs very simply.
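As an illustration of the block just described, here is a minimal tf.keras sketch of the 1D-NLB (dot-product similarity, channel-halving 1×1 convolution, channel restoration, and a residual connection); the function name and exact layer arrangement are our assumptions, not the authors' released code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def nlb_1d(n, channels):
    """1D non-local block on a (batch, width, channels) feature signal."""
    # Pairwise dot-product similarity: (B, W, C) x (B, C, W) -> (B, W, W).
    v = tf.matmul(n, n, transpose_b=True)
    v = tf.nn.softmax(v, axis=-1)                  # normalized dependencies
    g = layers.Conv1D(channels // 2, 1)(n)         # W_1: halve the channels
    y = tf.matmul(v, g)                            # weighted average over all positions
    y = layers.Conv1D(channels, 1)(y)              # W_2: restore the channels
    return layers.Add()([y, n])                    # residual connection
```

Because the output has the same shape as the input, the block can be dropped between any two convolutional modules of an existing 1DCNN.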
It can also be embedded into any layer of the network to combine long-range dependencies with short-range information at different levels. Therefore, it allows us to build an architecture with a strong ability to learn the global information contained in a signal. Non-Local 1D-Convolutional Neural Network The 1D-NLB can be simply embedded in the 1DCNN to improve its learning ability for long-range dependencies of input signals. Based on the 1D-NLB, we propose the NL-1DCNN, which aims at rolling bearing fault diagnosis. The universal architecture of the NL-1DCNN is shown in Fig. 3. The NL-1DCNN takes a 1D vibration signal as input. First, two shallow convolution modules are used to learn the shallow feature information in the signal. Subsequently, a 1D-NLB is used to learn the long-range dependency features of the signal. Through the feature learning of the shallow convolution modules, the input signal of the 1D-NLB encodes enough semantic information, so that the 1D-NLB can obtain the temporal correlations in the signal with higher effectiveness and accuracy. This is why two shallow convolution modules are used before the 1D-NLB. In addition, the NL-1DCNN also uses multiple convolution modules to encode the high-level semantic features of the signal, so that different types of signals are sufficiently distinct. Each convolutional module consists of a 1D convolutional layer, a batch normalization layer and a ReLU activation layer. We implement down-sampling by setting a large convolution stride, which minimizes the corresponding information loss. For the classification stage, the learned features are sent to a global average pooling (GAP) [35] layer followed by a softmax activation. Assuming there are H different classes, the output probability Q_h for class h is calculated by: Q_h = \frac{e^{q_h}}{\sum_{k=1}^{H} e^{q_k}}, where q_h is the input of the softmax layer. The diagnosis output is the fault label corresponding to the largest Q_h. The detailed architecture of the NL-1DCNN is demonstrated in TABLE I. The length of the input signal of the NL-1DCNN is 2048 × 1, which ensures that the input signal contains a complete period. Six convolutional modules are applied in the NL-1DCNN in total. Among them, the first two convolution modules are used to capture the shallow information of the input signal, then the 1D-NLB is used to learn long-range dependency features, and the last four convolution modules are used to learn high-level semantic features. The number of channels of the network's convolution modules gradually increases from 16 to 128. The stride of the first layer is set to 4 and the stride of the other layers is set to 2, so that the dimension of the feature signal is finally compressed to 16 × 128. Inspired by [16,19,36], we use wide convolution kernels to learn more fault-related features of the signal. In order to balance the feature extraction capability and the number of parameters of the network model, we set the size of the convolution kernels to gradually decrease, that is, the kernel size is gradually reduced from 24 × 1 to 3 × 1. The proposed network model thus uses large convolution kernels in shallow layers to obtain sufficient shallow features from the signal. The extracted features are then filtered and abstracted using small convolution kernels in the deep layers to build high-level features that can be used for device health identification.
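Putting the pieces together, the following is a minimal tf.keras sketch of an NL-1DCNN matching the stated constraints (2048-point input, six convolution modules, channels growing from 16 to 128, kernels shrinking from 24 to 3, stride 4 then 2, the 1D-NLB after the second module, and the GAP classification head described just below); the intermediate kernel sizes and channel counts are our guesses, since only the endpoints are stated in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_module(x, filters, kernel, stride):
    """Conv1D + BatchNorm + ReLU, down-sampling via the stride."""
    x = layers.Conv1D(filters, kernel, strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def nlb_1d(n, channels):
    """1D non-local block (see the previous sketch)."""
    v = tf.nn.softmax(tf.matmul(n, n, transpose_b=True), axis=-1)
    y = tf.matmul(v, layers.Conv1D(channels // 2, 1)(n))
    return layers.Add()([layers.Conv1D(channels, 1)(y), n])

def build_nl_1dcnn(num_classes=12):
    inp = layers.Input(shape=(2048, 1))
    x = conv_module(inp, 16, 24, 4)    # 2048 -> 512
    x = conv_module(x, 32, 16, 2)      # 512 -> 256
    x = nlb_1d(x, 32)                  # 1D-NLB after the second module
    x = conv_module(x, 64, 9, 2)       # 256 -> 128
    x = conv_module(x, 64, 6, 2)       # 128 -> 64
    x = conv_module(x, 128, 3, 2)      # 64 -> 32
    x = conv_module(x, 128, 3, 2)      # 32 -> 16
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inp, out)
```

With these choices the final feature map is 16 × 128, as stated in the paper's description of TABLE I.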
Apart from this, we use a GAP layer to compress the signal into a vector, which decreases the number of trained parameters dramatically compared with using a fully connected layer. The probability is output by the softmax function. Experiment Verification In this section, we perform ablation studies and comparative experiments on the wheelset bearing dataset and motor bearing data from the CWRU to verify the effectiveness and superiority of the proposed non-local operation and fault diagnosis method. Experiment Setup Deep learning based methods need a large quantity of samples to optimize their parameters, and the process of slicing the training samples with overlap proposed by [16,19] can enormously increase the number of training samples. Therefore, we adopt the same method for data augmentation. The length of each sample is 2048 while the step size of the sliding segmentation is set to 128 in our experiment. 2048 is greater than the number of sampling points in one rotation cycle of the device, so each sample contains complete cycle information. The proposed NL-1DCNN is realized in the Keras library under Python 3.5. The training and testing processes are performed on a workstation with an Intel Core i7-6850K CPU and a GTX 2080 GPU. In addition, in z-score normalization we divide by the variance instead of the standard deviation; we find that this makes the network achieve better performance. During the training process, we adopt the Adam optimizer and the learning rate is set to 0.0001. The batch size is 196 and 96 on the wheelset bearing dataset and the motor bearing dataset, respectively. In this paper, we adopt three generic performance indicators: accuracy, recall and precision. To better simulate the strong noise disturbance of bearings in real circumstances, we added additional Gaussian white noise to the raw signals. The SNR is defined as: SNR_{dB} = 10 \log_{10}\left(\frac{P_{signal}}{P_{noise}}\right), where P_signal and P_noise are the power of the signal and the noise, respectively. In this paper, the NL-1DCNN is compared with six state-of-the-art deep learning based methods. First, we compare the NL-1DCNN with the dislocated time series CNN (DTS-CNN) proposed by Liu et al. [27]. The DTS-CNN uses a dislocate layer, so that the network can learn the correlation between different time series in the signal to a certain extent. In the experiments, m, n, and k of the DTS-CNN are set to 10, 512, and 30, respectively, and a dropout layer with a dropout rate of 0.2 is used in the fully connected layer to suppress overfitting. In addition, we compare the NL-1DCNN with LSTM-based methods. The LSTM has a good ability to learn temporal correlation features. In this experiment, the LSTM used has two LSTM cells, where its time steps are 64 and the input dimension is 32. Finally, we also selected two state-of-the-art 1DCNN-based fault diagnosis methods, namely the wide first-layer kernels CNN (WDCNN) [19] and the residual-learning-based CNN (ResCNN) [18], which use a wide convolution kernel and a residual network structure, respectively; and two state-of-the-art 2DCNN-based fault diagnosis methods, namely Wen-CNN [23] and the hierarchical learning rate adaptive deep CNN (ADCNN) [24], which both convert 1D signals into 2D images and then use different structures of 2D networks to learn fault features. To fairly compare the performance of the different methods, we trained and tested them under the same experimental conditions, and four-fold cross validation is applied to verify the performance of every method.
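The two data-preparation steps described above, overlapped slicing and noise injection at a target SNR, can be sketched as follows; the function names are ours, and this is a minimal illustration of the stated settings (window 2048, step 128) rather than the authors' code.

```python
import numpy as np

def slide_segment(signal, length=2048, step=128):
    """Overlapped sliding-window slicing for data augmentation."""
    return np.stack([signal[i:i + length]
                     for i in range(0, len(signal) - length + 1, step)])

def add_noise_snr(signal, snr_db):
    """Add Gaussian white noise so that 10*log10(P_signal/P_noise) = snr_db."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return signal + np.random.randn(*signal.shape) * np.sqrt(p_noise)

# Example: build -6 dB training samples from one raw record.
raw = np.random.randn(100000)            # stand-in for a measured signal
samples = add_noise_snr(slide_segment(raw), snr_db=-6.0)
```

At SNR = −6 dB this gives a noise power of 10^0.6 ≈ 3.98 times the signal power, matching the figure quoted later in the paper.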
Data description The wheelset bearing test rig provides the experimental data. As shown in Fig. 4, the wheelset bearing test rig is mainly composed of a drive motor, a belt transmission device, a lateral loading set, a vertical loading set and two fan motors. The vertical and the lateral loading sets are designed to mimic the two-dimensional loads in real train operation. An axle and its two supporting bearings are assembled on the test rig. Acceleration sensors are used to collect the vibration signals of the rolling bearings; they are fixed at the 9 o'clock and 12 o'clock positions of the axle box, and the sampling frequency is 5120 Hz. The experimental bearings are double-row tapered roller bearings. The photos and models of the faulty bearings are shown in Fig. 5. These faulty bearings were naturally produced during the operation of high-speed trains. Various faults occur in wheelset bearings during real operation. Therefore, 12 different typical fault conditions combined with health conditions are set. The faults are distributed in the inner race, outer race, rolling elements and cage of the wheelset bearing, and the severity of the faults differs. Detailed information on the tested wheelset bearings is provided in tabular form. Fig. 6 displays the raw vibration signals of the 12 health conditions of the wheelset bearing dataset. In addition, in order to illustrate the influence of noise on the vibration signals, we show the vibration signals after adding different degrees of noise. As shown in Fig. 7, we added 6 dB, 0 dB and -6 dB Gaussian white noise to the vibration signals of two fault categories. It can be seen that when a small amount of noise is added, the noise has little effect on the vibration signal. However, when a large amount of noise is added, the original waveform of the vibration signal is completely destroyed by the noise, so that it is difficult to distinguish. In actual situations, noise is inevitable. Therefore, in the following experiments, we will also discuss the influence of noise on the deep learning model and the anti-noise performance of our proposed method. Influence of the position of 1D-NLB The proposed 1D-NLB can be embedded in any layer of the network to capture long-range dependencies of the feature signal. However, because the length and semantic level of the feature signal in different layers are not consistent, the features learned by the 1D-NLB in these layers are also different. Therefore, embedding the 1D-NLB at different locations in the network yields different diagnostic performance. In order to explore the impact on performance of embedding the 1D-NLB in different layers of the network, in this experiment we set up a total of seven different network structures, which are the 1DCNN (the same structure as the NL-1DCNN but without the 1D-NLB), NL-1DCNN-1, NL-1DCNN-2, ..., NL-1DCNN-6, in which the number after the name indicates the layer after which the 1D-NLB is embedded. With SNR = −6 dB, we performed experiments on these seven methods. TABLE III and Fig. 8 show the accuracy, recall and precision of these methods on the wheelset bearing dataset. The experimental results show that the 1DCNN only obtains 76.80% accuracy, 74.30% recall, and 75.56% precision. After adding the 1D-NLB after the first convolutional layer, NL-1DCNN-1 achieves 81.64% accuracy, 80.13% recall, and 82.90% precision, which means they are improved by 4.84%, 5.83%, and 5.75%, respectively.
This is a huge improvement, which illustrates the effectiveness of the proposed 1D-NLB. NL-1DCNN-2 achieves even better performance: its accuracy, recall and precision are improved by 7.53%, 8.60%, and 8.26% over the 1DCNN, respectively. This shows that the 1D-NLB can encode enough long-distance dependencies from shallow feature signals, so that the network can achieve better performance. In addition, we also observed that, starting from NL-1DCNN-3, the diagnostic performance of the network decreases compared to NL-1DCNN-2. Furthermore, the performance of NL-1DCNN-6 is even worse than that of the 1DCNN. This shows that the 1D-NLB is very sensitive to its location in the network. In summary, we can conclude that as the location of the 1D-NLB in the network deepens, performance increases first and then decreases. This phenomenon is readily explained. The main role of the 1D-NLB is to capture the long-range dependencies of the feature signal, and whether sufficient temporal dependencies can be captured is closely related to the input of the 1D-NLB. When the 1D-NLB is located in a shallow layer, the input feature signal has sufficient length but a low semantic level, so increasing the semantic level of the input signal can improve the performance of the 1D-NLB. When the 1D-NLB is located in a deep layer, the length of the feature signal becomes the greater restrictive factor. In particular, the length of the feature signal output by the sixth convolution layer is only 16. In this case, the 1D-NLB can no longer learn any temporally related features from such a short feature signal. As a result, the performance of the network declines from NL-1DCNN-3 onwards. Therefore, when designing a 1D-NLB-based fault diagnosis method, it is necessary to balance two key factors: the semantic level and the feature signal length. In order to understand more clearly the improvement in network performance brought by the 1D-NLB, we use t-SNE [37] to visualize the feature distributions of NL-1DCNN-2 and the 1DCNN in a 2D space. It is worth noting that the only difference between NL-1DCNN-2 and the 1DCNN is that NL-1DCNN-2 contains the 1D-NLB and the 1DCNN does not. The visualization results are shown in Fig. 9, where differently colored dots represent different health conditions. According to subfigures A1 and B1, the shallow features of the two networks are equally indistinguishable. Subsequently, the 1D-NLB makes the features of NL-1DCNN-2 more distinguishable than those of the 1DCNN: the features in subfigures B2 and B3 remain clustered together, whereas the dispersion in A2 is greater than in B2 and the dispersion in A3 is greater than in B3, so the features in A2 and A3 are significantly more discriminative than those in B2 and B3. This phenomenon shows that the long-distance dependencies captured by the 1D-NLB help the network distinguish and diagnose different fault categories. This not only proves the validity of the 1D-NLB, but also proves that the long-distance dependencies of the signal help the network fully understand its hidden features. It is precisely because the 1D-NLB learns features that ordinary CNNs cannot learn that the network obtains better diagnostic results.
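The kind of feature-space visualization described above can be reproduced with a short scikit-learn sketch; here `features` is assumed to be an (N, D) array of activations extracted from a chosen layer and `labels` the N health-condition labels, both of which are our placeholder names.

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_feature_space(features, labels):
    """Project layer activations to 2D with t-SNE and color by class."""
    emb = TSNE(n_components=2, init="pca").fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab20", s=5)
    plt.xlabel("t-SNE dimension 1")
    plt.ylabel("t-SNE dimension 2")
    plt.show()
```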
Influence of the number of 1D-NLBs In order to further explore the impact of the number of 1D-NLBs on diagnostic performance, we add one and two 1D-NLBs to the network on the basis of NL-1DCNN-2, named NL-1DCNN-2-1 and NL-1DCNN-2-2, respectively. With SNR = −6 dB, we performed experiments on these three methods. The accuracy, recall and precision of these three methods are shown in TABLE IV. We find that the number of 1D-NLBs has little effect on network performance: NL-1DCNN-2, NL-1DCNN-2-1 and NL-1DCNN-2-2 achieve similar fault diagnosis performance. This shows that using only one 1D-NLB can capture adequate long-distance dependencies and greatly improve the performance of the network. Although NL-1DCNN-2-1 is slightly better than NL-1DCNN-2, adding more modules also increases the computational burden to a certain extent. Therefore, in the subsequent experiments, the network structure of our proposed method is consistent with NL-1DCNN-2. Effectiveness of 1D-NLB in existing methods In order to verify the wide applicability of the 1D-NLB in CNN-based fault diagnosis methods, this experiment explores the performance of the 1D-NLB in existing CNN methods. We use the WDCNN as the baseline and then embed the 1D-NLB into different layers of the WDCNN. A total of five different network structures are designed, named WDCNN-1, WDCNN-2, ..., WDCNN-5, where the number after the name indicates the layer after which the 1D-NLB is embedded. With SNR = −6 dB, we performed experiments on these six methods. TABLE V and Fig. 10 show the accuracy, recall and precision of these methods. Clearly, the proposed 1D-NLB can also effectively improve the fault diagnosis performance of the WDCNN. For example, the accuracy of WDCNN-2 is improved by 4.09% compared with the WDCNN. Consistent with the phenomenon of the previous experiments, as the position of the 1D-NLB in the WDCNN gets deeper, the diagnostic performance of the network increases first and then decreases. This also shows that the length of the feature signal and the semantic level have a great impact on the performance of the 1D-NLB. In addition, we find that the improvement of WDCNN-2 over the WDCNN is smaller than that of NL-1DCNN-2 over the 1DCNN. This is because the WDCNN uses a very large down-sampling rate in its first convolution layer, which makes the feature signal too short for the 1D-NLB to achieve better performance. This also shows that in order to maximize the performance of the 1D-NLB, we need to design a relatively reasonable network structure. Even though the WDCNN is not optimized for the 1D-NLB, the module still considerably improves the fault diagnosis performance of the WDCNN. This strongly demonstrates the wide applicability of the 1D-NLB: it can be simply embedded in other existing CNN architectures to improve their performance, even if these CNNs are not specifically optimized for it. Therefore, the 1D-NLB has very wide application potential and can be used as a general module to improve the performance of most CNN networks. Comparison with state-of-the-art methods In order to verify the superiority of the proposed NL-1DCNN and explore its performance under different noise conditions, we compare the NL-1DCNN with six state-of-the-art deep learning-based fault diagnosis methods under three noise levels (SNR = −6 dB, 0 dB, and 6 dB).
This effectively proves the fault diagnosis ability of the proposed method in weak-noise environments. In addition, when SNR = −6 dB, which means the noise power is 3.98 times that of the raw signal, the NL-1DCNN can still obtain 84.33% fault diagnosis accuracy, which is 11.95% higher than Wen-CNN. This demonstrates that the NL-1DCNN has good anti-noise performance even without any de-noising preprocessing. In addition, we find that the LSTM, with its long-distance dependency learning ability, performs well on this dataset: at SNR = −6 dB it obtains a diagnostic accuracy of 81.06%. This also confirms that networks with long-distance dependency learning capabilities can effectively capture more essential signal features and thus obtain better fault diagnosis results when dealing with time-series signals. By contrast, the DTS-CNN only obtains 64.15% accuracy at SNR = −6 dB. This shows that the applicability of the DTS-CNN is not satisfactory, and it is difficult for it to adapt to the fault diagnosis task on the wheelset bearing dataset. TABLE VI also shows the parameter counts of our method and the comparison methods. Since we only added one 1D-NLB module, the number of parameters in our model is still relatively small. Therefore, the proposed method achieves a large performance boost with little parameter increase. Comparison with state-of-the-art methods on the CWRU dataset In order to explore the applicability of the proposed method on the CWRU bearing dataset, we compare the NL-1DCNN with six state-of-the-art deep learning methods under three noise conditions (SNR = −6 dB, 0 dB, and 6 dB). TABLE VII shows the accuracy, recall and precision of these methods. We find that the NL-1DCNN has better fault diagnosis performance than the six comparison methods under all three noise conditions. Under SNR = 6 dB, the NL-1DCNN achieves 99.89% fault diagnosis accuracy. At SNR = 0 dB, where the noise power equals the raw signal power, the NL-1DCNN achieves 99.17% accuracy. This shows the excellent fault diagnosis performance of the NL-1DCNN. Moreover, the NL-1DCNN performs better on the motor bearing dataset than on the wheelset bearing dataset, and it can obtain 91.23% accuracy at SNR = −6 dB. In addition, we find that the LSTM obtains only 65.27% diagnostic accuracy on this dataset. However, the DTS-CNN exhibits relatively good results, achieving an accuracy of 88.69% at SNR = −6 dB. Although the performance of the DTS-CNN is still far from that of the NL-1DCNN, this proves once again the importance of long-distance dependencies for fault diagnosis tasks. Judging from the performance of these methods on the two datasets, the DTS-CNN and the LSTM are greatly affected by the dataset and perform well only on some datasets. The NL-1DCNN achieves excellent performance on both datasets, which shows its good adaptability and reflects, to a certain extent, the application potential of the NL-1DCNN in other fault diagnosis tasks for rotating machinery. In order to show the performance of these methods more clearly, we use t-SNE to visualize the final output distributions of the NL-1DCNN, the LSTM, the DTS-CNN, the WDCNN, the Wen-CNN, the ResCNN and the ADCNN in a two-dimensional space. The visualization results are shown in Fig. 11, where different colors represent different health conditions of the motor bearings. Clearly, the output distribution of the NL-1DCNN has the best discrimination, followed by the DTS-CNN and the Wen-CNN.
This is consistent with the results of TABLE VIII, which shows that the proposed NL-1DCNN has better performance on the motor bearing dataset. To better understand the diagnostic performance of the proposed method for each health category, the confusion matrix of the proposed NL-1DCNN at SNR = 6 dB is displayed in Fig. 12. Our method can distinguish normal samples and fault samples with 100% accuracy. In addition, in the identification of fault types, the NL-1DCNN can also identify inner race faults and outer race faults with 100% accuracy, and it can accurately identify the degree of bearing failure. The NL-1DCNN only makes a few misjudgments in the diagnosis of ball faults, and these misjudgments merely confuse one severity level of ball fault with another. This shows that our method can accurately distinguish different fault categories, with only a few misjudgments when determining the degree of a fault. Conclusions In this paper, we propose the NL-1DCNN for rolling bearing fault diagnosis. This method aims to improve the long-range dependency learning ability of the network, so as to fully understand the hidden features of the signals. To this end, we introduced the non-local mean method to the CNN and built a 1D-NLB for capturing long-range dependencies. The basic idea of the 1D-NLB is to calculate the long-range correlation between the current position and other positions, so that the network can quickly capture both the local and global information of the input signal. We validated the effectiveness of the method on two bearing datasets. Experimental results show that the diagnostic performance of the NL-1DCNN is considerably better than that of the six comparison methods. The conclusions are summarized as follows: 1) Long-distance dependencies can help the network fully understand the hidden information of the signal, and this information is very important for fault diagnosis tasks. 2) The proposed 1D-NLB absorbs the advantages of the non-local mean de-noising algorithm and has excellent learning ability for long-distance dependencies. It can easily be embedded in most CNN architectures to improve their fault diagnosis performance. 3) The NL-1DCNN has good fault diagnosis performance, and it performs consistently on two datasets, which shows its application potential in other fault diagnosis tasks. In addition, the performance of the proposed method is still relatively low in the case of strong noise, which cannot meet the needs of practical applications. Moreover, in practical situations it is often impossible to obtain enough fault samples, and the proposed method cannot cope with this situation well. Therefore, in future work, we will focus on improving the model's performance in strong-noise environments and introduce the idea of few-shot learning to improve the performance of the diagnostic model in the case of limited labeled samples.
Distribution and genetic diversity of Blastocystis subtypes in various mammal and bird species in northeastern China Background Blastocystis is one of the most common intestinal parasites in humans and animals worldwide. At least 17 subtypes have been identified in mammals and birds. In China, although some studies have reported the occurrence of Blastocystis in humans and animals, our understanding of the role of animals in the transmission of human blastocystosis is only superficial due to a paucity of available molecular data. The aim of the present study was to understand infection rates of Blastocystis and the distribution and genetic diversity of subtypes in various mammal and bird species in northeastern China, as well as to assess the zoonotic potential of Blastocystis isolates. Methods A total of 1265 fresh fecal specimens (1080 from ten mammal species and 185 from eight bird species) were collected in Heilongjiang, Liaoning and Jilin provinces of China. Each specimen was examined for the presence of Blastocystis by PCR amplification and sequence analysis of the partial SSU rRNA gene. Results Fifty-four specimens (4.3%) were positive for Blastocystis. Birds (7.0%) had a higher infection rate of Blastocystis than mammals (3.8%). Blastocystis was found in seven mammal species, reindeer (6.7%), sika deer (14.6%), racoon dogs (7.5%), Arctic foxes (1.9%), dogs (2.9%), rats (3.7%) and rabbits (3.3%), as well as three bird species, pigeons (2.1%), chickens (13.0%) and red crowned cranes (14.0%). Eight subtypes were identified including ST1 (n = 5), ST3 (n = 3), ST4 (n = 13), ST6 (n = 8), ST7 (n = 6), ST10 (n = 13), ST13 (n = 4) and ST14 (n = 2). Overall, 64.8% (35/54) of Blastocystis isolates belonged to potentially zoonotic subtypes. Conclusions To our knowledge, this is the first report of Blastocystis in reindeer (ST10 and ST13), rabbits (ST4), racoon dogs (ST3) and Arctic foxes (ST1, ST4 and ST7). The findings of potentially zoonotic subtypes suggest that the animals infected with Blastocystis might pose a threat to human health. These data will improve our understanding of the host range and genetic diversity of Blastocystis, and also help develop efficient control strategies to intervene with and prevent the occurrence of human blastocystosis in the investigated areas. Background Blastocystis is one of the most common parasites colonizing the intestines of humans and numerous animals [1]. Variable infection rates have been observed in humans: 22-56% in European countries and 37-100% in Asian and African countries [2]. The application of PCR-based molecular tools for subtyping Blastocystis isolates has revealed an extensive genetic diversity within this genus; these data contribute to a better understanding of the characteristics of this pathogen, including its host specificity and transmission patterns [3]. Currently, based on sequence analysis of the SSU rRNA gene, at least 17 Blastocystis subtypes have been identified in mammals and birds, with eight subtypes (STs1-8) co-occurring in humans and animals [1]. High similarity or even identity of DNA sequences of Blastocystis isolates from humans and animals suggests the potential for zoonotic transmission [4][5][6][7].
Humans and domestic or zoo animals living in close contact have been reported to be infected with the same subtypes, such as ST1 and ST2 in zoo keepers, one wombat and five primate species in Australia [5]; ST2 in children and monkeys in Nepal [8]; ST5 in piggery workers and pigs in Australia [9]; and ST6 in breeders and cattle/goats in Nepal [10,11]. Thus, understanding infection rates of Blastocystis in a wide range of animal hosts, exploring genetic characterization and assessing the zoonotic potential of animal-derived isolates will aid in making effective strategies to intervene with and prevent the occurrence of human blastocystosis. In China to date, Blastocystis infection has been reported in humans and animals distributed in at least 26 and 11 provinces or autonomous regions, respectively [12]. In northeastern China, Blastocystis is prevalent in common livestock such as cattle, sheep, goats and pigs [12,13]. This pathogen has also been found in human immunodeficiency virus (HIV)-infected and acquired immunodeficiency syndrome (AIDS) patients (unpublished data) and cancer patients in these areas [14]. However, relatively few data are available on genetic characterization of Blastocystis and subtype distribution in animal hosts. The contribution of animal sources to human infection of Blastocystis remains unclear. Thus, efforts to subtype Blastocystis isolates from under-sampled hosts should be preferentially carried out. The present study provides information regarding infection rates of Blastocystis, host distribution and genetic diversity of subtypes and the zoonotic potential of Blastocystis isolates from various mammal and bird species in northeastern China. Collection of fecal specimens A cross-sectional investigation of Blastocystis was carried out on various mammals and birds from May 2015 to October 2017 in Heilongjiang, Liaoning and Jilin provinces of northeastern China. A total of 1265 fecal specimens were collected, with 1080 from 10 mammal species and 185 from eight bird species (Table 1). The vast majority of specimens were from Heilongjiang Province. Three deer species were involved in the present study, including 104 wild reindeer (Rangifer tarandus) and 82 sika deer (Cervus nippon) from farms and 48 red deer (Cervus elaphus) from a zoo (30 from Jilin). Three hundred sixty-seven fur animals were randomly selected from farms, including 40 racoon dogs (Nyctereutes procyonoides) (24 from Jilin), 213 Arctic foxes (Alopex lagopus) (66 from Jilin and 40 from Liaoning) and 114 American minks (Neovison vison) (35 from Jilin and 54 from Liaoning). One hundred thirty-six dogs (Canis lupus familiaris) were included in the present study, constituting 76 pet dogs and 60 farm dogs (12 from Jilin). The remaining 343 mammals and 185 birds were all from Heilongjiang. The mammals comprised 20 horses (Equus caballus) from private owners, 215 rabbits (Oryctolagus cuniculus) from farms and 108 brown rats (Mus musculus), including 23 from a granary, 48 from pig farms and 37 from a sheep farm. The birds comprised 46 chickens (Gallus domesticus), 16 ducks (Anas platyrhynchos domesticus) and 20 geese (Anser domestica) from farms, 47 pigeons (Columba livia) from individual aviaries as well as 43 red crowned cranes (Grus japonensis), 6 common cranes (Grus grus), 4 white-naped cranes (Grus vipio) and 3 Siberian cranes (Grus leucogeranus) from Zhalong National Nature Reserve.
Farmed rabbits and captured free-ranging pigeons were housed individually in cages for 24 h, and fecal specimens were then collected from the bottom of each cage. All captured wild rats were euthanized by CO2 inhalation and fecal specimens were collected directly from their intestinal and rectal contents. For the collection of fecal specimens from the other mammals and birds, we only picked up fresh feces from the top of droppings on the ground after defecation to avoid contamination. All fecal specimens obtained were stored in refrigerators at -20°C for further molecular analysis. No diarrhea was observed in any animal at the time of sampling. DNA extraction To reduce interference from crude fiber and impurities, the fecal specimens were sieved and washed with distilled water by centrifugation at 1500× g for 10 min. This was done three times at room temperature. Genomic DNA of Blastocystis was extracted from 180-200 mg of each washed fecal pellet using a commercially available kit (QIAamp DNA Mini Stool Kit, Qiagen, Hilden, Germany) according to the manufacturer-recommended procedures. To obtain a high yield of DNA, the lysis temperature was increased to 95°C according to the manufacturer's suggestion. Extracted DNA was stored at -20°C until PCR analysis. PCR amplification and sequencing Considering the characteristics of the two sets of primers for amplifying the Santín region and the barcode region of the SSU rRNA of Blastocystis described by Wang et al. [12], in the present study all DNA preparations were screened for the presence of Blastocystis by PCR amplification of the Santín region (a fragment of approximately 500 bp) [15]. PCR-positive DNA preparations were further analyzed to determine the subtypes of Blastocystis isolates by PCR amplification and sequence analysis of the barcode region (a fragment of approximately 600 bp) according to the consensus terminology for Blastocystis subtypes [3,16]. TaKaRa Taq DNA polymerase (TaKaRa Bio Inc., Tokyo, Japan) was used for all PCR reactions. A negative control (water, no DNA) and a positive control (DNA of a pig-derived Blastocystis isolate) were used in all PCR tests. All PCR products were subjected to electrophoresis in a 1.5% agarose gel and were visualized after staining the gel with GelStrain (TransGen Biotech, Beijing, China). Nucleotide sequencing and analyzing All positive PCR products of the expected size were sequenced with the primers given in [16] on an ABI PRISM 3730 XL DNA Analyzer (Applied Biosystems, Foster, CA, USA), using a BigDye Terminator v.3.1 Cycle Sequencing Kit (Applied Biosystems). Accuracy of the sequencing data was confirmed by sequencing the PCR products in both directions. For some DNA preparations yielding sequences different from those published in GenBank, two new PCR products were sequenced. Nucleotide sequences obtained in the present study were subjected to BLAST searches (http://www.ncbi.nlm.nih.gov/blast/) and then aligned and analyzed with each other and reference sequences downloaded from GenBank by using the program Clustal X 1.83 (http://www.clustal.org/). Subtypes of Blastocystis isolates were identified according to the proposed standard of Blastocystis terminology [3]. Results Infection rates of Blastocystis Subtypes of Blastocystis isolates All positive DNA preparations in the Santín region were successfully amplified and sequenced in the barcode region.
By sequence analysis of the barcode region, eight subtypes were identified among the 54 Blastocystis isolates, including ST1 (n = 5), ST3 (n = 3), ST4 (n = 13), ST7 (n = 1), ST10 (n = 13), ST13 (n = 4) and ST14 (n = 2) in mammals, and ST6 (n = 8) and ST7 (n = 5) in birds, with ST7 in both mammals and birds. Among them, ST4 was found in rabbits, rats, foxes and dogs, showing the widest host distribution. ST4 and ST10 each accounted for the largest share of Blastocystis isolates (24.1%, 13/54). Meanwhile, it was observed that 64.8% (35/54) of Blastocystis isolates belonged to potentially zoonotic subtypes, based on the fact that STs1-8 have been found in humans and animals. Detailed information on subtype distribution of Blastocystis by host and by geography is summarized in Tables 1 and 2, respectively. Genetic diversity of Blastocystis subtypes A total of 15 representative sequences were obtained from the 54 Blastocystis isolates in the present study. Among them, 11 sequences have been described previously, with seven of them being reported in humans. The remaining four novel sequences were composed of ST7 (n = 1), ST10 (n = 2) and ST13 (n = 1) (Table 1). The ST7 sequence (MH325365) from a fox had two base differences compared to that (KP233737) from a duck in the Philippines. There were two base differences between the two ST10 sequences (MH325363 and MH325364) from sika deer, and both of them had a base variation compared to that (KC148207) from a camel in Libya. The ST13 sequence (MH325366) from reindeer was most similar to that (KC148209) from a mouse deer (Tragulus javanicus) in the UK, with 10 base differences. Discussion Infection rates of Blastocystis vary in mammals and birds between and within countries worldwide [17] and can be as high as 100% in some studies, such as in dogs in Australia and in birds in Malaysia [18,19]. In the investigated areas, Blastocystis was found in seven mammal species and three bird species, with infection rates ranging from 1.9 to 14.6%. Blastocystis was absent in horses, red deer, minks, ducks, geese and the three crane species. This might be related to the small number of collected specimens and/or a low prevalence of Blastocystis in these animals. In the present study, eight subtypes (ST1, ST3, ST4, ST6, ST7, ST10, ST13 and ST14) were identified. Among them, ST10 was the most common in deer (68.4%, 13/19). Previous studies have revealed that ST10 is found commonly in some livestock (cattle, sheep and goats) and sporadically in some herbivorous animals including camels, alpacas, wild yaks, ponies, kangaroos, giraffes, wild asses, bison and oryxes [1,12,17,20,21]. This subtype has also been identified in pigs and ostriches from China, cats from the USA and dogs from France [12,[20][21][22]. In the present study, ST13 and ST14 were identified in reindeer and sika deer for the first time, respectively. ST13 is actually a rare subtype and had previously only been found in mouse deer from the UK, kangaroos and quokkas from Australia, and monkeys from China and France [5,17,21,23,24]. ST14 is similar to ST10 in host range. It is composed of Blastocystis isolates from various herbivorous animals including some common livestock (cattle, sheep and goats), and some artiodactyls (camels, alpacas, giraffes, bushbucks, mouflons, common elands, brindled wildebeests and bison) [1,12,17,21]. The true host range of these subtypes can be defined by subtyping a large number of Blastocystis isolates from different hosts and areas in the future.
To date, a total of five subtypes (ST4, ST5, ST10, ST13 and ST14) have been identified in six deer species, including the reindeer involved in the present study [17,21,23]. Dogs have a long association with humans, providing companionship, protection and herding, as well as aiding handicapped individuals. However, potentially zoonotic subtypes of Blastocystis have been found in dogs and their owners in Australia (ST1, ST3 and ST4), the Philippines (ST1 to ST5) and Turkey (ST1 and ST7) [18,25,26], suggesting that dogs may be involved in the transmission of Blastocystis to humans. In the present study, ST1 and ST4 were identified in dogs. To date, eight subtypes (ST1 to ST7 and ST10) have been identified in these animals [20,[25][26][27]. However, the subtype composition is observed to differ between regions, e.g. ST1, ST4, ST5 and ST6 in India versus ST2 and ST10 in the USA [22,27]. This might be closely related to the fact that coprophagia is a common practice in dogs, especially stray dogs. They could have acquired Blastocystis infections with various subtypes by exposure to fecal materials from animal hosts and humans in their environment. Meanwhile, this could also explain why no predominant or specific subtypes are identified in the overall canine population [1,18]. Of course, the mechanical transport of Blastocystis in dogs cannot be ruled out. It has been reported that dogs can serve as a mechanical vector of possibly viable eggs for a variety of helminth parasites [28]. Longitudinal studies of Blastocystis in dogs are needed before a definitive conclusion can be drawn. Among the five subtypes identified in rodents (ST2 to ST5 and ST17), ST4 is the most common [1,17]. As expected, ST4 was identified in brown rats in the investigated areas. ST4 is also one of the four most common subtypes in humans (ST1 to ST4) [29]. Besides rodents and humans, this subtype has been found in non-human primates (ring-tailed lemurs, woolly monkeys, siamangs), giraffes, kangaroos, dogs and a snow leopard as well as ostriches [1,7,23,27]. In the present study, ST4 was identified in rabbits and foxes for the first time, expanding its host range. ST6 and ST7 are usually isolated from bird reservoir hosts [1]. In the present study, ST6 and ST7 were identified in chickens and red crowned cranes, while ST6 was found in one pigeon. Currently, although seven subtypes (ST1, ST2, ST4 to ST7 and ST10) have been identified in birds [1,21,30], ST6 and ST7 are still the most common subtypes in birds and are generally considered avian subtypes [7]. Besides birds, the two subtypes are occasionally found in some mammals: ST6 in pigs, cattle, goats and dogs [12,27] and ST7 in pigs, goats, cynomolgus monkeys, ruffed lemurs and dogs [1,12,17,25,31]. In humans, ST6 and ST7 only constitute a small share (approximately 9%) of cases of blastocystosis [32]. In the present study, 64.8% (35/54) of Blastocystis isolates belonged to potentially zoonotic subtypes, and 80.0% (28/35) of the sequences of these subtypes have been described in humans, including ST1 (n = 5), ST3 (n = 3), ST4 (n = 7), ST6 (n = 8) and ST7 (n = 5) (Table 1). In several previous studies of the natural infection of Blastocystis in mammals and birds, all Blastocystis isolates were identified as zoonotic subtypes, such as in dogs from India and the Philippines [26,27], in rats from Colombia and Indonesia [33,34] and in birds from Japan and Malaysia [19,35].
Undoubtedly, the percentage of zoonotic subtypes among Blastocystis isolates in animals is an important parameter for assessing the risk of zoonotic transmission of blastocystosis in a specific area. Meanwhile, it is also important to consider the infection rates of this pathogen in animal hosts. A low prevalence suggests a minor risk of zoonotic transmission, but more investigation into the epidemiological factors associated with Blastocystis transmission is needed to more accurately assess the potential for zoonotic transmission. In the present study, the vast majority of animals were domestic and captive farmed animals, from farms, zoos or individual owners. Thus, people who have close contact with animals for occupational or recreational reasons are at high risk of acquiring Blastocystis infection. In fact, a high prevalence of Blastocystis infections has been reported among zoo keepers, and the same subtypes have been found in humans and animals living in close contact [5,[8][9][10][11]36]. Meanwhile, Blastocystis in animal feces can enter streams and rivers through surface run-off after heavy rainfall, which causes water contamination downstream and a wide geographical spread of Blastocystis. Conclusions The present study describes the occurrence, subtype distribution and genetic characterization of Blastocystis in various mammal and bird species in northeastern China. Average infection rates of Blastocystis were 3.8% in mammals and 7.0% in birds. Eight subtypes were identified, with subtype overlaps being observed in some host species. Overall, 64.8% of the isolates belonged to potentially zoonotic subtypes, suggesting that animals infected with Blastocystis might pose a threat to human health. Blastocystis was identified for the first time in reindeer (ST10 and ST13), rabbits (ST4), racoon dogs (ST3) and Arctic foxes (ST1, ST4 and ST7), expanding the host range of Blastocystis. Four novel nucleotide sequences of Blastocystis, not reported previously, were obtained. The data obtained in the present study increase our understanding of the host range and genetic diversity of Blastocystis and will help develop efficient control strategies to intervene with and prevent the occurrence of blastocystosis in the investigated areas. Abbreviations AIDS: Acquired immunodeficiency syndrome; BLAST: Basic local alignment search tool; HIV: Human immunodeficiency virus; PCR: Polymerase chain reaction; SSU: Small subunit Funding The study was supported by the Natural Science Foundation of Heilongjiang Province (H2017006) and the Heilongjiang Province Education Bureau (12531266). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Availability of data and materials All data generated or analyzed during this study are included in this published article. Sequences were submitted to the GenBank database under the accession numbers MH325363-MH325366. Authors' contributions Experiments were conceived and designed by AL and FY. Experiments were performed by JW, BG, XL, Wei Z and TB. The data were analyzed by JW and BG. Weizhe Z contributed reagents/materials/analysis tools. The manuscript was written by JW and BG, and revised by AL and FY. All authors read and approved the final manuscript. Ethics approval All animals were handled and cared for according to the Chinese Laboratory Animal Administration Act of 1998.
The research protocol was reviewed and approved by the Research Ethics Committee and the Animal Ethical Committee of Harbin Medical University. Consent for publication Not applicable.
TEAM-Atreides at SemEval-2022 Task 11: On leveraging data augmentation and ensemble to recognize complex Named Entities in Bangla Many areas, such as the biological and healthcare domains, artistic works, and organization names, have nested, overlapping, discontinuous entity mentions that may even be syntactically or semantically ambiguous in practice. Traditional sequence tagging algorithms are unable to recognize these complex mentions because they may violate the assumptions upon which sequence tagging schemes are founded. In this paper, we describe our contribution to SemEval 2022 Task 11 on identifying such complex Named Entities. We have leveraged an ensemble of multiple ELECTRA-based models pretrained exclusively on the Bangla language, together with ELECTRA-based models pretrained on English, to achieve competitive performance on Track 11. Besides providing a system description, we also present the outcomes of our experiments on architectural decisions, dataset augmentations, and post-competition findings. Introduction and Related Works The task of identifying and classifying entities in text is known as named entity recognition (NER). Some named entities are easy to distinguish in English since each of their words is capitalized; e.g. "The capital of Bangladesh is Dhaka". In this sentence, both "Bangladesh" and "Dhaka" are capitalized named entities. But there are other entity mentions that are not simple nouns and are more difficult to recognize. In the SemEval Task 11: MultiCoNER Multilingual Complex Named Entity Recognition (Malmasi et al., 2022b), the organizers concentrated on the more unusual Named Entities, which can be difficult to identify accurately from text. NER tasks have received much attention from the research community due to their crucial role in different NLP problems like information retrieval (Etzioni et al., 2005), Question Answering (Banko et al., 2002) (Toral et al., 2005), Relation extraction, Entity linking (Limsopatham and Collier, 2016) and searching (Pasca, 2004). However, there is such a conceptual difference between an ordinary named entity and a complex named entity that traditional tagging strategies cannot be used to recognize these mentions (Brown et al., 1992). Complex NERs can be any language element (single words, abbreviations, imperative clauses, questions) of ambiguous (multi-type or overlapping) and non-regular (nested, discontinuous or overlapping) form (Ashwini and Choi, 2014). What makes the task more challenging is that complex NER is open-domain, with ever-expanding and emerging entity sets and categories. In recent days, Transformer-based models (Devlin et al., 2018) (Liu et al., 2019) (Yang et al., 2019) have been the state-of-the-art (Yamada et al., 2020) (Yan et al., 2019) models on different NER benchmark datasets. However, Augenstein and colleagues demonstrate that these powerful models are only good at picking up conventional NERs from well-formed texts (Augenstein et al., 2017), while for complex NERs we still need to integrate external knowledge sources. A recent paper on integrating external sources or gazetteer features in combination with contextual information has shown that this can indeed improve performance on complex NER tasks (Meng et al., 2021). Gazetteer-based solutions also show good performance improvements in extracting NERs from both normal and code-mixed web queries (Fetahu et al., 2021).
In tasks like NER, Bangla NLP has not made significant progress. Many linguistic issues arise while training models on Bangla because it is a rich language in terms of both usability and vocabulary (Ekbal and Bandyopadhyay, 2009). In Bangla, there are few markers for tags, such as capitalization (Karim et al., 2019). The same words can have a variety of meanings and entity types. In addition, because Bangla is a somewhat free word order language, words can occur in almost any position inside a phrase without changing their meaning (Ekbal et al., 2008). Affixes that are added to the root word to produce complex inflections can modify the meaning and type of the word as well (Ekbal and Bandyopadhyay, 2009). Despite these issues, transformer models have been used with considerable success for NER tasks in Bangla (Bhattacharjee et al., 2021) (Ashrafi et al., 2020). In this work, we demonstrate our approaches to tackling the concerns raised in SemEval Task 11, as well as the obstacles posed by the Bangla language's intrinsic complexity. In our proposed architecture, we used a variety of methodologies, primarily focusing on transfer learning with state-of-the-art deep learning architectures. In particular, we submitted the results obtained from monolingual ELECTRA models, while we also ran experiments with non-contextual word embeddings and multilingual language models. Dataset Description According to the organizers, the data were gathered from Wikipedia and Microsoft Orcas, and included both statements and queries (Malmasi et al., 2022a). The train set contains about 100 domain adaptation instances, whereas the test set has significantly more out-of-domain data to measure out-of-domain performance. The test dataset is a large file of 130k+ sentences, with a preset training dataset of 15300 Bangla sentences and a development dataset of 800 sentences. Other important statistics about the dataset are presented in table 1. The distribution of NER classes in the training set is shown in figure 1. To perform the experiments, we augmented our datasets in several stages. At first, we token-wise translated a portion of our non-Bangla dataset to Bangla using the Google Translate API 1 . In the first stage, we combined the translated Hindi and Farsi datasets with our Bangla dataset, as all three languages are related. System Description The system we proposed for complex Bangla Named Entity Recognition is an ensemble of ELECTRA-based models trained on the augmented datasets mentioned in table 2 and a combination of hyperparameters shown in table 3. The representation of each token is fed into our sequence tagging algorithms, which generate a label for each token. The tag of one token is determined by the attributes of that token in context as well as the tag of the token before it. To execute joint inference, these local decisions are chained together. The implementation of our mono-lingual ELECTRA-based systems can broadly be categorized based on the decision of using non-contextual embeddings (word2vec) with a contextual pretrained weight (Bhattacharjee et al., 2021). We defined the vanilla token classification system, which is largely based on the huggingface token classification scripts 2 , as S1. The more advanced NER system incorporating non-contextual embeddings and, optionally, a character CNN (Chiu and Nichols, 2016) and CRF (Qin et al., 2008) is defined as S2. Finally, we developed a majority voting based ensemble scheme, S3, to obtain our final prediction for each token.
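As an illustration of the token-wise translation augmentation described in the Dataset Description, the following is a minimal sketch. The helper translate_token is hypothetical: the authors used the Google Translate API, whose exact client code is not shown in the paper, so any translation backend with the same signature could be substituted.

```python
# A minimal sketch of token-wise translation augmentation (our
# reconstruction, not the authors' code). `translate_token` is a
# hypothetical stand-in for a Google Translate API client.
from typing import Callable, List, Tuple

Sentence = List[Tuple[str, str]]  # (token, BIO tag) pairs

def translate_token(token: str, src: str, dest: str = "bn") -> str:
    """Hypothetical wrapper around a translation service. Replace with a
    real client; the identity fallback keeps the sketch runnable offline."""
    return token

def augment_sentence(sent: Sentence, src_lang: str) -> Sentence:
    # Translating token by token preserves the 1:1 token/tag alignment,
    # which is what makes this augmentation cheap: each source tag is
    # reused verbatim for the translated token.
    return [(translate_token(tok, src_lang), tag) for tok, tag in sent]

def augment_corpus(corpus: List[Sentence], src_lang: str) -> List[Sentence]:
    return [augment_sentence(s, src_lang) for s in corpus]

# Usage: merge translated Hindi/Farsi data with the Bangla training set.
hindi = [[("दिल्ली", "B-LOC"), ("में", "O")]]
extra_bangla = augment_corpus(hindi, "hi")  # append to the Bangla train set
```

One caveat worth noting: per-token translation keeps tags aligned but can yield disfluent target-language sentences, a known trade-off of this style of augmentation.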
S1 : Vanilla ELECTRA-based token classification The input to S1 is first normalized using a specific normalization pipeline developed for Bangla, described in the (Hasan et al., 2020) paper. The normalized data is then tokenized and aligned with labels. S1 has 12 hidden layers, each with 12 attention heads. A standard training loop, with the hyperparameters mentioned in table 3, is used in different combinations. Since the original huggingface script does not include an early stopping mechanism, we wrote a custom callback based on evaluation loss and a patience of 5. A high-level overview of S1 is shown in figure 2. Table 3: Hyperparameter settings for S1, S1.a and S2. 3.1.1 S1.a : Vanilla ELECTRA-based token classification on English-translated data As a preprocessing step for this approach, the input dataset was tokenized and translated to English using the Google Translate API. The translated input set is then used with the standard huggingface base ELECTRA model with different combinations of hyperparameters, as presented in table 3. We experimented with several token-translated languages here, with an early stopping mechanism at a patience of 5. The overall architecture is similar to S1. S2: Advanced NER system For this system, character and word level features were first extracted and combined with word2vec and ELECTRA embeddings. To generate the final embedding, these extracted input features are passed through a combination of layers, including a non-contextual embedding layer and a contextual pretrained layer. This is projected through a linear layer and optionally goes through a CRF decoding layer to produce the final predictions. This system also included an early stopping mechanism based on the evaluation f1 score. An overview of S2 is presented in figure 3. S3 : Majority Voting Ensemble The basic concept behind this type of classification is that the final output class is chosen based on the most votes. This ensemble technique has previously been used to overcome the constraints of a single classifier, as presented by the authors in (Siddiqua et al., 2016). Before majority voting, we performed a thresholding on the prediction score for each token from each of the 8 models trained using a variety of augmented datasets, pretrained weights, and hyperparameters. We only considered a token label for majority voting if it had a prediction score over 50%. Then, we counted the number of times the distilled labels appeared in the set. A label was added to the final list of labels if it appeared in the majority of the models. An overview of S3 is shown in figure 4. Experimental Setup As we have previously discussed in section 2, we augmented our training data in multiple steps, which extended the dataset to several times its original size. We split each version of these datasets into a 70%-30% ratio during training. The default dev set containing 800 sentences is used for the final validation, in choosing the best performing model during the test phase. We employed accuracy, precision, recall, and F1 score as evaluation metrics, with the macro averaged F1 score as the primary and official metric, as per the benchmark of SemEval 2022 Task 11: MultiCoNER (Malmasi et al., 2022b). We defined each of our best performing model configurations in table 4. While training both S1 and S2 we utilized all versions of the Bangla augmented data. Additionally, to train S1.a we used all versions of the English translated dataset.
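The thresholded majority vote in S3 can be made concrete with a short sketch. This is a minimal reconstruction of the scheme described above, assuming each model emits a per-token label with a confidence score; variable names are ours, not the authors'.

```python
from collections import Counter
from typing import List, Optional, Tuple

Prediction = Tuple[str, float]  # (label, confidence) for one token

def vote_token(preds: List[Prediction], threshold: float = 0.5) -> Optional[str]:
    """Thresholded majority vote over one token's predictions from all models."""
    # Keep only labels whose prediction score exceeds the threshold (50%).
    kept = [label for label, score in preds if score > threshold]
    if not kept:
        return None  # no model was confident enough
    label, count = Counter(kept).most_common(1)[0]
    # Require the label to appear in a majority of all models, as in the paper.
    return label if count > len(preds) / 2 else None

# Usage: 8 models voting on a single token.
token_preds = [("B-PER", 0.91), ("B-PER", 0.77), ("O", 0.40), ("B-PER", 0.66),
               ("B-PER", 0.58), ("B-PER", 0.83), ("O", 0.95), ("B-PER", 0.71)]
print(vote_token(token_preds))  # -> "B-PER" (6 of 8 confident votes)
```

What to emit when no label wins a majority (here None) is a design choice the paper leaves open; falling back to the "O" tag would be a natural default.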
In table 3 we have provided the range of hyperparameters used for each of our systems. The performance of these individual models is also demonstrated in table 5. However, in the case of the English models, we have only presented the configuration and prediction score for the best performing model. It should be noted that these models were submitted for evaluation after the competition deadline. Results We made 4 submissions during the test phase, by applying the majority voting scheme on various combinations of model predictions. The performance of the final ensemble outputs is presented in table 6. As we can observe, the final ensemble of all models performs the highest, and it is ranked 8th overall. Table 4: Model versions. M1: S1 + D1 + MHA; M2: S1 + D2; M3: S1 + D4; M4: S2 + D1 + CRF; M5: S2 + D2 + CRF; M6: S2 + D4 + CRF + MHA; M7: S2 + D4 + character CNN; M8: S1 + D6. From table 5 we see that there is hardly any difference among the variations of the S2 models, while major fluctuations can be observed among the variations of the S1 models. Furthermore, separately grouped ensembles of S1 and S2 perform almost identically to the combined ensemble of S1 and S2. However, the performance improves upon including the predictions from the S1.a models, which are trained on English-translated datasets. Despite this, the final best model is clearly overfitting, because it scored over 80% on the development dataset while performing significantly worse (approximately 60%) during the test phase of the competition. This outcome may be attributed to several factors, including the choice of hyperparameters, the dataset augmentation and splitting process, early stopping criteria, etc. As per the rules of the competition, we only experimented with mono-lingual models to obtain our results. However, we ran the baseline XLM-RoBERTa model, which achieves an f1-score of approximately 68% on the development dataset. There is much scope for expanding this work. For starters, we would like to refine our data augmentation pipeline to generate more well-formed instances. We would explore and compare the performance of cross-lingual and mono-lingual models. We also believe that the dataset requires further analysis and should receive both quantitative and qualitative error analysis. In addition, we want to conduct elaborate ablation studies on the components of our systems. In this paper, we have mainly focused on transfer learning, and so, in the future, we want to compare the performance of simpler statistical and shallow models with these deep models. Another aspect not examined empirically in this paper is the class-wise performance of each of our models. From general observation, we find that all the models perform worst in identifying CW (creative works) tags, while simpler tags like PER (person) and LOC (location) were the easiest to tag. In the future, we look forward to investigating the reasons behind this behavior. Finally, we only exploited a simple majority voting based ensemble scheme during this competition. As a future direction, we would also experiment with fusing the layers of our models to develop a more sophisticated and informed ensembling scheme.
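Since the official metric is the macro-averaged F1 over entity classes, it is worth spelling out what that computes. Below is a self-contained sketch of span-level macro F1 under a strict BIO reading (I- tags without a preceding B- are ignored); this is our illustration of the metric's definition, not the organizers' scorer.

```python
from collections import defaultdict

def spans(tags):
    """Extract (type, start, end) entity spans from a strict BIO sequence."""
    out, start, typ = [], None, None
    for i, t in enumerate(tags + ["O"]):  # sentinel closes a trailing span
        if t.startswith("B-") or t == "O" or (t.startswith("I-") and typ != t[2:]):
            if typ is not None:
                out.append((typ, start, i))
            typ, start = (t[2:], i) if t.startswith("B-") else (None, None)
    return out

def macro_f1(gold, pred):
    """Macro-averaged span-level F1 over entity types."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g_tags, p_tags in zip(gold, pred):
        g, p = set(spans(g_tags)), set(spans(p_tags))
        for typ, *_ in g & p: tp[typ] += 1
        for typ, *_ in p - g: fp[typ] += 1
        for typ, *_ in g - p: fn[typ] += 1
    f1s = []
    for typ in set(tp) | set(fp) | set(fn):
        prec = tp[typ] / (tp[typ] + fp[typ]) if tp[typ] + fp[typ] else 0.0
        rec = tp[typ] / (tp[typ] + fn[typ]) if tp[typ] + fn[typ] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0

# Usage: one gold/predicted sentence pair.
print(macro_f1([["B-PER", "I-PER", "O", "B-LOC"]],
               [["B-PER", "I-PER", "O", "O"]]))  # PER F1=1.0, LOC F1=0.0 -> 0.5
```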
2,996
2022-04-21T00:00:00.000
[ "Computer Science" ]
Robustness of VMAT to setup errors in postmastectomy radiotherapy of left-sided breast cancer: Impact of bolus thickness Background Volumetric modulated arc therapy (VMAT) with varied bolus thicknesses has been employed in postmastectomy radiotherapy (PMRT) of breast cancer to improve superficial target coverage. However, the impact of bolus thickness on plan robustness remains unclear. Methods The study enrolled ten patients with left-sided breast cancer who received radiotherapy using VMAT with 5 mm and 10 mm bolus (VMAT-5B and VMAT-10B). Inter-fractional setup errors were simulated by introducing a 3 mm shift to the isocenter of the original plans in the anterior-posterior, left-right, and inferior-superior directions. The plans (perturbed plans) were recalculated without changing other parameters. Dose volume histograms (DVH) were collected for plan evaluation. Absolute dose differences in DVH endpoints for the clinical target volume (CTV), heart, and left lung between the perturbed plans and the original ones were used for robustness analysis. Results VMAT-10B showed better target coverage, while VMAT-5B was superior in organs-at-risk (OARs) sparing. As expected, small setup errors of 3 mm could induce dose fluctuations in the CTV and OARs. The differences in the CTV were small in VMAT-5B, with a maximum difference of -1.05 Gy for the posterior shifts. For VMAT-10B, isocenter shifts in the posterior and right directions significantly decreased CTV coverage. The differences were -1.69 Gy and -1.48 Gy for ΔD95%, and -1.99 Gy and -1.69 Gy for ΔD98%, respectively. Regarding the OARs, only isocenter shifts in the posterior, right, and inferior directions increased dose to the left lung and the heart. The differences in VMAT-10B were milder than those in VMAT-5B. Specifically, the mean heart dose was increased by 0.42 Gy (range 0.10 ~ 0.95 Gy) and 0.20 Gy (range -0.11 ~ 0.72 Gy), and the mean dose for the left lung was increased by 1.02 Gy (range 0.79 ~ 1.18 Gy) and 0.68 Gy (range 0.47 ~ 0.84 Gy) in VMAT-5B and VMAT-10B, respectively. High-dose volumes in the organs were increased by approximately 0 ~ 2 and 1 ~ 3 percentage points, respectively. Nevertheless, most of the dosimetric parameters in the perturbed plans were still clinically acceptable. Conclusions VMAT-5B appears to be more robust to 3 mm setup errors than VMAT-10B. VMAT-5B also resulted in better OARs sparing with acceptable target coverage and dose homogeneity. Therefore, a 5 mm bolus is recommended for PMRT of left-sided breast cancer using VMAT. Introduction Female breast cancer has overtaken lung cancer as the most common cancer in the world, with an estimated 2.3 million patients diagnosed with the disease in 2020 [1]. Though most breast cancer patients in the United States choose breast conserving surgery [2], modified radical mastectomy remains the most common technique for patients in China [3]. Adjuvant radiotherapy has been recommended for breast cancer patients treated with mastectomy for its benefits in improving locoregional recurrence rates and reducing cancer-related mortality [4][5][6]. Tangential field based three-dimensional conformal radiotherapy (3DCRT) is the standard technique for treatment planning of breast cancer. For postmastectomy radiotherapy (PMRT) in patients (especially those with left-sided breast cancer) with a concave chest and/or regional lymph nodes, it is challenging for traditional 3DCRT to deliver optimal target coverage and acceptable dose to the adjacent organs.
To address this issue, advanced techniques including intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) have been introduced to PMRT of breast cancer [7,8]. Compared with 3DCRT, IMRT and VMAT showed better target coverage, dose homogeneity and conformity, and lower dose to the heart and left lung [9,10]. The improved dose homogeneity in the target and intermediate-high dose to the organs were reported to be associated with lower skin and lung toxicity [11,12]. Furthermore, VMAT was more efficient than IMRT in terms of monitor units and treatment time [13]. Shorter treatment time is beneficial for reducing the possibility of dose uncertainty caused by intra-fractional patient movement. Inter-fractional patient setup errors are another source of dose variability. Generally, the errors are on the order of several millimeters when pretreatment setup verifications are routinely performed [14][15][16]. Considering the high modulation and steep dose fall-off in VMAT, a tiny setup error of several millimeters is capable of inducing significant dose loss in the target [17][18][19][20]. Liao et al. found that a 3 mm setup error appeared to deteriorate the plan quality of VMAT for locally advanced breast cancer [19]. In another paper, similar results were recorded in VMAT with 5 mm setup errors in early-stage breast cancer patients [20]. Underdosage in the target caused by patient setup uncertainty was related to local failures in head and neck cancer treated with VMAT [18], underscoring the importance of plan robustness. Plan robustness refers to the sensitivity of the planned dose to uncertainties, such as inter-fractional setup errors. Bolus thickness remains heterogeneous in clinical practice, typically ranging from 2 mm to 10 mm [23,24]. Large variations in bolus thickness in the clinic may not only bring difficulties in producing consistent plan quality, but also increase the probability of treatment errors. In a phantom-based dosimetric study, Lobb demonstrated that bolus thickness had an effect on the plan robustness of IMRT when mimicking scalp cancer irradiation with tomotherapy [25]. However, reports on such effects in breast cancer with VMAT are scarce. The main purpose of this study is to investigate the impact of bolus thickness on the robustness of VMAT against setup errors in PMRT of left-sided breast cancer. Dose distribution in the targets and the OARs in VMAT with different bolus thicknesses is also evaluated, since most published reports focused only on skin dose [26,27]. Patients and volumes delineation Ten consecutive patients diagnosed with left-sided breast cancer and treated with mastectomy were randomly selected and enrolled in this retrospective study. The mean age was 54±9 years (range 36-67). The patients received PMRT in our department between January and June 2020 using VMAT and daily bolus. All the patients completed the treatment course without interruption or early cessation. This study was reviewed and approved by the Institutional Review Boards of the First Affiliated Hospital of Xiamen University. Written informed consent for the patients was waived since the plans in this study were for research use only, and patient information was fully anonymized. The patients were immobilized in the supine position under free breathing, with their hands above their heads, using a breast board with head holder (CIVCO Medical Solutions, Coralville, USA).
Computed tomography (CT) images were obtained using a 16-slice CT scanner (GE Healthcare, Chicago, USA) with 5 mm slice thickness. The acquired images were subsequently transferred into the Eclipse treatment planning system version 11 (Varian Medical Systems, Palo Alto, USA). An experienced radiation oncologist contoured the clinical target volume (CTV), including the chest wall (CW) and lymph nodes around the supraclavicular fossa (SCF). A uniform 5 mm margin was added to the CTV to form the planning target volume (PTV). The PTV was restricted from the skin surface by at least 2 mm for the target around the CW and 5 mm for the nodes around the SCF. For plan evaluation, the CTV was confined to the edge of the PTV. The lungs, heart, and contralateral breast were delineated as OARs using an automatic contouring tool developed by Manteia (Manteia, Xiamen, China). The radiation oncologist revised these OARs if necessary. Treatment planning All plans were generated in Eclipse for a Unique linear accelerator (Varian Medical Systems, Palo Alto, USA) with a Millennium 120 multileaf collimator (MLC). Beam arrangements are presented in Fig 1: two continuous ~240° arcs (Arcs 1 and 2) with 6 MV photons were used to irradiate the lymph nodes in the SCF, and four split arcs (Arcs 3-6) with the same energy were used for the CW. The arrangement of the split arcs was similar to that of the four-arc VMAT described by Lai et al. [28]. The collimator angles were set to 2~15° and 347~358° to minimize irradiation dose to the adjacent organs. The width of the X jaw was limited to 18 cm for Arcs 1~2 and <14 cm for Arcs 3~6. Plans were optimized using the progressive resolution optimizer algorithm (PRO). Final dose calculation was conducted using the anisotropic analytical algorithm (AAA) with 2.5 mm grid size. For VMAT-10B, the extended PTV and 10 mm bolus illustrated in ref. [28] were used for plan optimization. The prescription dose (PD) was 50 Gy delivered in daily 2 Gy fractions over 5 weeks. Plans were normalized to achieve at least 95% of the PTV covered by the PD and at least 99% of the PTV covered by 95% of the PD, meanwhile keeping the hot spot, defined as 110% of the PD, as low as possible. The objectives for the OARs were as follows: mean dose (Dmean) < 5~6 Gy and V20Gy < 10% (VxGy: volume of the organ receiving a minimum dose of x Gy) for the heart; V5Gy < 60%, V20Gy < 30% and Dmean < 15 Gy for the left lung; Dmean < 5 Gy for the contralateral lung and breast, keeping the V5Gy as low as achievable. VMAT-5B was recalculated from VMAT-10B by replacing the 10 mm bolus with a 5 mm one without changing other parameters. Inter-fractional setup errors were simulated by introducing 3 mm shifts to the isocenter of VMAT-5B and VMAT-10B in the anterior-posterior, superior-inferior, and left-right directions. Afterwards, the plans were recalculated on the basis of the original planning CT without changing other parameters. For each patient, a total of twelve perturbed plans were generated. Plan evaluation Dose volume histograms (DVHs) were collected and dosimetric parameters were extracted for plan evaluation. For both the PTV and CTV, V95%, V110% (volume receiving 95% and 110% of the PD, respectively), D95%, D98%, D2% (minimum dose to 95%, 98% and 2% of the target, respectively) and Dmean were measured. The homogeneity index (HI) was calculated as HI = (D2% − D98%)/D50%. Plans with HI close to zero were considered to have a homogeneous dose distribution in the target. For the OARs, VxGy, Dmean and the maximum dose, defined as the minimum dose to 1 cm³ of the OARs (D1cc), were recorded.
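The DVH endpoints above are simple functionals of the per-structure dose distribution, and the robustness analysis only needs their differences between perturbed and original plans. The following is a minimal sketch of how D95%, D98%, VxGy and HI could be computed from per-voxel doses exported for a structure; it illustrates the definitions only, is not the Eclipse implementation, and the array names are ours.

```python
import numpy as np

def dose_at_volume(doses: np.ndarray, volume_pct: float) -> float:
    """D_x%: minimum dose received by the hottest x% of the structure."""
    # The (100 - x)th percentile of voxel doses is the dose that x% of
    # the volume meets or exceeds.
    return float(np.percentile(doses, 100.0 - volume_pct))

def volume_at_dose(doses: np.ndarray, dose_gy: float) -> float:
    """V_xGy: percentage of the structure receiving at least x Gy."""
    return float(100.0 * np.mean(doses >= dose_gy))

def homogeneity_index(doses: np.ndarray) -> float:
    """HI = (D2% - D98%) / D50%, as defined in the Plan evaluation section."""
    return (dose_at_volume(doses, 2.0) - dose_at_volume(doses, 98.0)) \
        / dose_at_volume(doses, 50.0)

# Usage with synthetic CTV voxel doses (Gy); PD = 50 Gy.
ctv = np.random.normal(51.0, 1.0, size=100_000)
print(dose_at_volume(ctv, 95.0))        # D95%
print(volume_at_dose(ctv, 0.95 * 50))   # V95%
print(homogeneity_index(ctv))           # HI, close to zero when homogeneous
```

The robustness endpoints (e.g. ΔD95%) are then just these quantities evaluated on a perturbed plan minus the same quantities on the original plan.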
Statistical analysis Quantitative data are presented as means and standard deviations (SD). The normality of all the data was evaluated using the Kolmogorov-Smirnov test. A two-tailed t-test and a paired Wilcoxon signed-rank test were utilized to compare the differences between VMAT-5B and VMAT-10B for normally distributed and non-normally distributed data, respectively, using a significance level of P < 0.05. Dose distribution in VMAT-5B and VMAT-10B Fig 2 shows the dose distribution in the axial, coronal, and sagittal planes of a representative patient with VMAT-5B and VMAT-10B. No significant differences were observed except for the well-extended isodose lines along the anterior chest wall in VMAT-10B. Dosimetric parameters of the PTV and CTV are summarized in Table 1. Both VMAT-5B and VMAT-10B resulted in good target coverage, with V95% > 99.5%, D95% ≥ 50.0 Gy and minimum dose (D98%) ≥ 49.0 Gy. Nevertheless, VMAT-10B had better dose coverage in the CTV and homogeneity in the PTV and CTV compared with VMAT-5B. Table 2 summarizes the dosimetric parameters for the OARs. All the optimization criteria for the OARs mentioned in the 'Materials and methods' section were met for VMAT-5B and VMAT-10B. Doses to the heart and both lungs were significantly improved in VMAT-5B. However, for the right breast, the V5Gy and D1cc were slightly increased in VMAT-5B (20.14% vs 19.84% for V5Gy, P = 0.011; 14.29 Gy vs 13.94 Gy for D1cc, P < 0.001), while Dmean was comparable between the two groups. Table 3 presents the mean ΔD95%, ΔD98%, ΔD2%, and ΔV110% for the CTV in VMAT-5B and VMAT-10B. The setup errors resulted in insufficient target coverage in VMAT-10B regardless of the shift direction. The ΔD95% and ΔD98% were in the range of -1.69 ~ -0.25 Gy and -1.99 ~ -0.23 Gy, respectively. The maximum reduction was observed in the posterior direction (-1.69 and -1.99 Gy), followed by the right (-1.48 and -1.69 Gy) and the inferior directions (-0.94 and -1.03 Gy). For VMAT-5B, only isocenter shifts in the posterior, right, and inferior directions led to underdosage in the CTV, and the differences were in the range of -0.79 ~ -0.15 Gy for ΔD95% and -1.05 ~ -0.16 Gy for ΔD98%. For isocenter shifts in the other directions, slight overdosage in the CTV was observed (< 0.6 Gy). Dose inhomogeneity (ΔD2% and ΔV110%) in both groups showed trends similar to ΔD95% and ΔD98%. It is interesting to note that the values of ΔV110% in VMAT-10B were very close to zero, indicating that setup errors had little effect on V110% in VMAT-10B. Plan robustness For better visualization of the dose variations in the CTV in VMAT-5B and VMAT-10B, boxplots of ΔD95% and ΔD98% against the posterior, right, and inferior directions were produced, and the results are presented in Fig 3. In line with the results in Table 3, the deviations in D95% and D98% in VMAT-5B were milder than those in VMAT-10B. Besides, for ΔD95% and ΔD98% in VMAT-10B, a large proportion of the plots were below -1.50 Gy (approximately -3.00% of the planned dose) in the posterior and right shifts. For VMAT-5B, almost all the plots were between -1.50 and 0.00 Gy in the three directions. Dose fluctuations in the heart and left lung in VMAT-5B and VMAT-10B are shown in Tables 4 and 5. Only isocenter shifts in the posterior, right, and inferior directions increased dose to the organs, and the differences in VMAT-10B were smaller. For the heart, the average ΔDmean was 0.42 Gy (range 0.10 ~ 0.95 Gy) and 0.20 Gy (range -0.11 ~ 0.72 Gy) in VMAT-5B and VMAT-10B, respectively.
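The normality-gated test choice described at the start of this section (Kolmogorov-Smirnov check, then paired t-test or Wilcoxon signed-rank test) could be scripted as below. This is a generic sketch, not the authors' analysis code, and the significance level of 0.05 is the conventional one assumed above.

```python
import numpy as np
from scipy import stats

def compare_paired(a, b, alpha: float = 0.05):
    """Normality-gated paired comparison of one dosimetric endpoint.

    A Kolmogorov-Smirnov test on the paired differences decides between a
    paired t-test (normal) and a Wilcoxon signed-rank test (non-normal).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = a - b
    # KS test against a normal distribution fitted to the differences.
    _, p_norm = stats.kstest(d, "norm", args=(d.mean(), d.std(ddof=1)))
    if p_norm > alpha:
        name, res = "paired t-test", stats.ttest_rel(a, b)
    else:
        name, res = "Wilcoxon signed-rank", stats.wilcoxon(a, b)
    return name, res.pvalue

# Usage with synthetic per-patient mean heart doses (Gy) for the two groups.
heart_5b = [4.1, 4.5, 4.3, 4.7, 4.2, 4.6, 4.4, 4.0, 4.8, 4.5]
heart_10b = [4.4, 4.8, 4.6, 5.0, 4.5, 4.9, 4.7, 4.3, 5.1, 4.8]
print(compare_paired(heart_5b, heart_10b))
```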
The ΔV20Gy ranged from 0.18% to 2.19% and from 0.00% to 1.97% for VMAT-5B and VMAT-10B, while the ΔV30Gy ranged from 0.11% to 1.29% and from 0.02% to Discussion VMAT has become common in PMRT of breast cancer, and 5 mm and 10 mm boluses are frequently selected to improve superficial target coverage [23,24]. In this study, the effects of bolus thickness on the dose distribution and plan robustness of VMAT in left-sided breast cancer were investigated. We demonstrated that VMAT-10B had better dose coverage in the CTV and homogeneity in the PTV and CTV, whereas VMAT-5B was superior in OARs sparing (Tables 1 and 2). We also found that small setup uncertainties could induce dose deviations in the CTV and OARs in VMAT plans. However, VMAT-5B appeared to be more robust than VMAT-10B, because only slight CTV underdosage and acceptable dose increases to the OARs were noted in this group. Reportedly, couch shifts in patient setup are impossible to eliminate, and the mean magnitudes are generally between 1 mm and 5 mm with imaging guidance [14][15][16]. Similar to previous data [14,16], we found ~3 mm setup errors in all directions (unpublished) in a group of patients who underwent mastectomy and immediate implant-based reconstruction, using weekly cone-beam CT (CBCT). Therefore, setup errors were simulated by shifting the isocenter of VMAT-5B and VMAT-10B by 3 mm along the x, y and z axes. As expected, the errors led to dose fluctuations in the CTV in both VMAT-5B and VMAT-10B. For VMAT-5B, the dose disagreements in CTV D95% and D98% were small, with a maximum variation of -1.05 Gy in D98% when shifting the isocenter in the posterior direction (Table 3). The results were very close to those reported by Jensen et al., who evaluated the plan robustness of VMAT to CBCT-derived setup errors [29]. The authors attributed the limited dosimetric impact of isocenter shifts to the robust optimization function in RayStation [29]. Unfortunately, this tool is not available in Varian's Eclipse for VMAT. Moreover, only isocenter shifts in the posterior, right, and inferior directions compromised target coverage, while setup errors in the other directions induced a slight rise in CTV D95% and D98%. For VMAT-10B, the simulated setup errors led to underdosage in the CTV in all directions. The ΔD95% were between -1.69 and -0.25 Gy and the ΔD98% were between -1.99 and -0.23 Gy (Table 3). Isocenter shifts in the posterior and right directions contributed most to the dose fluctuations. Using the same method, Liao et al. registered an average ΔD95% of -0.6 Gy (range -1.40 ~ -0.10 Gy) and ΔD98% of -1.0 Gy (range -2.80 ~ -0.30 Gy) when applying a 3 mm perturbation to the isocenter of VMAT with 10 mm bolus for left-sided breast cancer PMRT [19]. Their data were of the same order as ours. Fig 3 shows the distribution of ΔD95% and ΔD98% against perturbations in the posterior, right and inferior directions in VMAT-5B and VMAT-10B. Clearly, most of the plots of VMAT-10B in the posterior and right directions were below -1.5 Gy, corresponding to approximately -3.00% of the planned dose, which is clinically unacceptable. For VMAT-5B, the differences were within -3.00% for shifts in all directions. In contrast to our results, Liao et al. reported that setup errors in the anterior and left directions dramatically affected dose coverage in the targets [19]. We believe that the CTV-PTV margin is the main reason for the difference. In this study, the PTV was obtained by uniformly adding a 5 mm margin.
For plan evaluation, the PTV was cropped at least 2 mm from the skin surface and the CTV was restricted to the edge of the PTV in the anterior direction. In the study by Liao et al., a smaller margin (3 mm) was employed to construct the PTV. In addition, no description of cropping of the PTV or CTV was found in their study [19]. Generally, isocenter shifts in the posterior and right directions correspond to moving the target out of the region encompassed by the PD and therefore induce insufficient dose coverage in the target. For VMAT-10B, the extended isodose lines along the anterior chest wall were expected to account for the setup errors in the posterior and right directions. However, the dose fluctuations were pronounced in these directions. Recently, Oliver et al. evaluated skin dose resulting from chest wall irradiation by means of Monte Carlo simulation using tangent and arc source models. They considered different bolus thicknesses and materials and found that a 10 mm tissue-equivalent bolus could cause significant attenuation of the incident photons at angles near 55° by increasing the path length of the incident beams [27]. In this study, oblique incident angles near 55° were registered for several arcs, as shown in Fig 1. We assume that the setup error may significantly change the obliquity and/or path length of incident photons at angles near 55°, which in turn contributes to the pronounced underdosage in the target in VMAT-10B. Lizondo et al. performed a systematic investigation to determine the optimal virtual bolus thickness and Hounsfield unit (HU) value for breast VMAT. The results showed that plans with 5 mm PTV extension, 10 mm virtual bolus and -400 HU were robust to 5 mm isocenter shifts in the breathing direction (3.5 mm along the x and y axes), based on relative differences in D98% (< ±2.0%) and D2% (< ±2.5%) [30]. In this work, the relative differences in D98% and D2% were up to -3.91% and -1.60%, respectively, in VMAT-10B with 3 mm posterior isocenter shifts. It cannot be concluded that VMAT with virtual bolus is more robust than VMAT-10B, since the targets for plan evaluation were cropped 5 mm inwards from the skin surface in the study of Lizondo et al. [30], whereas the targets around the chest wall were only cropped 2 mm inside the body contour in our study. Factors including patient selection, beam arrangement and HU assignment to the bolus could also contribute to the differences between the studies. Liu et al. identified an association between local failures and underdosage caused by setup uncertainties in head and neck cancer patients treated with VMAT. The underdosed volumes were located either at the edge or in the middle of the target [18]. Fig 4 presents the distribution of underdosed volumes in the CTV of a typical patient in posteriorly perturbed VMAT-5B and VMAT-10B. Similarly, setup errors led to insufficient dose coverage not only at the edge but also in the middle of the CTV along the chest wall or near the junction region. Moreover, the underdosed volumes in VMAT-5B primarily overlapped with those in VMAT-10B. This is reasonable because VMAT-5B was recalculated from VMAT-10B without changing any parameters but the bolus thickness. In line with the results in Table 3 and Fig 3, the underdosed volumes in perturbed VMAT-10B were significantly larger than those in perturbed VMAT-5B. For perturbations in the right and inferior directions, similar results were recorded and are shown in S1 and S2 Figs.
It has been suggested that dose differences in normal organs should be evaluated simultaneously with those of the target, since a perturbation with little influence on the target dose distribution might lead to overdose of the adjacent organs [19]. Dose disagreements in the heart and left lung were therefore considered and estimated in this paper. Increased doses were observed with setup errors in the posterior, right, and inferior directions. The differences in the heart were generally milder than those in the left lung in VMAT-5B and VMAT-10B (Tables 4 and 5). The increments in VMAT-5B were higher than those in VMAT-10B. Similar to the changes in the CTV, isocenter shifts in these directions may also affect the obliquity and/or path length of incident beams in VMAT-10B, resulting in lower increments. Nevertheless, all the dosimetric parameters in the perturbed plans were still clinically acceptable, except the mean heart dose of four patients in VMAT-10B and one in VMAT-5B with isocenter shifts in the posterior direction (S2 File). The improved long-term survival of breast cancer patients has encouraged radiation oncologists to fully consider radiation-related toxicities, most notably heart toxicity. Radiation dose to the whole heart and the volume of the organ receiving high dose have been reported as risk factors for heart toxicity [31,32]. According to Darby et al., the risk of major coronary events increases linearly with the mean heart dose at 7.4% per Gy, with no clear threshold [31]. Based on that paper, the estimated increase in ischemic heart disease risk is 7.03% and 5.33% for VMAT-5B and VMAT-10B with posterior perturbations, respectively (7.4% per Gy multiplied by the maximum ΔDmean of 0.95 Gy and 0.72 Gy). The results should be interpreted with caution because the absolute values in perturbed VMAT-5B remained slightly lower than those in perturbed VMAT-10B (S2 File). In another study based on randomized trials, the overall mean heart dose was 4.4 Gy and the estimated absolute risk of cardiac mortality was 0.3% for nonsmoking patients [33]. In this circumstance, the increased absolute risk for the perturbed plans should be very limited, because the patients in our cohort were all nonsmokers and the ΔDmean for the heart was less than 1 Gy (Table 4). QUANTEC suggested V25Gy < 10% to keep the probability of cardiac mortality within 1% in approximately fifteen years after radiation therapy [32]. Given that VMAT significantly reduces high-dose volumes in the heart [9], a stricter dose constraint for the heart, V20Gy < 10%, was utilized during plan optimization. As summarized in Table 2, the V20Gy for the heart in VMAT-5B and VMAT-10B was around 3%. The parameter remained within 10% in the perturbed plans (S2 File), indicating that the risk of cardiac mortality in our patient cohort is acceptable. The dosimetric advantages of multiple partial and split arcs over continuous long arcs have been demonstrated in previous papers from our department [28] and others' [34]. The mean heart dose reported by Lai et al. was 7.3 Gy [28], higher than that reported by Boman et al. (3.9 Gy for left-sided breast cancer with 240° split sub-arcs) [34] and than those in this work (4.43 and 4.71 Gy for VMAT-5B and VMAT-10B, respectively). We assume that the beam arrangement in the paper by Lai et al. may have included more of the heart volume in the treatment fields, thus resulting in a higher mean dose. Therefore, in this study, we improved the beam setting by using six partial arcs to separately cover the lymph nodes and chest wall (Fig 1).
Collimator angles were carefully selected to minimize dose to the adjacent organs. The results from Boman et al. were slightly lower than ours because deep inspiration breath hold (DIBH) was used in several of their patients [34]. We also found that dose to the lungs was reduced in this study compared with that in the paper by Lai et al. [28]. However, dose to the contralateral breast was slightly higher in our work, which may be attributed to the large field width of Arcs 1 and 2, which covered a part of the organ during gantry rotation. Another pitfall of our six-arc VMAT was treatment efficiency, because monitor units were increased from 671 (range 619~695) to 1110 (range 965~1241) on average (S1 Table). The main limitation of this study was the potential inaccuracy of the perturbed dose, because it was directly recalculated on the planning CT images without considering tissue deformations during the treatment course. Two recent studies from one center showed that optimizing breast VMAT with extended PTV and bolus resulted in higher robustness to tissue deformations than optimizing without extension [35,36]. According to Rossi et al., the combination of 5 mm PTV extension and 8 mm optimization bolus was the best choice after exploring the dose distribution in plans with various PTV extensions (0, 5, and 7 mm) and optimization boluses (5, 8, and 10 mm) [35]. In the other study, the authors demonstrated that VMAT with 8 mm optimization bolus was able to account for up to 8 mm of soft tissue deformation [36]. The plans in our study were optimized with 5 mm PTV extension and 10 mm bolus, which is close to the reported combination; thus the dosimetric effect of tissue deformations might be similar to that in the previous publications. To precisely assess dose fluctuations caused by setup errors, recalculation of plans on registered CBCT images is recommended. Considering that six-degree-of-freedom couches have not been universally adopted in clinics, rotational errors are not discussed herein, which might be another limitation of our study. Furthermore, all the acquired data are based on the "one plan solution" for breast cancer, namely, bolus is used throughout the treatment course, which has been routinely used in our department [28] and others' [19,37]. For many other centers with two VMAT plans, one with bolus for a proportion of the fractions and the other without bolus to reduce potential skin toxicity, the applicability of our results requires further confirmation. Finally, the sample size in this study was small. In order to obtain robust results, further investigations with more patients and more detailed considerations are warranted. Conclusions Small setup errors of 3 mm can cause dose fluctuations in the CTV and adjacent organs, including the heart and left lung, in VMAT plans for PMRT of left-sided breast cancer. VMAT-5B results in acceptable dose reductions in the CTV and increments in the OARs when compared with VMAT-10B. Additionally, plans with 5 mm bolus deliver less dose to the OARs with acceptable target coverage and homogeneity. The 5 mm bolus is therefore recommended for breast cancer PMRT with VMAT.
6,286
2023-01-24T00:00:00.000
[ "Medicine", "Physics" ]
Quarkonium measurements in heavy-ion collisions with the STAR experiment In these proceedings, we present the latest measurements of J/ψ and Υ by the STAR experiment. The J/ψ and Υ production measured in p+p collisions provides new baselines for similar measurements in Au+Au collisions, while the measurements in p+Au collisions can help quantify the cold nuclear matter effects. The J/ψ v2 is measured in both U+U and Au+Au collisions to place constraints on the amount of J/ψ arising from recombination of deconfined charm and anti-charm pairs. Furthermore, the nuclear modification factors for ground and excited Υ states as a function of transverse momentum and centrality are presented, and compared to those measured at the LHC as well as to theoretical calculations. Introduction Quarkonium suppression in the medium due to the color screening effect has been proposed as a direct signature of the formation of the Quark Gluon Plasma (QGP) [1]. However, other effects, such as cold nuclear matter (CNM) effects and regeneration of quarkonium states from deconfined heavy quark pairs, give rise to additional complications in the interpretation of the observed suppression. Measurements of the J/ψ elliptic flow (v2) in different collision systems can help disentangle the different sources contributing to the observed J/ψ population. Compared to charmonia, bottomonia receive a smaller regeneration contribution due to the smaller bottom quark cross-section, thus providing a cleaner probe. Furthermore, different bottomonium states, with their different binding energies, are expected to dissociate at different temperatures. Measurements of this "sequential melting" can help constrain the medium temperature. In these proceedings, we present the latest measurements of J/ψ and Υ production via both the dimuon and dielectron decay channels. The dimuon channel measurements are based on data samples triggered by the Muon Telescope Detector (MTD), corresponding to an integrated luminosity of 14.2 nb−1 for Au+Au collisions at √sNN = 200 GeV from the RHIC 2014 run, and integrated luminosities of 122 pb−1 for p+p collisions and 409 nb−1 for p+Au collisions at √sNN = 200 GeV from the RHIC 2015 run. The dielectron channel measurements are based on data triggered by the Barrel ElectroMagnetic Calorimeter (BEMC), corresponding to an integrated luminosity of 1.1 nb−1 for Au+Au collisions at √sNN = 200 GeV from the RHIC 2011 run, and integrated luminosities of 97 pb−1 for p+p collisions and 300 nb−1 for p+Au collisions at √sNN = 200 GeV from the RHIC 2015 run. Results In all figures presented in this section, statistical uncertainties are shown as vertical bars while systematic uncertainties are shown as open boxes or brackets around the data points. Filled boxes around unity represent the global uncertainties. The left panel of Fig. 1 shows the inclusive J/ψ cross-section scaled by the branching ratio B and measured via the dimuon channel in the transverse momentum (pT) range of 1 < pT < 10 GeV/c (red circles) and via the dielectron channel in 0 < pT < 14 GeV/c (blue squares) in p+p collisions at √sNN = 200 GeV. The results are consistent in the overlapping pT range, and can be well described by CGC+NRQCD [2] and NLO NRQCD [3] predictions for prompt J/ψ production at low and high pT, respectively. An improved color evaporation model (ICEM) for direct J/ψ production [4] describes the data at pT < 3 GeV/c, but underestimates the yields at higher pT. The right panel of Fig.
1 shows the RpAu, which quantifies the CNM effects, as a function of pT. The RpAu is less than unity at low pT, and consistent with unity within uncertainties at high pT. The data are compared with theoretical calculations including the nuclear PDF effect only (color bands) [5][6][7] and with an additional effect of nuclear absorption (blue dashes) [8]. The calculation with the additional effect of nuclear absorption is favored by the data. With BEMC-triggered data from 2015, the Υ(1S+2S+3S) production cross-section in p+p collisions at √s = 200 GeV within |y| < 0.5 is measured to be B · dσ/dy = 81 ± 5(stat.) ± 8(syst.) pb, where B is the branching ratio. The new result, of better precision, is consistent with the published STAR result [10], and follows the trend of world-wide experimental data as well as NLO CEM predictions, as can be seen in the left panel of Fig. 3. The right panel of Fig. 3 shows the Υ(1S+2S+3S) RpAu as a function of rapidity. The new results measured with BEMC-triggered data from 2015 (red stars) are consistent with the published RdAu (red circles) but with smaller relative uncertainties. The RpAu within |y| < 0.5 is 0.82 ± 0.10(stat.) +0.08/−0.07 (syst.) ± 0.10(norm.), hinting at sizable CNM effects. The centrality dependence of the combined RAA for Υ(1S) and Υ(2S+3S) is shown in the left and middle panels of Fig. 4, compared with the latest CMS results [12]. Υ(2S+3S) are more suppressed in central collisions than Υ(1S), which is consistent with sequential melting. Comparing RHIC and LHC results, the levels of suppression for Υ(1S) are consistent, while Υ(2S+3S) seem to be less suppressed at RHIC than at the LHC. The right panel of Fig. 4 shows the centrality dependence of the combined Υ(1S) RAA together with three SBS (strongly bound scenario) model calculations, which use an internal-energy-based heavy quark potential, and a WBS (weakly bound scenario) model calculation, which uses a free-energy-based heavy quark potential. The Strickland-Bazow model [13], which takes into account the feed-down contributions from excited bottomonium states, studies both scenarios but includes neither CNM nor regeneration effects. The Liu-Chen model [14] considers only the dissociation of the excited states. The Emerick-Zhao-Rapp SBS model [15] accounts for CNM effects and the regeneration contribution. The SBS models are favored by the data, especially in central collisions. The pT dependence of the RAA for Υ(1S) and Υ(2S+3S) measured via the dimuon channel is shown in the left and right panels of Fig. 5, compared with the latest CMS results [12]. Again, for Υ(1S) the suppression is consistent at RHIC and the LHC, while for Υ(2S+3S) the suppression seems to be stronger at high pT at the LHC.
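For reference, the two observables used throughout these proceedings follow their standard definitions, which the text assumes rather than restates; the forms below are the conventional ones from the heavy-ion literature, not equations reproduced from the proceedings.

```latex
% Nuclear modification factor: yield in A+A relative to the
% binary-collision-scaled yield in p+p (R_pA is defined analogously).
R_{AA}(p_T) = \frac{1}{\langle N_{\mathrm{coll}}\rangle}\,
  \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\mathrm{d}N_{pp}/\mathrm{d}p_T},
% Elliptic flow: second Fourier coefficient of the azimuthal particle
% distribution with respect to the event plane \Psi_2.
\qquad v_2 = \left\langle \cos 2(\varphi - \Psi_2) \right\rangle .
```

With these conventions, R = 1 corresponds to no nuclear modification, and a sizable positive v2 at low pT would signal that the J/ψ inherit the collective flow of thermalized charm quarks via regeneration.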
Summary and Outlook In summary, we present the latest measurements of J/ψ and Υ production in p+p, p+Au and Au+Au collisions at √sNN = 200 GeV and in U+U collisions at √sNN = 193 GeV from the STAR experiment. In p+p collisions, the inclusive J/ψ production cross-section can be described by CGC+NRQCD at low pT and NLO NRQCD at high pT. The new Υ(1S+2S+3S) production cross-section result, with better precision compared to previously published results, follows the trend of world-wide experimental data as well as NLO CEM predictions. In p+Au collisions, the J/ψ RpAu is below unity at low pT and consistent with unity at high pT. The RpAu for Υ within |y| < 0.5 is measured to be 0.82 ± 0.10(stat.) +0.08/−0.07 (syst.) ± 0.10(norm.). In U+U and Au+Au collisions, the J/ψ v2 results are consistent with each other within uncertainties, and consistent with zero for pT > 2 GeV/c, indicating that the regeneration of fully thermalized charm quarks into J/ψ is unlikely to be the dominant source in this kinematic region. For Υ in Au+Au collisions, the excited Υ states are more suppressed than the ground state in central collisions, which is consistent with sequential melting. There is also a hint of less suppression at RHIC than at the LHC for the excited Υ states. The statistical precision of quarkonium measurements in Au+Au collisions can be further improved by including a similar amount of additional data on tape. Figure 1. Left: Inclusive J/ψ cross-section in p+p collisions at √sNN = 200 GeV measured via the dimuon (red circles) and dielectron channels (blue squares) as a function of pT. Right: Inclusive J/ψ RpAu as a function of pT, compared with theoretical calculations [5-8]. Figure 2 shows the J/ψ v2 as a function of pT. The new result for U+U collisions at √sNN = 193 GeV (red circles) from 2012 data is consistent within error bars with the result for Au+Au collisions at √sNN = 200 GeV (black circles) from 2010 and 2011 data [9]. For pT > 2 GeV/c, v2 is consistent with zero within uncertainties, indicating that the contribution from regeneration of fully thermalized charm quarks to J/ψ is likely to be small. Figure 3.
Left: Υ(1S+2S+3S) production cross-section measured in p+p collisions at √s = 200 GeV (red star) and √s = 500 GeV (blue star) compared with world-wide data and NLO CEM predictions [11]. Right: the Υ(1S+2S+3S) RpAu (red stars) as a function of rapidity compared to STAR and PHENIX published RdAu results. The Υ(1S+2S+3S) RAA in Au+Au collisions at √sNN = 200 GeV has been measured via the dimuon channel with MTD-triggered data from 2014 and via the dielectron channel with BEMC-triggered data from 2011. Since the nuclear modification factors in both channels are consistent with each other within uncertainties, they are combined to improve the precision. Figure 4. Left and middle: centrality dependence of the combined RAA for Υ(1S) and Υ(2S+3S), compared with the latest CMS results [12]. Right: the combined RAA of Υ(1S) within |y| < 0.5 compared to different model calculations [13][14][15]. Figure 5. RAA of Υ(1S) (left panel) and Υ(2S+3S) (right panel) at mid-rapidity as a function of pT in Au+Au collisions at √sNN = 200 GeV (red stars) compared with CMS results for Pb+Pb collisions at √sNN = 2.76 TeV (black diamonds).
2,325.4
2018-02-01T00:00:00.000
[ "Physics" ]
Flavourful Axion Phenomenology We present a comprehensive discussion of the phenomenology of flavourful axions, including both standard Peccei-Quinn (PQ) axions, associated with the solution to the strong $CP$ problem, and non-standard axion-like particles (ALPs). We give the flavourful axion-fermion and axion-photon couplings and calculate the branching ratios of heavy meson ($K$, $D$, $B$) decays involving a flavourful axion. We also calculate the mixing between axions and heavy mesons $ K^0 $, $ D^0 $, $ B^0 $ and $ B_s^0 $, which affects the meson oscillation probability and mass difference. Mixing also contributes to meson decays into axions and axion decays into two photons, and may be relevant for ALPs. We discuss charged lepton flavour-violating decays involving final state axions of the form $\ell_1 \to \ell_2 a (\gamma) $, as well as $ \mu \to eee $ and $ \mu-e $ conversion. Finally we describe the phenomenology of a particular "A to Z" Pati-Salam model, in which PQ symmetry arises accidentally due to discrete flavour symmetry. Here all axion couplings are fixed by a fit to flavour data, leading to sharp predictions and correlations between flavour-dependent observables. Introduction One of the puzzles of the Standard Model (SM) is why QCD does not appear to break CP symmetry. The most popular resolution of this so-called "strong CP problem" is to postulate a Peccei-Quinn (PQ) symmetry, namely a QCD-anomalous global U(1) symmetry which is broken spontaneously, leading to a pseudo-Nambu-Goldstone boson (pNGB) called the QCD axion [1][2][3]. The two most common approaches to realising such a PQ symmetry are either to introduce heavy vector-like quarks (the KSVZ model) [4,5] or to extend the Higgs sector (the DFSZ model) [6,7]. The resulting QCD axion provides a candidate for dark matter [8][9][10] within the allowed window of the axion (or PQ symmetry-breaking) scale f_a = 10^9-10^12 GeV [11]. It has also been realised that the PQ axion need not emerge from an exact global U(1) symmetry, but could result from some discrete symmetry or continuous gauge symmetry leading to an accidental global U(1) symmetry. Considering the observed accuracy of strong-CP invariance, it is enough to protect the PQ symmetry up to some higher-dimensional operators [12][13][14]. In this regard, it is appealing to consider an approximate PQ symmetry guaranteed by discrete (gauge) symmetries [15][16][17][18][19][20][21]. Alternatively, attempts to link PQ symmetry protected by continuous gauge symmetries to the flavour problem were made in [22,23]. It is possible that PQ symmetry arises from flavour symmetries [24], linking the axion scale to the flavour symmetry-breaking scale, and various attempts have been made to incorporate such a flavourful PQ symmetry as part of continuous flavour symmetries [25][26][27][28][29][30][31][32][33]. It is also possible that PQ symmetry could arise accidentally from discrete flavour symmetries [34][35][36][37], as recently discussed [38] in the "A to Z" Pati-Salam model [39], where quarks and leptons are unified. This is difficult to achieve in a grand unified theory (GUT) based on SO(10) [40], which otherwise presents a stronger case for unification. 1 Recent efforts have been made [29,30,48] to unify the U(1)_PQ symmetry with a Froggatt-Nielsen-like U(1) flavour symmetry [49]. The resultant axion is variously dubbed a "flaxion" or "axiflavon"; we shall refer simply to a "flavourful axion".
In this paper we focus on the phenomenology of flavourful axions, including both standard PQ axions, associated with the solution to the strong CP problem, and non-standard axion-like particles (ALPs) (see e.g. [50]). For a complementary analysis of ALP signatures and bounds at the LHC, see [51]. We present the flavourful axion-fermion and axion-photon couplings both for the standard axion and for ALPs, and show that they quite naturally are non-diagonal. We use these couplings to calculate the branching ratios for two-body decays of the heavy mesons K, D, and B involving a flavourful axion. Moreover, we calculate the mixing between axions and the neutral hadronic mesons K0, D0, B0 and B0s and its consequences, which has not been discussed in the literature before. These can lead to new contributions to neutral meson mass splittings, meson decays into axions and axion decays into two photons, which may be relevant for ALPs. We also discuss lepton decays involving final state axions, including two-body decays ℓ1 → ℓ2 a and radiative decays ℓ1 → ℓ2 aγ, as well as µ → eee and µ − e conversion. Finally we describe the phenomenology of the A to Z Pati-Salam model, which predicts a flavourful axion [38], and show how unification leads to correlations between different flavour-dependent observables, as the down-type quark and charged lepton couplings are very similar. Notably, as the axion arises from the same flavon fields that dictate the fermion Yukawa structures, no additional field content is necessary to solve the strong CP problem, and all axion couplings are fixed by a fit to quark and lepton masses and mixing. The layout of the remainder of the paper is as follows. Section 2 describes the flavourful axion-fermion and axion-photon couplings both for the standard axion and for ALPs. In Section 3 we apply these couplings to calculate the branching ratios of heavy meson decays involving a flavourful axion. Section 4 discusses the mixing between axions and neutral mesons while Section 5 discusses lepton decays. Section 6 focusses on the phenomenology of the A to Z model, which predicts correlations between different flavour-dependent observables, and Section 7 concludes. Appendix A gives more details about axion-meson mixing. Appendix B details the calculation of the heavy meson branching ratios. Appendix C shows the derivation of the couplings in the A to Z Pati-Salam model and Appendix D tabulates the numerical fit to flavour data. 1 These ideas should not be confused with alternatives to PQ symmetry, such as Nelson-Barr type resolutions to the strong CP problem [41][42][43][44], or GUT models where specific Yukawa structures have been proposed [45][46][47]. 2 Axion couplings to matter 2.1 Lagrangian Relevant to a discussion of axion-fermion interactions is the Lagrangian L = L_kin + L_m + L_∂ + L_anomaly, (2.1) where L_kin contains the kinetic terms, L_m the fermion mass terms, L_∂ the axion derivative couplings to matter, and L_anomaly the QCD and electromagnetic anomalies. In the physical (mass) basis below the electroweak symmetry-breaking scale, we have L_∂ + L_anomaly = (∂_µ a / 2v_PQ) Σ_f f̄_i γ^µ (V^f_ij + A^f_ij γ_5) f_j + (α_s/8π)(a/f_a) G^a_µν G̃^{aµν} + c_aγ (α_em/8π)(a/f_a) F_µν F̃^{µν}, (2.2) with the axion decay constant f_a = v_PQ/N_DW defined in terms of the PQ-breaking scale v_PQ and the anomaly (or domain wall) number N_DW. The axion-photon coupling is discussed in Section 2.3 below. The physical masses m_fi are defined by m_fi = (U†_Lf M_f U_Rf)_ii, in terms of the mass matrix in the weak basis, M_f, and unitary matrices U_Lf, U_Rf which transform left- and right-handed fields, respectively.
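Before giving the couplings, it helps to see in a two-generation toy example (our illustration, not taken from the paper) how the rotation to the mass basis generates off-diagonal axion couplings from non-universal PQ charges. For a charge matrix x = diag(x1, x2) and a single mixing angle θ,

```latex
U = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix},
\qquad
X = U^\dagger \,\mathrm{diag}(x_1, x_2)\, U
  = \begin{pmatrix}
      x_1 \cos^2\theta + x_2 \sin^2\theta & (x_1 - x_2)\sin\theta\cos\theta \\
      (x_1 - x_2)\sin\theta\cos\theta & x_1 \sin^2\theta + x_2 \cos^2\theta
    \end{pmatrix}.
```

Universal charges (x1 = x2) make the off-diagonal entry vanish, anticipating the statement below that flavour violation requires charge matrices not proportional to the identity.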
The vector and axial couplings are given by V^f = U†_Rf x_fR U_Rf + U†_Lf x_fL U_Lf and A^f = U†_Rf x_fR U_Rf − U†_Lf x_fL U_Lf, (2.3) where x_fL, x_fR are the fermion PQ charges in the left-right (LR) basis, 2 written here as (diagonal) matrices. As x_fL, x_fR are real, V^f and A^f (as well as the chiral coupling matrices X_{L,R} = U†_{L,Rf} x_{fL,fR} U_{L,Rf}) are Hermitian. In this formulation, the implications of the flavour structure are clear. If all generations of a fermion couple equally to the axion, the charge matrices x_fL, x_fR are proportional to the identity and there is no flavour violation. In standard axion models, e.g. DFSZ, charges can be assigned such that x_fL = −x_fR and the axion couples only via A^f; this is generally not true in flavoured axion models. Meanwhile, if x_fL = x_fR, the U(1)_PQ transformation is not chiral (N_DW = 0), the Goldstone field a does not couple to the QCD anomaly, the strong CP problem is not solved, and a is then interpreted as an ALP. 3 However, as long as x_fL, x_fR are not proportional to the identity I_3, we still get flavour-violating (vector and axial) interactions due to the weak mixing encoded in U_Lf, U_Rf. 2.2 Physical axion basis The above Lagrangian describes an interacting axion, not necessarily in its mass eigenstate. The off-diagonal couplings to fermions are nevertheless V^f and A^f for the physical axion, as we will see. Unlike standard DFSZ models with PQ-charged Higgs doublets, our flavoured axion does not mix with the longitudinal component of the Z boson. We still need to identify the physical axion at low energy as the state orthogonal to the π0 and η mesons. One can then determine the canonical axion mass and couplings [52][53][54]. Let us briefly summarize how this works, following the prescription e.g. in [11]. The axion mass generated by the QCD anomaly coupling in Eq. 2.2 is conveniently calculated by rotating away the anomaly via chiral transformations of the light quarks (q = u, d, s), q → e^{iγ_5 (a/2f_a) Q_a} q with tr Q_a = 1. (2.4) This leads to a low-energy effective Lagrangian below the chiral symmetry-breaking scale (Eq. 2.5). Using the relation ⟨ū_L u_R⟩ = ⟨d̄_L d_R⟩ = m²_π f²_π/(m_u + m_d), the axion-pion mixing term vanishes. We identify the state a in Eq. 2.5 as the physical axion and extract its mass, m²_a = (m_u m_d)/(m_u + m_d)² · m²_π f²_π/f²_a. (2.6) There remains additional mixing with heavier mesons such as η′, which provides further small corrections. A precise calculation performed in [55] gives m_a = 5.70(6)(4) (10¹² GeV/f_a) µeV. (2.7) The transformation in Eq. 2.4 also affects the axion-quark couplings. For example, for the u, d and s quarks, the axion-quark Lagrangian in Eq. 2.2 is transformed into the physical basis, Eq. 2.8. We see that the diagonal couplings are modified by an amount proportional to N_DW, whereas the off-diagonal couplings are unchanged. Physically, this is a consequence of the QCD anomaly being flavour-conserving, and hence unable to mediate flavour-violating interactions that contribute to c_sd. The above discussion identifies the physical axion basis in the limit of no kinetic mixing between the axion and heavier mesons. Such mixing, induced by the effective Lagrangian in Eq. 2.8, needs to be further diagonalized away to obtain the physical axion basis. This will be discussed in detail in Section 4 and Appendix A. The kinetic mixing contribution is negligibly small for the standard QCD axion with m_a ≪ m_π and f_a ≫ f_π, but can be important for an ALP. 2.3 Decay constant and axion-photon coupling In standard axion scenarios, the decay constant f_a is defined by v_PQ/N_DW, where N_DW is the QCD anomaly number. Provided the U(1)_PQ symmetry is broken by the VEV of a single field φ with PQ charge x_φ, we simply have v_PQ = x_φ v_φ.
In more general models, where several fields $\phi_i$ contribute to symmetry breaking, we define $v_{PQ}^2 = \sum_i x_{\phi_i}^2 v_{\phi_i}^2$. We will encounter exactly this scenario when discussing the A to Z model presented in Section 6.

The axion-photon coupling $a F\tilde F$ defined in Eq. 2.2 is given in terms of the electromagnetic anomaly number E, through the coefficient $c_{a\gamma} = E/N_{DW} - 1.92$. In unified models, such as the A to Z model with Pati-Salam unification presented in Section 6, the ratio of anomaly numbers is fixed to $E/N_{DW} = 8/3$, giving $c_{a\gamma} \approx 0.75$.

3 Heavy meson decays

The flavour-changing vector couplings in $L_\partial$ may lead to observable decays of heavy mesons into axions. A general study of such flavour-changing processes involving a (massless) Nambu-Goldstone boson was made in [56], which is applicable to our flavourful axion. For a two-body decay $P \to P' a$ of a heavy meson $P = (q_P \bar q\,')$ into $P' = (q_{P'} \bar q\,')$, the branching ratio is given by

${\rm Br}(P \to P' a) = \frac{m_P^3}{16\pi\, v_{PQ}^2\, \Gamma_P}\, |V^f_{q_P q_{P'}}|^2\, f_+(0)^2 \left( 1 - \frac{m_{P'}^2}{m_P^2} \right)^3$, (3.1)

with $V^f$ as defined in Eq. 2.3 and $\Gamma_P$ the total width of P. Its indices $q_P q_{P'}$ relate to the constituent quarks, e.g. a $K^+ \to \pi^+ a$ decay proceeds via $\bar s \to \bar d\, a$ with coupling strength $V^d_{sd} \equiv V^d_{21}$. For completeness, a rederivation of Eq. 3.1 is provided in Appendix B. It depends on a form factor $f_+(q^2)$ encapsulating hadronic physics, where $q = p_a = p_P - p_{P'}$ is the momentum transfer to the axion. The lightness of the axion means we can safely take the limit $q^2 \to 0$. For kaon decays, $f_+(0) \approx 1$ to good approximation. For heavier mesons, we use results from lattice QCD [57], summarised in Table 1.

Table 1. Form factors $f_+(0)$ extracted from [57] for K, D and B decays.

B and $B_s$ decays. B physics has a rich phenomenology, and has recently attracted particular interest due to persistent anomalies in observed semileptonic B decays at the LHC, which may be evidence for charged lepton flavour violation (cLFV) [70]. Rare B decays of the type $B \to \pi(K)\nu\bar\nu$, while generally not as tightly constrained as those for kaons, may also provide insights into new physics. A dedicated search for decays like $B \to \pi(K) a$ with a light invisible particle a was made by CLEO, which collected $10^7$ $B\bar B$ pairs throughout its lifetime. It provides the limits ${\rm Br}(B^\pm \to \pi^\pm(K^\pm) a) < 4.9 \times 10^{-5}$ and ${\rm Br}(B^0 \to \pi^0(K^0) a) < 5.3 \times 10^{-5}$ at 90% CL [71]. More recent and powerful experiments, namely BaBar and Belle, have not yet provided limits on this exact process. However, we may estimate their experimental reach from the stated limits on the decays $B \to \pi(K)\nu\bar\nu$, which are typically $O(10^{-5})$ (see Table 2), an improvement of approximately one order of magnitude. The upgraded experiment Belle-II at SuperKEKB is expected to collect approximately $N = 5 \times 10^{10}$ $B\bar B$ pairs, improving the limits on many rare decays [72]; assuming the sensitivity scales as $\sqrt{N}$, we may expect an $O(10^2)$ improvement in branching ratio limits.

It is worth noting that the decay $B^0 \to \pi^0 a$, predicted by flavoured axion models, has not been analysed explicitly by experiments. However, some information may be gleaned from searches for the SM process $B^0 \to \pi^0 \nu\bar\nu$, which is a background to the axion signal. Generically, any bound on the SM decay will translate into a bound as strong (or stronger) on the two-body decay to an axion. Finally, we remark that decays of the form $B_s^0 \to K^0 a$ and $B_s^0 \to \eta(\eta') a$ are also allowed, but no meaningful experimental information is available.

D and $D_s$ decays. Little is said in the literature about decays of charmed mesons of the form $D^{\pm,0} \to \pi^{\pm,0} a$ or $D_s^\pm \to K^\pm a$, or the corresponding decays involving a $\nu\bar\nu$ pair.
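Before moving on, a two-line numerical check of the photon-coupling coefficient and the statistical scaling discussed above may be useful. The constant 1.92 enters via the chiral correction in the reconstructed coefficient, and the $\sqrt{N}$ scaling is the naive statistics-only assumption stated in the text.

```python
import math

# Sketch: c_agamma = E/N_DW - 1.92 (as reconstructed above), and the naive
# sqrt(N) scaling of branching-ratio limits with collected B-Bbar statistics.

def c_a_gamma(E_over_N: float) -> float:
    return E_over_N - 1.92

print(f"Pati-Salam E/N = 8/3 -> c_agamma = {c_a_gamma(8/3):.2f}")  # ~0.75, as in the text

N_cleo, N_belle2 = 1e7, 5e10          # B-Bbar pairs (CLEO, Belle-II projection)
improvement = math.sqrt(N_belle2 / N_cleo)
print(f"Expected improvement over CLEO limits: ~{improvement:.0f}x")  # O(10^2)
```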
The branching ratio for $D \to \pi(K) a$ may be easily calculated using the same formulas as for K and B decays, given above. The trivial requirement that ${\rm Br}(D \to \pi(K) a) < 1$ allows us to place weak bounds on $v_{PQ}$ of O(100) TeV, but without an experimental probe, little more can be said. As we will show below, the predicted branching ratios are in any case expected to be rather small when compared to K and B decays, whose corresponding branching ratios are approximately three and one orders of magnitude greater, respectively. In conclusion, while further experimental probes of D decays are of course welcome, they are not expected to be more sensitive to flavoured axions than other decays. On the other hand, in flavoured axion scenarios only D decays can probe the up-type quark Yukawa matrix.

Bounds. Ultimately the experimental data can be used to constrain the ratio $|V^f_{q_P q_{P'}}|/v_{PQ}$ for a given decay. Collecting terms in Eq. 3.1, we define a branching ratio coefficient $\tilde c_{P \to P'}$, which depends only on hadronic physics, by

${\rm Br}(P \to P' a) = \tilde c_{P \to P'}\, |V^f_{q_P q_{P'}}|^2 \left( \frac{10^{12}\ {\rm GeV}}{v_{PQ}} \right)^2$. (3.2)

The values of $\tilde c_{P \to P'}$ are tabulated in Table 2, along with experimental limits on the branching ratio and the corresponding bound on $v_{PQ}$, where available. D, $D_s$ and $B_s$ decays have no experimental constraints; however, we can compute the numerical coefficients $\tilde c$, which are all $O(10^{-14} - 10^{-13})$. These are also given in Table 2.

Table 2. Branching ratios (upper limits) and corresponding bounds (lower limits) on the PQ-breaking scale $v_{PQ}$ from flavour-violating meson decays. Bold font marks the current best limit from searches for $P \to P' a$, while parentheses mark the bound on the rare decay $P \to P' \nu\bar\nu$, which should be comparable. Asterisks (*) mark the expected reach of current or planned experiments.

4 Axion-meson mixing

In this section we discuss the mixing between axions and neutral hadronic mesons, and its impact on the meson oscillation probabilities. Such a mixing effect can also lead to new contributions to both meson decays into axions and axion decays into two photons. Although the mixing effect will turn out to be negligible for PQ axions which solve the strong CP problem, it may be relevant for non-standard axions such as ALPs. Readers who are not interested in ALPs may skip this section, since it will not lead to any competitive bounds on PQ axions.

Parametrisation of mixing. Axion-quark couplings in the mass-diagonal basis were discussed in Section 2.2. Relevant to meson mixing are the derivative terms in $L_\partial$ involving the light quarks (Eq. 4.1). These derivative couplings translate into effective axion-meson kinetic mixing terms of the form $\eta_P\, \partial_\mu a\, \partial^\mu P$ (Eq. 4.2), where $f_P$ is the meson decay constant for $P = \pi^0, \eta, \eta', K^0, \bar K^0$, and where $\eta_P \equiv c_P f_P / v_{PQ}$. This is naturally generalised to include also mesons containing c and b quarks. For a QCD axion with $m_a \ll m_P$ and $f_a \gg f_P$, there is almost no impact on the standard meson dynamics. However, the results are valid for generalised ALPs, where the effect may be detectable.

Meson mass splitting. Axions and ALPs with off-diagonal quark couplings will mediate mixing between a heavy neutral meson $P^0$ ($P = K, D, B$, or $B_s$) and its antiparticle $\bar P^0$, in addition to that from weak interactions. An explicit calculation, showing how axion interactions yield an additional contribution to meson mass splittings, is given in Appendix A. We quote the result, namely that

$(\Delta m_P)_{\rm axion} = |\eta_P|^2\, m_P$.

The total mass difference is then given by $\Delta m_P = (\Delta m_P)_{\rm SM} + (\Delta m_P)_{\rm axion}$.
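As a numerical illustration of the branching-ratio coefficients defined above, the following sketch computes $\tilde c$ for $K^+ \to \pi^+ a$ from Eq. 3.1 (as reconstructed) and converts an assumed branching-ratio limit into a bound on $v_{PQ}/|V^d_{21}|$. The masses, lifetime, and the illustrative limit $7.3 \times 10^{-11}$ are standard PDG-style inputs inserted here as assumptions, not values quoted from Table 2.

```python
import math

# Sketch of the reconstructed Eqs. 3.1-3.2 for K+ -> pi+ a: compute the hadronic
# coefficient c~ and turn a branching-ratio limit into a bound on v_PQ / |V^d_21|.

hbar_GeV_s = 6.582e-25
m_K, m_pi = 0.4937, 0.1396          # GeV (assumed PDG-style inputs)
tau_K = 1.238e-8                    # s, K+ lifetime (assumed)
f_plus0 = 1.0                       # form factor f_+(0) ~ 1 for kaons

Gamma_K = hbar_GeV_s / tau_K        # total width in GeV

def br_coefficient(m_P, m_Pp, Gamma_P, f0=1.0, vref=1e12):
    """c~ such that Br = c~ * |V|^2 * (1e12 GeV / v_PQ)^2."""
    phase = (1 - m_Pp**2 / m_P**2) ** 3
    return m_P**3 * f0**2 * phase / (16 * math.pi * vref**2 * Gamma_P)

c_tilde = br_coefficient(m_K, m_pi, Gamma_K, f_plus0)
br_limit = 7.3e-11                  # assumed K+ -> pi+ a limit, for illustration
v_bound = 1e12 * math.sqrt(c_tilde / br_limit)
print(f"c~(K+ -> pi+ a) ~ {c_tilde:.2e}")
print(f"v_PQ / |V^d_21| >~ {v_bound:.1e} GeV")   # O(10^11-10^12) GeV
```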
As an example, consider the effect of axion-kaon mixing on the $K_L^0 - K_S^0$ mass difference, experimentally measured to be $(\Delta m_K)_{\rm exp} = (3.484 \pm 0.006) \times 10^{-12}$ MeV [78]. The SM prediction is dominated by theory uncertainty, which may be large [79]; near-future lattice calculations aim to reduce the error on $\Delta m_K$ to O(20%) [80], with further improvements from next-generation machines. As a conservative estimate, we shall only demand that the axion contribution to any $\Delta m_P$ is not larger than the experimental central value. We then have $|\eta_{K^0}| \lesssim 8 \times 10^{-8}$, which (assuming $c_{K^0} \approx 1$) corresponds to the bound $v_{PQ} \gtrsim 2 \times 10^6$ GeV. Similar results for D, B and $B_s$ mixing are tabulated in Table 3. Belle-II is expected to improve the sensitivity to $D^0 - \bar D^0$ mixing by about one order of magnitude with the full 50 ab$^{-1}$ of data [81].

Table 3. Limits on $v_{PQ}$ from contributions to neutral meson mass differences. Measured values of $\Delta m_P$ are given in the PDG [78]. Meson decay constants $f_{P^0}$ are extracted from global averages given in [82].

Axion-pion mixing and ALPs. We have seen that axion-meson kinetic mixing can affect the oscillation probability (and thereby the mass difference) of neutral heavy mesons, arising from off-diagonal quark couplings of axions. In this subsection, we will see that even flavour-diagonal couplings can lead to interesting consequences. As shown in Eqs. 4.1 and 4.2, there arises in particular axion-pion kinetic mixing, as a consequence of which the physical $\pi^0$ contains a small admixture of the nominal axion and vice versa. This induces axion contributions to any process normally involving $\pi^0$. Kinetic diagonalisation (as in Eq. 4.3) induces mass couplings between a and $\pi^0$, with strength set by $\eta_{\pi^0} = c_{\pi^0} f_\pi / v_{PQ} = \tilde c_{\pi^0} f_\pi / f_a$, where $\tilde c_{\pi^0} \equiv c_{\pi^0}/N_{DW}$. This mixing is subsequently diagonalised by a $2\times2$ rotation through an angle $\theta_\pi$ (Eq. 4.6). Starting from the canonical physical basis in Eq. 4.1, the physical basis accounting also for kinetic mixing is thus obtained by the field transformations

$a \to \cos\theta_\pi\, a + \sin\theta_\pi\, \pi^0$, $\quad \pi^0 \to \cos\theta_\pi\, \pi^0 - \sin\theta_\pi\, a$. (4.7)

To leading order in $\eta_{\pi^0}$, we obtain the expressions in Eq. 4.8. For a QCD axion with $m_a \ll m_{\pi^0}$ and $\eta_{\pi^0} \ll 1$, its contribution to the physical pion is vanishingly small. However, this mixing may be interesting for more general ALPs, where the mass and decay constant are not necessarily correlated.

The axion-meson mixing effect discussed above can modify decays of heavy mesons to lighter mesons plus an axion, as well as the decay of an axion to two photons. The basic idea is very simple: in the standard hadronic decay of a heavy meson into two pions, one of the neutral pions in the final state can convert into an axion via the mixing effect discussed above, leading to a final state containing an axion. Similarly, the standard decay of a neutral pion into two photons can also mediate the decay of an axion into two photons. Applying Eq. 4.8 to an ALP, still denoted by a, perhaps the most interesting processes induced by mixing are $K^+ \to \pi^+ a$ and $a \to \gamma\gamma$. Considering only the mixing-induced effect, we have

${\rm Br}(K^+ \to \pi^+ a)_{\rm mix} = \sin^2\theta_\pi\ {\rm Br}(K^+ \to \pi^+ \pi^0)$. (4.9)

Taking the ballpark of ${\rm Br}(K^+ \to \pi^+ a) \lesssim 10^{-10}$ listed in Table 2 and ${\rm Br}(K^+ \to \pi^+\pi^0) = 20.67\%$, we find a mass-dependent bound (Eq. 4.10) which is applicable for $m_a \lesssim 110$ MeV. Similarly, one finds the mixing-induced axion decay to photons, $\Gamma(a \to \gamma\gamma)_{\rm mix} = \sin^2\theta_\pi\, \Gamma(\pi^0 \to \gamma\gamma)$. In the SM with massless valence quarks and $N_C = 3$ colours, we have [83] $\Gamma(\pi^0 \to \gamma\gamma) = \alpha^2 m_\pi^3 / (64\pi^3 f_\pi^2)$. The standard form of the axion-photon coupling, $\frac{1}{4} g_{a\gamma}\, a F\tilde F$, gives $\Gamma(a \to \gamma\gamma) = \frac{1}{64\pi}\, g_{a\gamma}^2\, m_a^3$. We may then write the mixing-induced axion-photon coupling as in Eq. 4.13.
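Returning to the meson mass splittings above, a minimal numerical check reproduces the quoted kaon bound, using $(\Delta m_P)_{\rm axion} = |\eta_P|^2 m_P$ with $\eta_K = c_K f_K / v_{PQ}$ and $c_K \approx 1$ assumed; the kaon decay constant is an assumed standard input.

```python
import math

# Sketch: bound on v_PQ from the axion contribution to the K_L - K_S mass difference.

m_K0 = 497.611        # MeV, neutral kaon mass
dm_exp = 3.484e-12    # MeV, measured K_L - K_S mass difference (from the text)
f_K = 155.7           # MeV, kaon decay constant (assumed input)

eta_max = math.sqrt(dm_exp / m_K0)       # demand (dm)_axion < (dm)_exp
v_PQ_min = f_K / eta_max / 1e3           # convert MeV -> GeV
print(f"|eta_K| <~ {eta_max:.1e}")       # ~8e-8, as quoted
print(f"v_PQ >~ {v_PQ_min:.1e} GeV")     # ~2e6 GeV, as quoted
```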
The bound in Eq. 4.10 then corresponds to a limit on $(g_{a\gamma})_{\rm mix}$, given in Eq. 4.14. Extensive studies of ALPs cover a wide range of this parameter space. Let us finally note that the E787 experiment searched for $K^+ \to \pi^+ a$ followed by $a \to \gamma\gamma$ in the range $m_a = 5-100$ MeV [85]. Combining the two expressions in Eqs. 4.9 and 4.13, the E787 result gives (for $m_a = 10-96$ MeV) the bound

$(g_{a\gamma})_{\rm mix} \lesssim 5 \times 10^{-5}\ {\rm GeV}^{-1}$, (4.16)

which is less stringent than Eq. 4.14.

5 Lepton decays

Two-body lepton decays of the form $\ell_1 \to \ell_2 a$ follow analogously to meson decays, with the notable difference that both axial and vector couplings contribute, since the decaying particle has non-zero spin. We define a total coupling $C^e_{\ell_1\ell_2}$ by

$|C^e_{\ell_1\ell_2}|^2 = |V^e_{\ell_1\ell_2}|^2 + |A^e_{\ell_1\ell_2}|^2$. (5.1)

As done for mesons in Eqs. 3.2-3.3, the branching ratio may once again be written in terms of a coefficient $\tilde c_{\ell_1 \to \ell_2}$, by

${\rm Br}(\ell_1 \to \ell_2 a) = \tilde c_{\ell_1 \to \ell_2}\, |C^e_{\ell_1\ell_2}|^2 \left( \frac{10^{12}\ {\rm GeV}}{v_{PQ}} \right)^2$. (5.2)

These are evaluated, with corresponding limits placed on $v_{PQ}$, for the three possible lepton decays. The results are tabulated in Table 4. The most interesting of these is $\mu^+ \to e^+ a$, for which the SM background consists almost entirely of ordinary β decay, $\mu^+ \to e^+ \nu\bar\nu$. The muon decay width $\Gamma_\mu$ is given to good approximation by $\Gamma_\mu \simeq \Gamma(\mu^+ \to e^+\nu\bar\nu) \simeq G_F^2 m_\mu^5/(192\pi^3)$. Assuming $\mu^+ \to e^+ a$ decays are isotropic, i.e. the decay is purely vectorial (or axial), the experiment at TRIUMF provides the limit ${\rm Br}(\mu^+ \to e^+ a) < 2.6 \times 10^{-6}$ [86], corresponding to $v_{PQ}/|V^e_{21}|$ (or $|A^e_{21}|$) $> 5.5 \times 10^9$ GeV. They searched specifically for decays with an angular acceptance $\cos\theta > 0.975$, where θ is the positron emission angle; in this region SM three-body decays are strongly suppressed. The TWIST experiment [87] has performed a broader search, accommodating non-zero anisotropy A as well as massive bosons, but is less sensitive for isotropic decays in the massless limit. The limits for isotropic (A = 0) and maximally anisotropic (A = ±1) decays are given in Table 4.

Let us sketch the angular dependence of $\mu \to e a$ decays, which are not generally isotropic, as these relate to TWIST; the formulas generalise immediately to τ decays. Consider a $\mu^+$ with polarisation four-vector $\eta^\mu = (0, \boldsymbol\eta)$ decaying into a positron with helicity $\lambda_e = \pm1$ and momentum $\mathbf{k}_e$, as well as an axion. Neglecting $m_e$ and $m_a$, the squared amplitude depends on the angle $\vartheta_{\eta e}$ between $\boldsymbol\eta$ and $\mathbf{k}_e$, with $\eta \cdot k_e = -|\mathbf{k}_e| \cos\vartheta_{\eta e}$. We can describe the degree of muon polarisation $P_\mu$ as the projection of $\boldsymbol\eta$ onto the beam direction $\hat z$, i.e. $P_\mu \simeq \cos\vartheta_{\eta z} = \boldsymbol\eta \cdot \hat z / |\boldsymbol\eta|$. For a more precise treatment one should consider the distribution of $\boldsymbol\eta$ in a muon ensemble, but as we shall assume all muons are highly polarised opposite to the beam direction, i.e. $P_\mu \sim -1$, this is sufficient for our purposes. TWIST measures the positron emission angle $\theta = \vartheta_{\eta z} - \vartheta_{\eta e}$; for highly polarised muons, we have $\cos\vartheta_{\eta e} \simeq P_\mu \cos\theta$. Summing over $\lambda_e$, the differential decay rate is given by

$\frac{d\Gamma(\mu \to e a)}{d\cos\theta} = \frac{\Gamma_{\mu \to e a}}{2}\left(1 - A\, P_\mu \cos\theta\right)$, (5.5)

where we define the anisotropy

$A = -\frac{2\,{\rm Re}\!\left(V^{e*}_{21} A^e_{21}\right)}{|V^e_{21}|^2 + |A^e_{21}|^2}$. (5.6)

The limiting cases are $A^e_{21} = V^e_{21}$, giving A = −1 (corresponding to an SM-like V−A current interaction), or $A^e_{21} = -V^e_{21}$, giving A = 1 (a V+A interaction). The signal strength with respect to the SM background is maximised for A = 1, particularly in the region with $\cos\theta \sim 1$. The A to Z model, discussed below, predicts exactly this scenario, although the high predicted PQ scale $v_{PQ} \sim 10^{12}$ GeV implies the signal is very small despite the enhancement. Finally, the Mu3e experiment, primarily designed to look for $\mu \to eee$ (discussed below), can also be used to search for $\mu \to e a$, tentatively probing scales up to $v_{PQ} \sim 10^{10}$ GeV [88] by the end of its run.
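As a minimal sketch of the angular distribution reconstructed in Eqs. 5.5-5.6 above (sign conventions are my reconstruction and should be checked against the original derivation), the following evaluates the positron distribution for the three limiting anisotropies with muons polarised opposite the beam.

```python
import numpy as np

# Sketch: dGamma/dcos(theta) ~ (1 - A * P_mu * cos(theta)) / 2 for mu -> e a,
# illustrating the enhancement at cos(theta) ~ 1 for A = +1 (V+A) with P_mu ~ -1.

def dGamma_dcos(cos_theta, A, P_mu=-1.0):
    """Normalised positron angular distribution in mu -> e a (reconstructed form)."""
    return 0.5 * (1.0 - A * P_mu * cos_theta)

cos_grid = np.linspace(-1, 1, 5)
for A in (-1.0, 0.0, 1.0):
    print(f"A = {A:+.0f}:", np.round(dGamma_dcos(cos_grid, A), 3))
# A = -1 mimics the SM-like suppression at cos(theta) ~ 1; A = +1 peaks there.
```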
Table 4. Branching ratios (upper limits) and corresponding bounds (lower limits) on $v_{PQ}$ from two-body cLFV decays. The assumed anisotropy A is related to the formula in Eq. 5.6.

Additionally, we may examine decays with an associated photon, i.e. $\ell_1 \to \ell_2 a\gamma$. These can be studied in experiments searching for $\ell_1 \to \ell_2 \gamma$, which, if experimentally measured, would be an unequivocal sign of new physics; in the SM, ${\rm Br}(\mu \to e\gamma) \sim 10^{-54}$, certainly unobservable. The differential decay rate for $\ell_1 \to \ell_2 a\gamma$ in the limit $m_{\ell_2} = m_a = 0$ may be expressed (Eq. 5.7) in terms of a function $f(x, y)$ of $x = 2E_{\ell_2}/m_{\ell_1}$ and $y = 2E_\gamma/m_{\ell_1}$, i.e. (twice) the fractions of invariant mass carried away by the lighter lepton and the photon, respectively. Energy conservation requires $x, y \leq 1$ and $x + y \geq 1$. Moreover, the angle $\theta_{2\gamma}$ between $\ell_2$ and the photon is fixed by kinematics to

$\cos\theta_{2\gamma} = 1 + \frac{2(1 - x - y)}{xy}$. (5.8)

Alternatively, one can write the decay rate in terms of x and $c_\theta \equiv \cos\theta_{2\gamma}$ (Eq. 5.9). We may relate the branching ratios of decays with and without a radiated photon by integrating f over the experimentally accepted region (Eq. 5.10). The radiative decay possesses two divergences: an IR divergence due to soft photons ($x \simeq 1$) and a collinear divergence ($\theta_{2\gamma} \simeq 0$). In practice, appropriate cuts are made on the minimum photon energy and the angular acceptance, well away from the IR-divergent region. Such cuts were discussed in the context of $\ell_1 \to \ell_2 \gamma$ decays [90], in particular as they relate to the LAMPF [91] and MEG [92] experiments. The region of interest for MEG is $x, y \simeq 1$, or equivalently $\theta_{2\gamma} \simeq \pi$, where the SM background disappears. However, decays with an associated flavoured axion are also suppressed in this limit, i.e. the integral of f vanishes for very soft axions. One might consider a broader region of phase space, provided the induced backgrounds are under control. A comprehensive experimental study of such signals, e.g. for the MEG-II upgrade [93], would be welcome. An explicit limit on $\mu \to e f \gamma$, where f is a light scalar or pseudoscalar, is given by the Crystal Box experiment, which sets ${\rm Br}(\mu \to e f \gamma) < 1.1 \times 10^{-9}$ at 90% CL [94]. Unlike the TRIUMF experiment [86] discussed above, this limit does not assume isotropic decays. Using the same cuts, we find $f \simeq 0.011$, yielding the bound $v_{PQ} > 9.4 \times 10^8\, |C^e_{21}|$ GeV. In Table 5 we summarise current and future experimental limits on branching ratios of $\ell_1 \to \ell_2 \gamma$.

µ → eee and µ − e conversion

We may also consider processes without an axion in the final state. Axion mediation will induce the decay $\mu \to eee$, although the presence of two axion vertices and the additional suppression by $1/v_{PQ}$ means these processes are again only interesting for ALPs. The current upper bound on the branching ratio is ${\rm Br}(\mu^+ \to e^+e^-e^+) < 1.0 \times 10^{-12}$, set by SINDRUM [97]. The Mu3e experiment [98], currently under development and expected to start taking data in 2019, will significantly improve the sensitivity, by four orders of magnitude, i.e. ${\rm Br}(\mu \to eee) \lesssim 1 \times 10^{-16}$. To lowest order in $m_e^2$, the branching ratio for the axion-mediated decay is given in Eq. 5.11. Assuming O(1) couplings, we see that such decays are only reachable by experiment provided $v_{PQ} \lesssim 10^6$ GeV.

As the axion (or ALP) also couples to quarks, one may consider µ − e conversion in nuclei, mediated by the axion. The relevant couplings are now $C^e_{21}$ and the axion-nucleon coupling $g_{aN} = C_{aN}\, m_N / v_{PQ}$. The numerical factor $C_{aN}$ is model-dependent, given in terms of the flavour-diagonal couplings of the up and down quarks.
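As a side note on the radiative decays discussed above, the kinematic relation of Eq. 5.8 is the standard massless three-body constraint, and is easy to verify numerically; the function below is a sketch of exactly that relation.

```python
# Sketch: massless three-body kinematics for l1 -> l2 a gamma, with x = 2E_l2/m_l1
# and y = 2E_gamma/m_l1 (Eq. 5.8 as reconstructed above).

def cos_theta_2gamma(x: float, y: float) -> float:
    return 1.0 + 2.0 * (1.0 - x - y) / (x * y)

# At the MEG-like endpoint x = y = 1 the lepton and photon are back-to-back:
print(cos_theta_2gamma(1.0, 1.0))                 # -1.0, i.e. theta_2gamma = pi
# A softer photon opens the angle:
print(round(cos_theta_2gamma(0.9, 0.5), 3))       # ~ -0.778
```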
In standard cases these are essentially given by the quark PQ charges (see e.g. [53] for standard formulae), but in more general scenarios, such as a flavoured axion, they can deviate significantly. The axion-mediated µ − e conversion is a spin-dependent process which was discussed in [100]. The conversion-to-capture ratio in a nucleus (A, Z) is qualitatively given by Eq. 5.12, where $q^2 \approx m_\mu^2$ is the momentum transfer and $S_N^{(A,Z)}$ is the total nucleon spin of the nucleus (A, Z). Not accounted for here are nuclear spin and structure form factors, which were discussed in [100] and are O(1). The suppression by $v_{PQ}^4$ suggests µ − e conversion is only realistically detectable in ALP scenarios. The current best limit comes from SINDRUM-II: $R^{\rm Au}_{\mu e} < 7 \times 10^{-13}$ [101]. Assuming again O(1) couplings and form factors, SINDRUM-II sets $v_{PQ} \gtrsim 10^6$ GeV, comparable to the µ → 3e bound. The upcoming experiments Mu2e and COMET are both looking for $\mu^-\,{\rm Al} \to e^-\,{\rm Al}$, and both aim to probe $R_{\mu e} < 6 \times 10^{-17}$ at 90% CL [102,103], a factor $10^4$ improvement over the SINDRUM result.

6 A to Z Pati-Salam Model

We present here a recently proposed QCD axion model [38], based on the rather successful A to Z model [39], which seeks to resolve the flavour puzzle by way of Pati-Salam unification coupled to an $A_4 \times Z_5$ family symmetry. The family symmetry is completely broken by gauge singlet flavons φ, which are triplets under $A_4$ and couple to left-handed SM fields. However, information about the underlying symmetry remains in the particular vacuum structure of the flavons. The initial viability of the model, which predicts certain Yukawa structures based on the so-called CSD(4) vacuum alignment, was demonstrated in [39], and leptogenesis was considered in [104]. In [38], we updated and improved the numerical fit to flavour data, as well as demonstrating that, with small adjustments, the A to Z model can resolve the strong CP problem. The axion then emerges from the same flavons that are responsible for SM Yukawa couplings; in other words, no additional field content is necessary to realise a PQ axion. Moreover, as all Yukawa couplings are fixed by the fit to data, the axion couplings are also known exactly, with no additional free parameters.

As the focus of this work is on axion couplings to matter, we limit our discussion primarily to the resultant Yukawa and mass matrices of the SM fermions. However, in Appendix C we derive explicitly the axion-matter couplings from the Yukawa superpotential. In Appendix D we provide the best fit parameters for the A to Z model and the corresponding axion couplings.

Mass matrices and parameters. The charged fermion Yukawa matrices at the GUT scale take the structured forms given in Eq. 6.1, where the $m_i$ are real, with dimensions of mass, and η, ξ are phases. Note that the scales of the various free parameters are constrained by the model itself. Under rather simple assumptions about the flavon VEVs, discussed fully in [39], and assuming all dimensionless couplings in the renormalisable theory are O(1), we may infer generic properties of the parameters. The parameters a, b and c correspond closely to the three up-type quark Yukawa couplings, i.e. $a \ll b \ll c \sim 1$. Meanwhile, $y^0_d$, $y^0_s$ and $y^0_b$ are correlated with the down-type quark Yukawa couplings, i.e. $y^0_d \ll y^0_s \ll y^0_b$. B is an O(1) ratio of couplings, and $\epsilon_{i3} \ll 1$ are small perturbations of a flavon VEV. The O(1) factor x is a Clebsch-Gordan factor, introduced by additional Higgs multiplets in a variation of the Georgi-Jarlskog mechanism.
In the neutrino sector, the principle of sequential dominance on which the model relies demands a normal ordering and a strong mass hierarchy, with $m_a \gg m_b \gg m_c$, predicting a lightest neutrino mass of < 1 meV. A fit of these parameters to data has been performed [38], with central results collected in Appendix D. The model is fitted to experimental results by an MCMC analysis. Bayesian credible intervals are also provided, showing that, despite a large number of free parameters, small tensions in the predictions for $\theta^\ell_{23}$ and $\delta^\ell$ may be further probed by increased sensitivity in current and future neutrino experiments.

The PQ-breaking scale $v_{PQ}$ is determined primarily by the largest VEV among the flavons φ carrying PQ charge. The VEV of this flavon (named $\phi^u_2$) is proportional to the parameter b in $Y^u$, which in turn is dominantly responsible for the charm quark Yukawa coupling; as the third generation largely does not couple to the PQ symmetry, this is the heaviest relevant fermion in the flavoured axion theory. The numerical fit gives $|b| = 3.4 \times 10^{-3}$. The details of how the flavons and parameters are related are given in Appendix C.

Predictions. Once the fermion mixing matrices are known from the fit, we can immediately determine the vector and axial coupling matrices $V^f$ and $A^f$ using Eq. 2.3; recall that $V^f$ and $A^f$ are Hermitian. We may then immediately compute the branching ratios for all aforementioned meson and lepton decays and the neutral meson mass splittings. The only remaining parameter is the axion scale $v_{PQ}$, which is only loosely constrained by naturalness arguments to be $v_{PQ} \sim 10^{12}$ GeV. In principle, any two measurements of either flavour violation (as discussed in this paper), the axion-photon coupling $g_{a\gamma}$, or the axion-electron coupling $g_{ae}$, would be sufficient to overconstrain $v_{PQ}$ in this model. Here, $g_{a\gamma}$ is fixed by $v_{PQ}$ and the domain wall number $N_{DW} = 6$. In other words, although the charge assignments are very different, the A to Z model will resemble the original DFSZ model in experiments sensitive to $g_{a\gamma}$, such as haloscopes and helioscopes. In Table 6 we give the model predictions for some of the most phenomenologically interesting experimental probes. We explicitly set $v_{PQ} = 10^{12}$ GeV when computing the branching ratios.

Table 6. Predictions for axion-induced processes in the A to Z model. Branching ratios are computed assuming $v_{PQ} = 10^{12}$ GeV, which should hold up to an O(1) factor.

In summary, we find that evidence for or against the A to Z model must come primarily from the (non-)observation of $K^+ \to \pi^+ a$; the NA62 experiment is expected to be able to exclude most of the model's parameter space. A next-generation experiment could exclude the model definitively. Secondary channels of interest are decays of $K^0_L$ and $\mu^+$; detecting the A to Z model in these channels would require $v_{PQ}$ to be slightly lower than the natural prediction. However, two-body decays may be powerful channels for excluding other flavour models, sometimes placing stronger constraints than those from astrophysics, which typically give the strongest limits on $v_{PQ}$.

Decay correlations. A prominent feature of unified models is correlations between the Yukawa couplings of quarks and leptons. In this A to Z model, $Y^d \sim Y^e$, up to diagonal Clebsch-Gordan factors.
Notably, the (2,2) entries differ by a parameter x, which is determined by the fit and acts as a necessary Clebsch-Gordan factor distinguishing the strange quark and muon masses. Naturally, one expects $x \sim m_\mu/m_s > 1$; at the GUT scale, $m_\mu/m_s \sim 4.5$. Now consider the two decays $K^+ \to \pi^+ a$ and $\mu^+ \to e^+ a$, which are the most experimentally promising among flavoured axion decays. Their branching ratios are determined, respectively, by the couplings $|V^d_{21}|^2$ and $|C^e_{21}|^2 = 2|V^e_{21}|^2$. With all other parameters held constant, the dependence on x of the ratio $r = |V^e_{21}|^2/|V^d_{21}|^2$ is well approximated empirically by $r \approx 6.9\, e^{-1.8\sqrt{x}}$. We then find that the ratio of branching ratios $R_{\mu/K}$ is given by

$R_{\mu/K} \equiv \frac{{\rm Br}(\mu^+ \to e^+ a)}{{\rm Br}(K^+ \to \pi^+ a)} = \frac{2\, \tilde c_{\mu \to e}}{\tilde c_{K \to \pi}}\; r(x)$.

For the model best-fit point x = 5.88, $R_{\mu/K} \approx 0.38$. Should both of these decays be measured experimentally, such a ratio, which is independent of the axion scale $v_{PQ}$, is a valuable statistic for constraining the flavour sector of the model, giving immediate information about the high-scale parameters. For models where $Y^d \sim Y^e$, typically x > 1; generically one expects $R_{\mu/K} < 1$. Similar ratios can be considered for other decays of K or B mesons and charged leptons. However, as this requires direct observation of both decays, which are suppressed in both sectors, such ratios are realistically feasible only for more general ALPs.

7 Conclusion

In this paper we have reviewed and extended the phenomenology of flavourful axions, including both standard PQ axions, associated with the solution to the strong CP problem, and non-standard axion-like particles (ALPs), which are not tied to the strong CP problem but may generically arise from spontaneously broken symmetries and multiple scalar fields. We have presented the flavourful axion-fermion and axion-photon couplings both for the standard axion and for ALPs, and shown that they are quite naturally non-diagonal. Using these couplings, we have calculated the branching ratios for two-body decays of heavy mesons K, D, and B involving a flavourful axion. We have also calculated the mixing between axions and hadronic mesons $K^0$, $D^0$, $B^0$ and $B^0_s$ and its consequences, which had not been discussed in the literature before. These can lead to new contributions to neutral meson mass splitting, meson decays into axions and axion decays into two photons, which may be relevant for ALPs. We have also discussed charged lepton flavour-violating processes involving final state axions, of the form $\ell_1 \to \ell_2 a(\gamma)$, as well as µ → eee and µ − e conversion.

Correlations between observables may arise in specific flavourful axion models. To illustrate this, we have described the phenomenology of the A to Z Pati-Salam model, which predicts a flavourful QCD axion [38], and shown how unification leads to correlations between different flavour-dependent observables, as the down-type quark and charged lepton couplings are very similar. Within this model, since the axion arises from the same flavon fields that dictate fermion Yukawa structures, no additional field content is necessary to solve the strong CP problem, and all axion couplings are fixed by a fit to quark and lepton masses and mixing. In conclusion, flavourful axions can appear naturally in realistic models and have a rich phenomenology beyond that of the standard KSVZ/DFSZ paradigms.
In this paper we have attempted to provide the first comprehensive discussion of a number of relevant processes involving flavourful axions, including meson decays and mixing, as well as charged lepton flavour-violating processes. For a QCD axion, the bounds from such processes are typically very weak. However, K → πa is an ideal channel for looking for these types of decays, especially in specific models such as the A to Z Pati-Salam model, where exactly this type of flavour-violating coupling is the largest. By comparing multiple flavour-violating processes for both quarks and leptons, one may experimentally probe the lepton and quark Yukawa structures which determine their masses and mass ratios. Although for QCD axions some of the flavour-violating processes we consider are not competitive, for flavourful ALPs many of them may be important, especially if the symmetry-breaking scale is $10^6$ GeV or less.

A Axion-meson mixing

Kinetic mixing between the axion and the neutral mesons (any of the pairs $P^0$, $\bar P^0$ for P = K, D, B, $B_s$) is described by a Lagrangian $L^0_{\rm kin}$, where $P^0$, $\bar P^0$ are strong eigenstates. The superscript 0 signifies that we are not in a diagonal (physical) basis. We define the CP eigenstates $P_1$ (even) and $P_2$ (odd) by

$P_1 = \frac{P^0 + \bar P^0}{\sqrt 2}$, $\quad P_2 = \frac{P^0 - \bar P^0}{\sqrt 2}$;

inversely, $P^0 = (P_1 + P_2)/\sqrt 2$ and $\bar P^0 = (P_1 - P_2)/\sqrt 2$. In the case of the kaon, the states $K_{1,2}$ are close (but not exactly equal) to the physical eigenstates $K_S$ and $K_L$, so defined by having definite lifetimes in weak decays. They are given in terms of a small parameter $\varepsilon_K \sim 10^{-3}$ characterising indirect CP violation; we will neglect such a contribution in this work. Rewriting $L^0_{\rm kin}$ in terms of $P_{1,2}$, note the wrong sign of the $P_2$ diagonal kinetic and mass terms; these can be made canonical by letting $P_2 \to iP_2$, which introduces a factor i in the kinetic mixing term. This can be absorbed into new couplings $\eta_{1,2}$. We also define a "total" coupling $\eta^2 \equiv \eta_1^2 + \eta_2^2 = 2\eta_P\eta_P^* = 2|\eta_P|^2$. We diagonalise the kinetic Lagrangian by field transformations which transfer the mixing to the mass matrix $M^2_\Phi$, where $\Phi = (a, P_1, P_2)$. The eigenvalues of $M^2_\Phi$, corresponding to the physical squared masses, are given to good approximation for small η. Recalling that $\eta^2 = 2|\eta_P|^2$, we conclude that $(\Delta m_P)_{\rm axion} = |\eta_P|^2\, m_P$, as quoted in Section 4. We have not taken into account a mass difference from SM physics, such as for kaons, where $K_S$ and $K_L$ differ by approximately 3 µeV.

B Heavy meson decay branching ratio

The Feynman rule for the vertex $(\partial_\mu a)\, \bar q_1 \gamma^\mu q_2$ defined by the Lagrangian in Eq. 2.2 carries a factor $q_\mu/(2 v_{PQ})$, where $q = p_a = p_1 - p_2$ is the momentum transfer to the axion. For a two-body decay $P \to P' a$ of a heavy meson $P = (q_P \bar q\,')$ into $P' = (q_{P'} \bar q\,')$, the amplitude depends on a form factor $f_+(q^2)$ encapsulating hadronic physics. The lightness of the axion means we can safely take the limit $q^2 \to 0$, wherein the form factor is defined by the standard matrix element $\langle P' | \bar q_{P'} \gamma^\mu q_P | P \rangle = f_+(q^2)\,(p_P + p_{P'})^\mu + f_-(q^2)\, q^\mu$. The differential decay rate in the rest frame of P follows, with the momentum of the decay products given (in the limit $m_a \to 0$) by

$|\mathbf{p}_{P'}| = |\mathbf{p}_a| = \frac{m_P}{2}\left(1 - \frac{m_{P'}^2}{m_P^2}\right)$. (B.6)

Integrating over the solid angle Ω yields a factor 4π, arriving at Eq. 3.1.

C Derivation of the couplings in the A to Z model

Superpotential. The effective Yukawa superpotential below the GUT scale, once the messengers X have been integrated out, is given in Eq. C.1, with explicit couplings λ which are naturally O(1) and assumed real by a CP symmetry at high scale. In the corresponding Lagrangian, the fermion parts of the chiral superfields F, $F^c_i$ are denoted f, $f^c_i$, respectively. (To be precise: f, $f^c_i$ are Weyl fermions, by definition transforming as left-handed fields; in other words, the $f^c_i$ are the left-handed components of weak $SU(2)_L$ singlets.) These are the familiar SM fermions as well as a set of right-handed neutrinos. The light Higgs scalar doublets keep the same notation as their corresponding superfields. (This is somewhat imprecise but tolerable, as the Higgs sector is not relevant to the PQ mechanism, and the fields are in any case eventually replaced by their VEVs.)
The fields Σ acquire high-scale VEVs which give dynamical masses to the X messengers in the renormalisable theory, expected to be $O(M_{\rm GUT})$.

Goldstone field. The central actors in the flavoured axion model are the $A_4$ triplet flavons φ. Taking only the scalar part of the superfields φ, we expand around the flavon VEVs (Eq. C.2), noting that each $\langle\phi\rangle$ consists of a scale v and a direction x in $A_4$ space. The VEVs are aligned according to the CSD(4) prescription, e.g. $x_{\phi^u_1} = (0, 1, 1)$. The radial fields $\rho_\phi$ are very heavy and phenomenologically uninteresting, so will be neglected henceforth. The phase fields $a_\phi$ are not independent, but are related by the single U(1) rephasing symmetry. We identify the Goldstone (or axion) field a as the corresponding linear combination of the $a_\phi$ (Eq. C.3); the component fields are given in Eq. C.4.

Lagrangian (SUSY basis). The Yukawa Lagrangian may thus be written as in Eq. C.5. Let us make the SM field components of the PS fields f, $f^c_i$ explicit: below the EWSB scale, Q and L further decompose into $(u_L, d_L)$ and $(\nu_L, e_L)$, respectively. In addition, $h_u \to v_u$ and $h_d, h_d^{15} \to v_d$, with some small mixing assumed between the Higgs bi-doublets to give the MSSM 2HDM; we assume the effects of this mixing are negligible. The fields Σ acquire real VEVs, with magnitudes generically written $v_\Sigma$. The interplay between the singlet $\Sigma_d$ and the adjoint $\Sigma_d^{15}$ also provides Clebsch-Gordan factors which are different for quarks and leptons. To account for the split between down-type quarks and charged leptons, we reparametrise the couplings λ in the charged lepton sector, so $\lambda_{1d} \to \tilde\lambda_{1d}$, $\lambda_{2d} \to \tilde\lambda_{2d}$, and $\lambda_{ud} \to \tilde\lambda_{ud}$.

Lagrangian (left-right basis). It is also convenient to work in the left-right (LR) basis, in terms of Weyl fermions $u_{L,R}$, $d_{L,R}$, $e_{L,R}$, and $\nu_{L,R}$. This amounts to nothing more than taking the Hermitian conjugate of the terms in Eq. C.6. With all of the above considerations taken into account, the Lagrangian takes the form of Eq. C.8. This rather hefty expression can be put in a more conventional format by 1) expanding the $A_4$ triplet products like $Q \cdot \langle\phi\rangle$, such that we may write the couplings as matrices, and 2) noting that each term must be PQ-invariant, allowing us to replace the flavon PQ charges with those of the SM fermions. Moreover, all λ are real by an assumed CP symmetry at high scales.

Lagrangian (condensed linear basis). The result is a Lagrangian linear in the axion field, with dimensionless parameters defined in Eq. C.12.

Lagrangian (derivative basis). We perform an axion-dependent rotation of the fermion fields to replace the linear couplings with derivative ones; the anomaly term is also induced. Extending the Lagrangian to include the fermion kinetic terms, $\sum_f (\bar f_{Li}\, i\slashed{\partial} f_{Li} + \bar f_{Ri}\, i\slashed{\partial} f_{Ri})$, we arrive at the derivative-basis Lagrangian of Eq. C.14. We rotate to the mass basis by unitary transformations, where by definition $m_f \equiv U_f^\dagger M_f V_f$, $U_Q \equiv U_u$, and $V_{\rm CKM} \equiv U_u^\dagger U_d$.

Derivative couplings. The axion-fermion derivative couplings then take the form of Eq. C.16, where now $f_L$, $f_R$ are vectors and $x_{fL}$, $x_{fR}$ are diagonal 3 × 3 matrices. We define the coupling matrices $X_L \equiv U_f^\dagger x_{fL} U_f$ and $X_R \equiv V_f^\dagger x_{fR} V_f$, and note that, since the charges $x_f$ are real, $X_L = X_L^\dagger$ and $X_R = X_R^\dagger$.
In terms of Dirac spinors, these reproduce the couplings given in Eq. 2.2.

D Couplings in A to Z: numerical fit

The best fit parameters, as well as Bayesian 95% credible intervals, are given in Tables 7 (leptons) and 8 (quarks). The corresponding best fit input parameters are given in Table 9. We fit the model to data at the GUT scale. The running from low to high scale was performed, assuming the MSSM, in [105], where threshold corrections are parametrised by a series of dimensionless parameters $\eta_i$. All but one were set to zero, with $\bar\eta_b = -0.24$ chosen to account for the small GUT-scale difference between the b and τ masses.

Table 9. Best fit input parameter values.
Principal modes in multimode fibers: exploring the crossover from weak to strong mode coupling

We present experimental and numerical studies on principal modes in a multimode fiber with mode coupling. By applying external stress to the fiber and gradually adjusting the stress, we have realized a transition from weak to strong mode coupling, which corresponds to the transition from single scattering to multiple scattering in mode space. Our experiments show that principal modes have distinct spatial and spectral characteristics in the weak and strong mode coupling regimes. We also investigate the bandwidth of the principal modes, in particular the dependence of the bandwidth on the delay time, and the effects of mode-dependent loss. By analyzing the path-length distributions, we discover two distinct mechanisms that are responsible for the bandwidth of principal modes in the weak and strong mode coupling regimes. Taking into account the mode-dependent loss in the fiber, our numerical results are in good agreement with our experimental observations. Our study paves the way for exploring potential applications of principal modes in communication, imaging and spectroscopy.

INTRODUCTION

Recent advances in coherent control of light propagation in random scattering media [1] have triggered experimental investigations of the transmission eigenchannels [2][3][4][5][6][7][8][9][10][11][12][13], which provide a full description of the steady-state transmission of monochromatic waves. Pulsed transmission is much more complex and involves not only spatial but also temporal distortions of an input signal. As multiple scattering creates innumerable possible paths that light can take, the temporal shape of a pulse is severely distorted and stretched. The inherent coupling between temporal and spatial degrees of freedom makes it possible to exert control over the temporal dynamics of the transmitted pulse solely by manipulating the spatial degrees of freedom of the incident wavefront. Spatiotemporal focusing has been achieved by mitigating the temporal distortion in a single spatial channel [14][15][16][17][18][19][20]. A global control of pulsed transmission in all spatial channels is much more challenging, and it is not clear whether the spatial degrees of freedom are sufficient to tailor the temporal dynamics of the total transmission through turbid media.

Multimode optical fibers (MMFs) have attracted much attention lately due to practical applications in communication [21,22], imaging [23][24][25][26][27][28][29][30] and spectroscopy [31][32][33][34][35]. Intrinsic imperfections (like an inhomogeneity of the refractive index in the fiber) and external perturbations (such as those causing a cross-section deformation) lead to coupling of the guided modes. Such coupling can be considered as optical scattering in mode space, with the effective transport mean free path ℓ given by the propagation distance beyond which the spatial field profile becomes uncorrelated [22,36]. If the fiber length L is less than ℓ, the weak mode coupling can be described as single scattering of light from one mode to another. Once L exceeds ℓ, light is scattered back and forth among the fiber modes [36]. Note that the scattering occurs in mode space, as light still propagates forward but in different modes. In the strong mode coupling regime, light may return to the original mode after hopping to other modes, introducing interference effects. Thus multiple scattering and wave interference become dominant.
Light propagating through a MMF experiences spatial distortions that scramble the intensity profile. Such distortions have been effectively corrected at a single frequency by shaping the input wavefront. In fact, an arbitrary output field pattern can be generated with monochromatic light [37,38]. In addition to the spatial distortions, a short pulse propagating through a MMF experiences temporal distortions. Even if a pulse is launched into a single guided mode of the MMF, the random mode coupling spreads light to other modes with different propagation constants. A selective excitation of modes with similar propagation constants results in the formation of a focused spot with minimal temporal broadening at the output of a MMF [39]. This method, however, works only when the mode coupling is relatively weak, as multiple scattering spreads the input light to all modes, which have distinct velocities.

To overcome modal dispersion, principal modes (PMs) were proposed for MMFs as the generalization of the principal states of polarization in a single-mode fiber [40][41][42][43][44][45], and they provide an effective approach to mitigate temporal distortions in the strong mode coupling regime. A PM retains its output spatial profile to first order in the frequency variation [40]. Mathematically, PMs are the eigenstates of the group-delay matrix $G \equiv -i\,T^{-1}\,dT/d\omega$, where T is the field transmission matrix. In the absence of backscattering in the fiber, the group-delay matrix coincides with the Wigner-Smith time-delay matrix, $Q \equiv -i\,S^{-1}\,dS/d\omega$, where S is the scattering matrix [46][47][48]. Hence, PMs correspond to the Wigner-Smith time-delay eigenstates [49], and have well-defined delay times equal to the real parts of the associated eigenvalues. These eigenstates provide the most suitable basis for studying and controlling the temporal dynamics of the total transmission through MMFs.

In the absence of mode coupling, PMs are linearly polarized (LP) modes, i.e., the eigenmodes of a perfect fiber in the weak guiding approximation. Mode coupling entangles spatial and temporal degrees of freedom. However, the output spatial profile of a PM is decoupled from its temporal profile: different output spatial channels follow the same temporal trace, so the spatial profile of the output field remains constant in time. Neglecting chromatic dispersion in the fiber, when a transform-limited pulse is launched into a single PM, the output pulses in all spatial channels remain short and undistorted, even in the presence of strong mode coupling. Recent studies show that PMs in MMFs with weak and strong mode coupling have distinct spatial profiles and spectral correlation bandwidths [50,51]. An important open question that remains to be solved, however, is how the transition occurs, i.e., how PMs evolve from the weak to the strong mode coupling regime. A physical understanding of PMs in the different regimes is not only important for the fundamental comprehension of the temporal dynamics of mesoscopic transport, but is also relevant to applications in telecommunication and imaging.

In this paper, we experimentally study PMs in both the weak and strong mode coupling regimes, as well as in the transition region between them. With weak mode coupling, each PM is a mixture of a few modes with similar propagation constants, while with strong mode coupling, a PM consists of many modes. We investigate the spectral correlation widths of PMs with different delay times and how mode-dependent loss affects these widths.
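A minimal numerical sketch of the PM construction described above is given below: the group-delay operator is built by finite difference from transmission matrices at two nearby frequencies and then diagonalised. The matrices T1, T2 here are random stand-ins, not measured data, and the frequency step is an assumed value.

```python
import numpy as np

# Sketch: principal modes from G = -i T^{-1} dT/domega, via finite differences.

rng = np.random.default_rng(0)
N = 30                                    # number of modes (illustrative)
T1 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
d_omega = 2 * np.pi * 1e9                 # rad/s frequency step (assumed)
dT = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) * 1e-12
T2 = T1 + dT * d_omega

G = -1j * np.linalg.solve(T1, (T2 - T1) / d_omega)   # -i T^{-1} dT/domega
eigvals, eigvecs = np.linalg.eig(G)                  # columns: PM input wavefronts
delay_times = eigvals.real                           # well-defined delay times
order = np.argsort(delay_times)
print("fastest / slowest PM delay (s):", delay_times[order[0]], delay_times[order[-1]])
```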
In the weak mode coupling regime, the spectral correlation widths of PMs decrease dramatically with increasing delay time. In the strong mode coupling regime, however, the correlations exhibit a plateau in the short-delay-time range. We perform numerical simulations to further confirm and understand our experimental observations. By calculating the intensity distribution over the path-length, the finite bandwidth of PMs can be explained. Taking into account mode-dependent loss in the MMF, the numerical results show agreement with the experimental data.

Figure 1. Experimental setup for measuring the field transmission matrix of a MMF. The continuous-wave output from a tunable laser source (Agilent 81940A) at wavelength ∼1550 nm is collimated (C1) and linearly polarized (PBS1). The beam is split into two arms by a beam splitter (BS1). In the fiber arm, light is modulated by the SLM in reflection mode and then coupled into the MMF by a tube lens (L) and an objective (O). The output light from the MMF is collimated (C2) and linearly polarized (PBS2) before combining with the beam from the reference arm at a second beam splitter (BS2). To match the optical path-lengths of the two arms, two mirrors (M1, M2) are inserted into the reference arm to adjust the path-length. BS2 is tilted to produce interference fringes between the two beams, which are recorded in the far field by a CCD camera.

EXPERIMENTAL MEASUREMENT OF PRINCIPAL MODES

To construct the Wigner-Smith time-delay matrix, we measured the field transmission matrices of a MMF at multiple wavelengths. Figure 1 is a schematic of the interferometer setup. A spatial light modulator (SLM) in the fiber arm prepares the phase front of the light field, which is then imaged onto the front facet of the MMF. The output from the fiber combines with the reference beam and forms interference fringes. From the interferogram, we extract the spatial distribution of the transmitted field through the fiber. The measured intensity is

$I = |E_r|^2 + |E_s|^2 + E_r^* E_s\, e^{ikr\sin\theta} + E_r E_s^*\, e^{-ikr\sin\theta}$,

where $E_r$ and $E_s$ are the electric fields of the reference arm and the fiber arm, respectively, and θ is the tilt angle between them. The first two terms represent the dc components, and the last two terms are modulated at the spatial frequencies $\pm k\sin\theta$. These terms can be separated in the Fourier domain, namely by performing a spatial Fourier transform. By applying a Hilbert filter, we select only the third term, which has positive spatial frequency, then remove the factor $e^{ikr\sin\theta}$ before applying an inverse Fourier transform to obtain the amplitude and phase of $E_s$. The transmission matrix is measured in momentum space: the SLM scans the incident angle of light onto the fiber facet, and the transmitted light is measured in the far field of the distal tip.

We apply stress to the fiber with clamps to enhance the mode coupling. By adjusting the stress, we can tune the coupling strength. To evaluate the strength of mode coupling in the fiber, the transmission matrix is transformed to the mode basis by decomposing the input and output fields into LP modes, which are simply referred to as modes below. Figure 2 shows the amplitude and phase of the measured transmission matrices of the MMF. Without external stress, the field transmission matrix is nearly diagonal [Fig. 2(a)]. The small off-diagonal terms result from weak mode coupling due to inherent imperfections and macro-bending of the fiber.
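The field-extraction step described above is straightforward to sketch in code: Fourier-transform the interferogram, keep the sideband at $+k\sin\theta$ (the Hilbert filter), shift it to the origin to remove the carrier, and inverse-transform. The carrier position and filter radius below are illustrative assumptions, not values from the experiment.

```python
import numpy as np

# Sketch of off-axis holography: recover amplitude and phase of E_s from an interferogram.

def extract_field(interferogram, carrier=(0, 60), radius=25):
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    ny, nx = F.shape
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    Y, X = np.ogrid[:ny, :nx]
    mask = (Y - cy) ** 2 + (X - cx) ** 2 <= radius ** 2          # keep the +1 order only
    F_side = np.where(mask, F, 0)
    F_centered = np.roll(F_side, (-carrier[0], -carrier[1]), axis=(0, 1))  # remove carrier
    return np.fft.ifft2(np.fft.ifftshift(F_centered))            # complex field E_s

# Usage with a synthetic fringe pattern:
y, x = np.mgrid[:256, :256]
E_s = np.exp(1j * 0.0005 * ((x - 128) ** 2 + (y - 128) ** 2))    # toy fiber-arm field
I = np.abs(1 + E_s * np.exp(1j * 2 * np.pi * 60 * x / 256)) ** 2 # interferogram
field = extract_field(I)                                         # ~ E_s up to scaling
```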
With an increase in the stress applied to the fiber, the off-diagonal terms grow and eventually become comparable to the diagonal terms, as shown in Fig. 2(c). Hence, in the weak coupling regime only modes with similar propagation constants are coupled, whereas in the strong coupling regime light diffuses to all modes regardless of which mode is injected. Greater loss results in a lower amplitude of higher-order modes at the output. However, if higher-order modes are launched into the fiber, they can be scattered to lower-order modes, which experience less attenuation and dominate the output fields. Consequently, the transmission matrix shows a stronger decay for high-order output modes than for high-order input modes. The phases of the transmission matrix elements are randomly distributed between 0 and 2π, reflecting the random nature of the mode coupling in the MMF.

After measuring the transmission matrices at multiple wavelengths, we construct the group-delay matrix $G \equiv -i\,T^{-1}\,dT/d\omega$. An eigenvector of G gives the input field for a PM. We generate the input waveform of the principal mode with the SLM and launch it into the fiber. Since the SLM is limited to phase-only modulation, a complex-to-phase coding technique is used to convert the computer-generated phase-only hologram to a complex function with amplitude and phase modulation [52]. Figures 3(a,b) depict the measured amplitude and phase of the output field pattern Ψ for a PM. For comparison, we also calculate the output field Ψ′ from the input field of the same PM using the measured transmission matrix [Fig. 3(c,d)]. To quantify their difference, we compute $\int |\Psi - \Psi'|^2\, dr$, with $\int |\Psi|^2\, dr = \int |\Psi'|^2\, dr = 1$. The difference is 3.8%, confirming the accuracy of our experimental measurement.

We note that the transmission matrix is measured for one linear polarization of input and output light only. Since the polarization is scrambled in the MMF, some of the input light is converted to the other polarization and thus is not measured at the output. The transmission matrix is therefore non-unitary even without intrinsic loss, and it is part of the full transmission matrix for both polarizations. Nevertheless, we can still obtain the group-delay matrix for one polarization from the partial transmission matrix. Its eigenstate gives the linearly-polarized input waveform that generates an output field whose one polarization component has a frequency-invariant spatial profile. Below we study the characteristics of such polarized PMs, which are simply referred to as PMs.

PRINCIPAL MODES IN WEAK AND STRONG MODE COUPLING REGIMES

We now experimentally investigate the differences between PMs of the MMF with weak and strong mode coupling. Figures 4(a-c) show the far-field patterns of three PMs in the weak mode coupling regime with short, medium and long delay times. The PM with short delay time has small transverse momentum, similar to the low-order modes [Fig. 4(a)]. With increasing delay time, the PM acquires larger transverse momentum [Fig. 4(b)]. The far-field pattern of the PM with long delay time consists of large transverse momentum components, like the high-order modes [Fig. 4(c)]. We decompose the output field pattern into LP modes, and the coefficients are given in Fig. 4(d-f). The PM with short/medium/long delay time is composed mostly of low/medium/high-order modes. Hence, in the weak mode coupling regime, each PM contains only a few modes with similar propagation constants.
Figure 5(a-c) plots the spatial distribution of the output field amplitude for three PMs with short, medium and long delay times in the case of strong mode coupling. The far-field patterns contain many transverse momentum components and do not resemble any modes of the perfect fiber. The modal decomposition verifies that these PMs are superpositions of many LP modes [Fig. 5(d-f)]. Since higher-order modes experience more attenuation, their contributions to PMs, especially to the ones with shorter delay times, are reduced. To be more quantitative, we define the mode participation number as $N_e \equiv \left(\sum_n |\alpha_n|^2\right)^2 / \sum_n |\alpha_n|^4$, where $\alpha_n$ is the decomposition coefficient for the n-th mode. As noted in Figs. 4 and 5, the values of $N_e$ for the PMs in the weak mode coupling regime are significantly smaller than those in the strong mode coupling regime.

Next we compare the spectral properties of PMs in the weak and strong mode coupling regimes. For this purpose we scan the frequency ω while keeping the input field profile fixed to that of a PM at a given frequency $\omega_0$. The output field pattern is measured at each frequency and correlated with that at $\omega_0$. We compute the spectral correlation function $C(\Delta\omega) = |\Psi(\omega_0) \cdot \Psi(\omega_0 + \Delta\omega)^*|$, where Ψ(ω) is a vector representing the output fields in all spatial channels, and its magnitude is normalized to one at each frequency. Figure 6(a,b) plots C(∆ω) for three PMs with short, medium and long delay times in the weak and strong mode coupling regimes. For comparison, C(∆ω) for a random superposition of modes at the input is also shown. The small revival of the correlation for the random input in the weak mode coupling regime is due to spectral beating between different modes. One can clearly observe that the PMs decorrelate much more slowly with frequency detuning than the random input, and that they exhibit a plateau around ∆ω = 0. Moreover, the PM with short delay time decorrelates more slowly than the one with long delay time, in both the weak and strong mode coupling regimes.

PM BANDWIDTH

To be more quantitative, we define the PM bandwidth $\Delta\omega_c$ as the frequency range over which $|C| \geq 0.9\,|C(0)|$. Since the spectral decorrelation of the output pattern for any input waveform depends on fiber properties, such as the fiber length and numerical aperture, the PM bandwidth is normalized by the average correlation width of random inputs. Figure 6(c,d) plots the normalized $\Delta\omega_c$ for all PMs versus their delay times. In the weak mode coupling regime the PM bandwidth first drops sharply with increasing delay time, then levels off. In the strong mode coupling regime, $\Delta\omega_c$ remains nearly constant at short delay times, and starts decreasing as the delay time becomes larger. The normalized bandwidths of PMs in the weak mode coupling regime are larger than those in the strong mode coupling regime, indicating that PMs in the presence of weak mode coupling decorrelate more slowly than those with strong mode coupling.

To understand what determines the bandwidths of PMs in MMFs with weak and strong mode coupling, we perform numerical simulations using the concatenated fiber model [53]. In particular, we consider a one-meter-long step-index fiber with a 50 µm core and 0.22 numerical aperture. The fiber is divided into 20 short segments; light propagates in each segment as in a perfect fiber without mode coupling. Between adjacent segments, the guided modes are randomly coupled.
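Before turning to the model details, here is a short sketch of the two metrics used above, the spectral correlation $C(\Delta\omega)$ and the mode participation number $N_e$; the toy coefficient vectors are illustrative only.

```python
import numpy as np

# Sketch: spectral correlation between normalised output field vectors, and N_e.

def spectral_correlation(psi_0, psi_w):
    psi_0 = psi_0 / np.linalg.norm(psi_0)
    psi_w = psi_w / np.linalg.norm(psi_w)
    return np.abs(np.vdot(psi_w, psi_0))       # |Psi(w0) . Psi(w)^*|

def participation_number(alpha):
    p = np.abs(alpha) ** 2
    return p.sum() ** 2 / (p ** 2).sum()

alpha_weak = np.array([0.9, 0.4, 0.1, 0.0])    # few modes -> small N_e (~1.4)
alpha_strong = np.ones(30) / np.sqrt(30)       # all modes equally -> N_e = 30
print(participation_number(alpha_weak), participation_number(alpha_strong))
```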
The scattering in mode space is simulated by a unitary random matrix $A = \exp[iH]$, where H is a random Hermitian matrix. We construct $H = G \circ (R + R^\dagger)$, in which R is a complex random matrix whose elements are drawn from the normal distribution, and G is a real matrix imposing a Gaussian envelope function on the matrix elements along the off-diagonal direction (the product is taken element-wise). Specifically, the magnitude of the matrix elements decays away from the diagonal, and the decay rate, i.e. the width of the Gaussian envelope function, depends on the degree of mode coupling. The faster the decay, the narrower the envelope function and the weaker the mode coupling. Therefore, by varying the width of the Gaussian envelope function, we can tune the scattering strength in mode space. To quantify the amount of scattering in mode space, we calculate the effective transport mean free path ℓ, which is given by the propagation distance beyond which the spatial field profile becomes uncorrelated [22]. In the concatenated fiber model, the transport mean free path is obtained numerically by launching light into a single mode and computing the number of segments light propagates until all modes are equally populated. The coupling strength is described by the ratio of the fiber length L to the effective transport mean free path ℓ.

First we ignore the fiber loss and calculate the normalized bandwidths of PMs in the MMF with different degrees of mode coupling. In the weak coupling limit (L/ℓ ≪ 1), the PM bandwidth has two maxima, at the shortest delay time and the longest delay time [Fig. 7(a)]. As the mode coupling (L/ℓ) increases, the normalized bandwidths of all the PMs are reduced. However, the decrease at medium delay times is slower than that at short and long delay times. Consequently, a new maximum arises at the medium delay time when L/ℓ ∼ 1 [Fig. 7(b)]. With a further increase of mode coupling, the two local maxima at the shortest and longest delay times disappear entirely [Fig. 7(c)]. Thus the variation of the bandwidth with the delay time in the strong mode coupling regime (L/ℓ ≫ 1) is just opposite to that in the weak mode coupling regime (L/ℓ ≪ 1).

To interpret these results, we resort to an intuitive picture of optical paths in the MMF. A MMF supports many propagating modes, each having a different propagation constant. From the geometrical-optics point of view, various rays propagate down the fiber at different angles relative to the fiber axis, and thus travel different distances and experience different phase delays. Inherent imperfections and external perturbations result in light hopping among trajectories with different angles and lengths. Hence, light can take many paths of different lengths to transmit through the fiber. The sum of the waves following different paths gives the output field. Formally, this fact can be expressed by writing the transmission amplitude $t_{nm}(\Delta\omega)$ from an incoming mode m to an outgoing mode n at frequency ∆ω (the central frequency is set to zero) as a sum over infinitely many paths q, each of which contributes with an amplitude $A_q$ and with a phase that depends on the path length $L_q$, in the following way: $t_{nm}(\Delta\omega) = \sum_q A_q \exp(i\Delta\omega L_q/c)$ [54]. This relation follows directly from the Feynman path integral formulation of the Green's function, for which several semiclassical approximations have been worked out (see [55] for an overview).
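For concreteness, below is a sketch of the random mode-coupling matrix construction described above; the envelope width and overall strength are free knobs of the sketch, not values from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: A = exp(iH), H = G o (R + R^dagger), with G a Gaussian envelope along the
# off-diagonal direction; the envelope width sigma tunes the mode-space scattering.

def coupling_matrix(n_modes, sigma, strength=0.1, rng=None):
    rng = rng or np.random.default_rng()
    R = rng.normal(size=(n_modes, n_modes)) + 1j * rng.normal(size=(n_modes, n_modes))
    H = R + R.conj().T                                                 # Hermitian
    i = np.arange(n_modes)
    G = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * sigma ** 2))   # envelope
    H = strength * G * H                                               # element-wise
    return expm(1j * H)                                                # unitary segment

A_weak = coupling_matrix(55, sigma=1.0)     # narrow envelope: weak coupling
A_strong = coupling_matrix(55, sigma=50.0)  # wide envelope: strong coupling
print(np.allclose(A_weak @ A_weak.conj().T, np.eye(55)))  # unitarity check -> True
```

Propagation through the whole fiber is then a product of per-segment propagation and coupling matrices, which is how the transport mean free path can be estimated by tracking how quickly a single launched mode spreads over all modes.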
The interesting insight that we now deduce from this path picture is that, in the weak guiding approximation, one can obtain the path spectrum $\tilde t_{nm}(L)$ contributing to the transmission amplitude $t_{nm}(\Delta\omega)$ by a simple Fourier transform, $\tilde t_{nm}(L) = \int_{k_{min}}^{k_{max}} dk\; t_{nm}(k)\, \exp(-ikL)$ [56], where we define $k = \Delta\omega/c$. Correspondingly, the power spectrum of the total transmission through the fiber is given by $\tilde T(L) = \sum_{n,m} |\tilde t_{nm}(L)|^2$. The width of the intensity distribution over the path-length spectrum determines how fast the output field decorrelates with frequency: the narrower the distribution, the weaker the dephasing among different paths, and the slower the decorrelation. We calculate $\tilde T(L)$ for the PMs in the different mode coupling regimes. Figure 7(d,e,f) presents the results for three PMs with the shortest, intermediate, and longest delay times.

In the case of weak mode coupling, the intensity distribution over the path-length is narrow [Fig. 7(d)] because each PM contains only a few modes with similar propagation constants. For example, the PM with short delay time consists of a few low-order modes. The adjacent modes that these low-order modes can couple to are higher-order modes with smaller propagation constants. The PM with intermediate delay time, however, is composed of modes with medium propagation constants, which are surrounded by both lower- and higher-order modes to which they can couple. Since the propagation constants of modes in a step-index fiber are almost equally spaced, the constituent modes of an intermediate PM have more neighboring modes to couple to, and the intensity distribution over the path-length is wider than that for the fast PM. Consequently, the fast PM has a broader bandwidth than the intermediate PM. The same argument applies to the slow PM, which has a long delay time. Therefore, the fastest and slowest PMs have the maximum bandwidth. As the mode coupling strength increases gradually, the intensity distribution over the path-length is broadened [Fig. 7(e)], and the bandwidth of the PMs is reduced. Eventually all modes are coupled, and the transition from single scattering to multiple scattering occurs in mode space.

In the regime of multiple scattering, wave interference becomes significant. Since light can follow many possible trajectories of the same length from the input to the output of the fiber, the interference of the fields from these paths determines the intensity distribution over the path-length. In Fig. 7(f), the fast PM has its intensity concentrated on shorter paths, as the destructive interference of different trajectories with the same length suppresses $\tilde T(L)$ at longer path-lengths. The opposite happens for the slow PM. Quite remarkably, such interference effects are completely determined by the input wavefront. PMs with intermediate delay times suppress both short and long paths by destructive interference. In the absence of mode-dependent loss, the central limit theorem dictates that the density of path-lengths has a Gaussian distribution that is peaked at the medium delay time [51]. Thus the intermediate PMs, whose delay times coincide with or are close to the medium path-length of maximal density, only need to suppress a small number of trajectories of short or long path-lengths via interference. By contrast, the PMs with short delay times require destructive interference of both medium and long paths.
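A short sketch of the path-length spectrum computation quoted above: on a uniform spectral grid the k-integral becomes an FFT, and $\tilde T(L)$ sums the power over all channels. The grid and toy single-path transmission below are assumptions for illustration.

```python
import numpy as np

# Sketch: t~_nm(L) = int dk t_nm(k) exp(-i k L), discretised with an FFT;
# T~(L) = sum_{n,m} |t~_nm(L)|^2 is the power spectrum over all channels.

def path_length_spectrum(t_k, dk):
    """t_k: array (n_freq, N, N) of transmission matrices on a uniform k-grid."""
    t_L = np.fft.fft(t_k, axis=0) * dk                   # discrete k-integral
    T_L = np.sum(np.abs(t_L) ** 2, axis=(1, 2))          # sum over channels
    L = 2 * np.pi * np.fft.fftfreq(t_k.shape[0], d=dk)   # conjugate path-length axis
    return L, T_L

# Toy usage: a single path of length L0 = 100 shows up as a peak at L = 100.
k = np.linspace(0, 2 * np.pi, 512, endpoint=False)
t_k = np.exp(1j * k * 100.0)[:, None, None]              # one channel, one path
L, T_L = path_length_spectrum(t_k, dk=k[1] - k[0])
print(L[np.argmax(T_L)])                                 # ~100.0, the injected length
```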
Since there are more trajectories with medium path length, it is more difficult to suppress them via interference, as evident from the shoulder at medium path length for the fast PM in Fig. 7(e). Hence, the fast PMs have broader path-length distributions and narrower bandwidths than the medium PMs. The same explanation applies to the bandwidths of the slow PMs. We further analyze the transition from weak to strong mode coupling. As $L/\ell$ increases, the average bandwidth of random input fields increases monotonically, as shown by the black dashed line in Fig. 8(a). For PMs, the average bandwidth first decreases rapidly, then goes through a turning point at $L/\ell \approx 1$, and starts increasing again [Fig. 8(a), blue solid curve]. In the single scattering regime $L/\ell < 1$, the input light spreads further in mode space as the scattering strength increases, and each PM consists of more LP modes. In particular, the number of LP modes in the PMs with short or long delay times grows faster and approaches that with medium delay time. Consequently, the path-length distributions broaden more quickly and the bandwidths decrease more rapidly for the slow and fast PMs, leading to the reduction of the two local maxima at the shortest and longest delay times [Fig. 7(a,b)]. Once $L/\ell$ exceeds 1, light is coupled back and forth among the modes, and interference effects arise. In particular, multi-path interference narrows the intensity distribution over the path-length spectrum. Stronger scattering enhances the interference effects, leading to an increase of the average bandwidth of PMs [Fig. 8(a), blue solid curve]. Since the multi-path interference effect is more efficient in narrowing the path-length distribution of a PM with intermediate delay time, its bandwidth is broader than that of a PM with short or long delay time. Hence, a local maximum in the bandwidth arises at the medium delay time, as seen in Fig. 7(b,c). In Fig. 8(a), the average PM bandwidth exhibits a minimum at the transition point ($L/\ell \approx 1$) from single scattering to multiple scattering in mode space. At this point, light is spread over all LP modes, yet the multi-path interference effect is not yet strong enough to enhance the PM bandwidth. To investigate the fluctuation of PM bandwidths, we also calculate the difference between the largest and smallest PM bandwidths, which exhibits a trend similar to the average bandwidth, as seen in Fig. 8(b). In the weak mode coupling regime, the difference is large, but it declines dramatically with the coupling strength. When the system gradually transitions to the strong mode coupling regime, the difference increases slightly, but still remains small. EFFECT OF MODE-DEPENDENT LOSS The numerical study in the last section assumes no loss in the fiber. However, loss is common in a MMF, and it is usually greater for higher-order modes. In this section, we investigate the effects of mode-dependent loss (MDL) on PMs. In the concatenated fiber model, we introduce a uniform absorption coefficient in each segment of the fiber. Higher-order modes, which have longer transit times, thus experience more loss. We compare the PM bandwidths with and without MDL in Fig. 9. In the weak mode coupling regime, MDL significantly reduces the bandwidth of PMs with long delay times, as indicated by the arrow in Fig. 9(a). In contrast, the bandwidth of PMs with short delay times is nearly unchanged by the MDL. This behavior can be explained by the change in the intensity distribution over the path length $\tilde{T}(L)$.
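A minimal way to add MDL to the concatenated model is to attach a transit-time-weighted attenuation to the diagonal propagation factor of each segment. The sketch below assumes per-mode propagation constants beta and modal group delays tau are known, and the absorption coefficient alpha is a free parameter; it is an illustration of the idea, not the exact implementation used here.

import numpy as np

def segment_transfer(beta, tau, seg_len, alpha, mixer):
    """One fiber segment: free propagation with loss, then random mode mixing.
    Modes with longer transit times tau accumulate more loss, reproducing the
    mode-dependent loss (MDL) discussed in the text."""
    prop = np.exp(1j * beta * seg_len - alpha * tau * seg_len)  # diagonal part
    return mixer @ np.diag(prop)

# The full fiber transfer matrix is the ordered product over all segments,
# e.g. M = T_K @ ... @ T_2 @ T_1 for K concatenated segments.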
The slow PM is composed of long paths, and the stronger attenuation of the longer paths broadens the distribution, as shown in Fig. 9(b). Consequently, the bandwidth of the PM with long delay time is reduced. The fast PM, by contrast, consists of short paths, which experience little loss; thus $\tilde{T}(L)$ remains almost the same, and with it the bandwidth of the PM. The longer the delay time, the stronger the effect of MDL, and the greater the reduction in the PM bandwidth. In the strong mode coupling regime, MDL enhances the bandwidth of a PM with short delay time, while reducing the bandwidth of a PM with long delay time [Fig. 9(c)]. Since the fast PM has a broader path-length distribution than in the weak mode coupling regime, MDL suppresses the longer paths and narrows the distribution, which centers on short path lengths [Fig. 9(d)]. In contrast, the path-length distribution of the slow PM, which centers on long path lengths, is broadened by MDL, as the shorter paths experience less attenuation than the longer ones. The variations of the PM bandwidth with delay time in both weak and strong coupling regimes agree qualitatively with the experimental results in Fig. 6(b,d). We may thus conclude that MDL has a significant impact on the bandwidths of PMs and needs to be taken into account to understand the experimental data. CONCLUSION We have performed experimental and numerical studies of the principal modes (PMs) in a multimode fiber, which are the eigenstates of the Wigner-Smith time-delay operator or the group-delay operator. By applying external stress to the fiber and gradually adjusting the stress, we have realized the transition from weak to strong mode coupling. Such a transition maps onto the transition from single scattering to multiple scattering in mode space. We experimentally demonstrate that PMs have distinct spatial and spectral characteristics in the weak and strong mode coupling regimes. In the weak mode coupling regime, each PM is composed of a small number of fiber eigenmodes with similar propagation constants. In the strong mode coupling regime, however, a PM is formed by all modes. When there is no mode-dependent loss in the fiber, PMs with shorter or longer delay times have broader bandwidths in the weak mode coupling regime. The opposite is true for strong mode coupling, where the bandwidth is maximal for PMs with medium delay times. By analyzing the path-length distributions, we discover two distinct mechanisms that determine the bandwidth of PMs in the weak and strong mode coupling regimes. For weak mode coupling, fast or slow PMs spread less in mode space and experience weaker modal dispersion, thus having broader bandwidths than intermediate PMs. In the presence of MDL, the bandwidth of a slow PM is reduced significantly while that of a fast PM remains nearly unchanged. In the strong mode coupling regime, interference among numerous trajectories in the multimode fiber becomes significant, and the maximum bandwidth is reached for the PMs whose delay time corresponds to the maximum density of path lengths. Without MDL, the density of path lengths is peaked at intermediate lengths, such that the PMs with medium delay times have the largest bandwidth. With MDL, the maximum density of path lengths shifts to shorter paths, due to the stronger attenuation of longer paths in the fiber. Consequently, MDL enhances the bandwidth of fast PMs while it reduces the bandwidth of slow PMs.
7,162.4
2016-09-08T00:00:00.000
[ "Physics" ]
Highly versatile cancer photoimmunotherapy using photosensitizer-conjugated avidin and biotin-conjugated targeting antibodies Background Photoimmunotherapy (PIT) employing antibody-photosensitizer conjugates is a promising treatment for cancer. However, the fixed antigen specificity severely limits its efficacy and applicability. Here we describe a universal strategy for PIT of cancer by using NeutrAvidin conjugated with the near-infrared (NIR) photosensitizer IRDye700DX, designated AvIR, together with various biotinylated antibodies (BioAbs) for cellular targeting. Methods The cytotoxicity of AvIR-mediated PIT was evaluated by fluorescence imaging and cell viability assay. The phototoxic effect on tumorigenicity was assessed by tumorsphere-formation and Matrigel invasion assays. Cancer stem cell-like side-population (SP) cells were identified by flow cytometry. Results CHO cells stably expressing carcinoembryonic antigen or EpCAM were pre-labeled with the BioAb for the corresponding antigen, followed by AvIR administration. NIR light irradiation specifically killed the targeted cells, but not off-target cells, demonstrating that AvIR-mediated PIT works as expected. The CSC-like subpopulation of MCF-7 cells (CD24low/CD44high) and the SP of HuH-7 cells (CD133+/EpCAM+) were effectively targeted and photokilled by AvIR-PIT with anti-CD44 BioAb or anti-CD133/anti-EpCAM BioAbs, respectively. As a result, the neoplastic features of the cell lines were sufficiently suppressed. CAF-targeted AvIR-PIT using an anti-fibroblast activation protein BioAb abolished the cancer-associated fibroblast (CAF)-enhanced clonogenicity of MCF-7 cells. Conclusions Collectively, our results demonstrate that AvIR-mediated PIT can greatly broaden the applicable range of target specificity, with the feasibility of efficacious and integrative control of CSCs and their microenvironment. Background Photoimmunotherapy (PIT), a targeted photodynamic therapy using a photosensitizer (PS)-loaded monoclonal antibody (mAb) specific for a tumor-associated antigen (TAA), has been developed as a safe and attractive therapeutic modality for cancer (reviewed in [1,2]). Upon irradiation with excitation light, PIT exerts remarkable cytotoxicity against only the tumor cells targeted by the PS-mAb conjugates. The near-infrared (NIR) phthalocyanine dye IRDye700DX (IR700) has been accepted as a promising PS moiety for PIT agents, because its excitation wavelength (690 nm) offers high tissue permeability and because of its photochemical property of inducing strong cytotoxicity only when the conjugate bound to the plasma membrane of target cells is exposed to NIR light [3,4]. Indeed, to date, IR700 has been successfully applied to several PITs utilizing mAbs against clinically relevant TAAs, such as carcinoembryonic antigen (CEA) [5], human epidermal growth factor receptor 2 (HER2) [6,7], and epidermal growth factor receptor (EGFR) [8,9]. A Phase III clinical trial of PIT with ASP-1929 (an anti-EGFR cetuximab-IR700 conjugate) in patients with recurrent head and neck cancer is currently underway in multiple countries (ClinicalTrials.gov identifier: NCT03769506). More recently, the targets of IR700-mediated PIT have been expanded to the intra-/peri-tumoral non-neoplastic cells that serve to support and maintain the tumor microenvironment. These cells include, for example, cancer-associated fibroblasts (CAFs) [10], which are important constituents of the tumor stroma, and vascular endothelial cells that construct the tumor neovasculature [11].
Thus, IR700-mediated PIT has great potential to be an extensively applicable cancer therapy. However, solid tumors are generally composed of heterogeneous cell populations, which may arise from cancer stem cells (CSCs) [12], and it is well known that the expression pattern of TAAs and the organization of the tumor microenvironment often change dynamically depending on malignant progression and the course of radiotherapy and chemotherapy [13]. In addition, tumors can in many instances acquire resistance to single-agent therapy. Therefore, current cancer-targeted therapies, including PIT, that utilize a mAb against a single TAA alone are considered highly unlikely to cure cancer, even if temporary tumor regression is achieved. In order to apply IR700-PIT effectively to a broad range of cancer types and of changes in TAA expression, it would be necessary to prepare a panel of IR700-mAb conjugates with different specificities corresponding to the various target TAAs on a case-by-case basis; however, such an approach is extremely complicated, costly in time and money, and unrealistic. To overcome these problems and realize a highly versatile PIT applicable to various cancers and tumor-supporting cells, we aimed to develop a novel PIT utilizing IR700-conjugated NeutrAvidin, designated AvIR, in combination with biotinylated antibodies (BioAbs) for cell-specific targeting. In this strategy, target cells are pre-labeled with single or multiple BioAbs specific to cell surface marker(s); AvIR then binds exclusively to them, owing to avidin's tremendous affinity and specificity for biotin, and NIR irradiation is applied for photokilling of the targeted cells (Fig. 1). A myriad of BioAbs, whether commercially and clinically available or developed in-house, can dramatically expand the applicability of conventional PIT, allowing unlimited target specificity without the repetitive preparation of PS-mAb conjugates. If AvIR-mediated PIT works effectively, the sequential or simultaneous use of various BioAbs would make achievable a universal PIT capable of responding to altered expression of TAAs, enabling a comprehensive cancer therapy that targets not only heterogeneous tumor cell populations, including CSCs that express different TAAs, but also the stromal and vascular endothelial cells that constitute the tumor microenvironment. [Fig. 1 Schematic representation of AvIR-mediated PIT. Owing to the cellular targeting by BioAb(s) specific to the tumor cells and/or tumor-supporting cells, AvIR exerts phototoxicity only on the targeted cells upon NIR irradiation, without any damage to normal tissues. As long as cell type-specific BioAbs are available, the potential therapeutic targets of AvIR-PIT are virtually unlimited, allowing highly integrated tumor control.] Cell lines Luciferase-expressing cell lines derived from human gastric adenocarcinoma (MKN-45), breast adenocarcinoma (MCF-7), and hepatocellular carcinoma (HuH-7) were obtained from the Japanese Collection of Research Bioresources (Osaka, Japan). MKN-45 cells were maintained in RPMI1640 Glutamax medium (Thermo Fisher Scientific, Tokyo, Japan) supplemented with 10% fetal bovine serum (FBS; Equitech-Bio, Kerrville, TX) in an atmosphere of 5% CO₂ at 37 °C. MCF-7 and HuH-7 cells were maintained in Dulbecco's minimal essential medium (DMEM; Thermo Fisher Scientific) instead of RPMI1640.
Two Chinese hamster ovary (CHO) cell lines, human CEA-expressing CHO-CEA and human EpCAM-expressing CHO-EpCAM cells, have been described previously [5] and were cultured in α-modified minimum essential medium (α-MEM; Thermo Fisher Scientific) supplemented with 10% FBS and 2 mM glutamine. The murine tumor endothelial cell line 2H-11 was obtained from the American Type Culture Collection (Manassas, VA) and was maintained in DMEM supplemented with 10% FBS and 2 mM glutamine (Wako Pure Chemicals, Osaka, Japan). Primary human breast CAFs derived from an infiltrating ductal-carcinoma tissue were purchased from Asterand (Detroit, MI) and maintained in DMEM supplemented with 10% FBS and penicillin-streptomycin. Biotinylation of antibodies BioAbs for AvIR-mediated PIT, except for Bio-CD133, were chemically prepared using the EZ-Link Sulfo-LC-NHS-Biotinylation Kit (Thermo Fisher Scientific) following the manufacturer's instructions. A Zeba Desalt Spin Column (Thermo Fisher Scientific) was used to remove excess biotin reagent and exchange the buffer with Dulbecco's phosphate-buffered saline (DPBS). We also biotinylated human immunoglobulin G (IgG) for use as an irrelevant control (Bio-hIgG). Preparation of AvIR NeutrAvidin (2 mg) (Thermo Fisher Scientific) was incubated with IRDye700DX N-hydroxysuccinimide ester (100 µg) (LI-COR Biosciences, Lincoln, NE) in 2 ml of 100 mM sodium phosphate buffer (pH 9.0) for 2 h at room temperature. The reaction mixture was applied onto a Zeba desalting column to purify the AvIR. The concentration of AvIR and the dye/protein ratio were spectroscopically determined by measuring the absorbance at 280 nm and 689 nm and by using the following molar extinction coefficients (ε): 101,640 M⁻¹ cm⁻¹ for NeutrAvidin at 280 nm [16] and 165,000 M⁻¹ cm⁻¹ for IR700 at 689 nm. The dye/protein ratio of AvIR was typically ~2.2. Fluorescence analysis of phototoxicity induced by AvIR-PIT The phototoxic effect of AvIR was assessed by using the LIVE/DEAD Cell Imaging Kit (Thermo Fisher Scientific). Cells were seeded onto an 8-well Lab-Tek II chamber slide (Thermo Fisher Scientific) at a density of 10,000 cells/well 1 day before AvIR-PIT. The next day, the cells were treated with biotinylated anti-CEA (Bio-CEA) or biotinylated anti-EpCAM (Bio-EpCAM) (5 µg/ml), or an equal volume of DPBS, for 30 min, followed by the addition of AvIR (5 µg/ml) and another 30 min incubation. The cells were exposed to NIR light (3 J/cm²) from a light-emitting diode (LED) light source (Shiokaze Giken, Niigata, Japan), which emits red light with a peak at 690 nm. The irradiation energy density was measured with a PM100D optical power meter (Thorlabs, Tokyo, Japan). The irradiated cells were incubated with a mixture of Live Green and Dead Red solutions for 20 min and were subsequently imaged using a BZ-9000 fluorescence microscope (Keyence, Osaka, Japan). To assess the target specificity, AvIR-mediated PIT was also performed on co-cultured CHO-CEA and CHO-EpCAM cells. The CHO-CEA cells were stained with CellTracker Blue dye (Thermo Fisher Scientific) and co-cultured with unlabeled CHO-EpCAM cells in a Lab-Tek II chamber on the day before AvIR-PIT. Quantitative evaluation of phototoxicity The PIT-induced changes in cellular viability were assessed by using the CellTiter-Glo assay (Promega, Madison, WI). Briefly, the cells were plated onto a white-walled 96-well plate (Thermo Fisher Scientific) at 10,000 cells/well and cultured overnight.
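Returning briefly to the AvIR preparation described above, the dye/protein ratio follows from the two absorbance readings and the stated extinction coefficients. The sketch below is a minimal check of that arithmetic; the A280 and A689 inputs are hypothetical example values, and the dye's own contribution at 280 nm (often corrected for in practice) is deliberately omitted for simplicity.

EPS_NEUTRAVIDIN_280 = 101_640  # M^-1 cm^-1, NeutrAvidin at 280 nm [16]
EPS_IR700_689 = 165_000        # M^-1 cm^-1, IR700 at 689 nm

def dye_per_protein(a280, a689):
    # Beer-Lambert with a 1 cm path: concentration = absorbance / epsilon.
    # A dye-specific correction of A280 would refine this estimate.
    return (a689 / EPS_IR700_689) / (a280 / EPS_NEUTRAVIDIN_280)

print(dye_per_protein(a280=0.40, a689=1.43))  # ~2.2, matching the typical ratio reported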
On the following day, BioAb was added to the wells at the indicated concentrations, with incubation for 30 min. Then, AvIR was added to the wells at the indicated concentrations. After another 30 min incubation, the cells were irradiated with NIR light (3 J/cm²). After irradiation, an aliquot of CellTiter-Glo reagent was added to each well, and the plate was shaken for 2 min. The plate was then incubated for 10 min at room temperature, and the luminescence was measured on a TriStar LB 941 multimode reader (Berthold Technologies, Bad Wildbad, Germany). Flow cytometry To examine the binding characteristics of AvIR, CHO cells were labeled with Bio-CEA or Bio-EpCAM for 30 min and were stained with AvIR for 30 min. Tumorsphere-formation assay The tumorsphere assay on MCF-7 cells, derived from breast cancer, was performed using MammoCult medium (Stem Cell Technologies, Vancouver, BC, Canada) with serum replacement, hydrocortisone, heparin, and antibiotics, according to the manufacturer's instructions. Briefly, the CD24low/CD44high CSC subpopulation of MCF-7 cells was enriched by two repeated negative selections using the EasySep PE selection kit (Stem Cell Technologies) with CD24-PE, followed by a positive selection using the EasySep FITC selection kit (Stem Cell Technologies) with CD44-FITC. The magnetically sorted MCF-7 cells or unsorted bulk cells were seeded onto a 6-well Ultra-Low Attachment culture plate (Corning, NY) with 2 ml of complete MammoCult medium at a cell density of 5000 cells/ml. The plate was then incubated for 7 days at 37 °C in a humidified atmosphere containing 5% CO₂. The resulting tumorspheres (> 60 µm) were counted by visual inspection under light microscopy. To investigate the phototoxic effect of AvIR on clonogenicity, MCF-7 cells were PIT-treated with the indicated BioAb and AvIR (5 µg/ml each), and the dead cells were then removed by using ClioCell magnetic nanoparticles (ClioCell, London, UK). The resultant live cells were assessed for sphere formation as above. In the case of HuH-7 cells, derived from hepatocellular carcinoma, the sphere-formation capacity was determined using Cancer Stem Cell Medium (PromoCell, Heidelberg, Germany). HuH-7 cells were FACS-sorted into 4 subpopulations according to their immunophenotype with respect to the expression of CD133 and EpCAM, using a FACSAria Fusion cell sorter. The sorted cells were seeded onto an Ultra-Low Attachment plate at a density of 2000 cells/well. The cells were incubated for 10 days in a 5% CO₂ atmosphere at 37 °C. The number of tumorspheres (> 100 µm) was counted manually. Matrigel invasion assay Cell invasion was assayed using the CytoSelect 24-well Cell Migration and Invasion Assay according to the manufacturer's instructions (Cell Biolabs, San Diego, CA). In brief, MCF-7 cells were resuspended in serum-free DMEM containing 0.1% bovine serum albumin. The cell suspension (1 × 10⁶ cells/ml) was added to the top insert, whereas DMEM containing 10% FBS was added to the bottom chamber. The cells were incubated at 37 °C for 24 h, and the insert was then transferred to a well containing Cell Stain Solution. After incubation for 10 min, the stained insert was washed and air-dried. The migratory cells were counted with a light microscope. Analysis of the side-population fraction in HuH-7 cells Side-population (SP) analysis was performed essentially according to the protocol of Goodell et al. [17], with some modifications.
HuH-7 cells were dissociated into single cells with Accutase (MS TechnoSystems, Osaka, Japan), washed with DPBS, and resuspended in pre-warmed DMEM supplemented with 2% FBS and 10 mM HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid) (Sigma, St. Louis, MO) at a density of 1 × 10⁶ cells/ml. Hoechst 33342 dye (Dojindo Laboratories, Kumamoto, Japan) was then added to the cells at a final concentration of 5 μg/ml. The cells were incubated in a 37 °C water bath for 120 min in the presence or absence of the ATP-binding cassette transporter inhibitor verapamil (50 μg/ml) (Wako Pure Chemicals). Following incubation, the cells were washed with ice-cold Hanks balanced salt solution (HBSS) (Wako Pure Chemicals) containing 2% FBS and 10 mM HEPES. The cells were filtered through a 40 μm nylon mesh to obtain a single cell suspension and kept at 4 °C in the dark until flow cytometric analysis using the FACSAria Fusion. Hoechst 33342 was excited with ultraviolet light at 375 nm, and fluorescence emission was measured with 450/20 (Hoechst blue) and 670 LP (Hoechst red) optical filters. During the FACS analysis, dead cells were excluded by using the viability dye SYTOX AADvanced (Thermo Fisher Scientific). To investigate the effect of AvIR-mediated PIT on the SP fraction, HuH-7 cells were pre-labeled with Bio-CD133 and Bio-EpCAM (2.5 µg/ml each) for 30 min and incubated with 5 µg/ml AvIR for another 30 min. The cells were irradiated with NIR light (3 J/cm²), and the dead cells were removed by ClioCell treatment. The live cells were cultured under standard conditions for another 2 passages, and the SP analysis was performed as described above. Soft agar colony formation assay For the evaluation of CAF-assisted clonogenicity, a modified soft agar colony formation assay was performed. Briefly, primary human breast CAFs were seeded into wells of a 6-well plate at a density of 7 × 10⁴ cells/well and cultured overnight. The next day, the culture medium was removed, and 0.6 ml of molten 0.8% DNA-grade agarose in DMEM with 10% FBS was added to the well. After solidification, 0.8 ml of 0.4% soft agar in complete MammoCult medium containing 5 × 10³ MCF-7 cells was layered on the solidified base agarose. Then, 0.8 ml of complete MammoCult was added to the well, and the cells were cultured for 7 days. The medium was exchanged for complete MammoCult with or without biotinylated anti-FAP (Bio-FAP; 5 µg/ml). After incubation for 12 h, AvIR (5 µg/ml) or vehicle was added to the medium, followed by another 12 h incubation. The plate was irradiated with NIR light (6 J/cm²) and returned to the incubator for a further 11 days of culture. The colonies formed in the soft agar layer were counted manually. AvIR-PIT treatment against a tumor endothelium model We used 2H-11 cells for the formation of tumor endothelial tubes. The tubes were prepared on tumor-derived extracellular matrix gel in wells of a 96-well plate by using the Endothelial Tube Formation Assay (Cell Biolabs). AvIR-mediated PIT (3 J/cm²) was performed against the 2H-11 tubes using biotinylated anti-CD105 (Bio-CD105) and AvIR (5 µg/ml each). After gentle washing, LIVE/DEAD cell imaging was performed. Statistical analysis The data are expressed as the mean ± standard error of the mean (SEM) from a minimum of three experiments. Statistical significance was evaluated by Student's t-test or one-way analysis of variance (ANOVA), followed by Dunnett's or Tukey's multiple-comparison test.
All statistical analyses were done using GraphPad Prism 8 (GraphPad Software, San Diego, CA). p-values < 0.05 were considered statistically significant. Antigen-specific phototoxicity induced by AvIR-PIT In order to demonstrate the feasibility of AvIR-based PIT, we first tested CHO cells stably expressing human CEA or EpCAM as a model of target tumor cells. Flow cytometric analysis showed that AvIR specifically bound to the cells pre-labeled with the BioAb (Bio-CEA or Bio-EpCAM) for the corresponding antigen (Fig. 2a). To explore the phototoxic effects of AvIR, we performed a LIVE/DEAD cell viability assay. We found that AvIR exerted strong antigen-specific cytotoxicity toward the BioAb-labeled CHO cells upon NIR irradiation (Fig. 2b). Even in the presence of AvIR in the culture medium, no phototoxic effect was observed when an unmatched BioAb was used for pre-labeling of the cells or when AvIR alone, without BioAb, was used for PIT. These results indicate that AvIR-mediated PIT can specifically kill the BioAb-targeted cells and are consistent with previous studies on PIT with IR700-mAb conjugates, in which the conjugates exert phototoxicity only when bound to the cell membrane; notably, however, they clearly reveal that, to achieve such phototoxicity, IR700 does not necessarily have to be conjugated directly to the targeting antibody molecules. To further investigate the target specificity of AvIR-mediated PIT, we carried out a PIT experiment on CHO-EpCAM cells co-cultured with CellTracker-stained CHO-CEA cells, followed by LIVE/DEAD imaging. When the co-cultured cells were pre-labeled with Bio-CEA, AvIR-PIT treatment selectively killed the CHO-CEA cells, with no damage to any CHO-EpCAM cells, even those adjacent to ruptured CHO-CEA cells (Fig. 2c, top row). On the other hand, when Bio-EpCAM was used for pre-labeling, only the CHO-EpCAM cells were damaged (Fig. 2c, middle row). If both BioAbs were used, almost all cells of both CHO lines were killed (Fig. 2c, bottom row). These results indicate that the phototoxicity of AvIR-mediated PIT is highly antigen-specific and again confirm that membrane binding of AvIR via BioAb is requisite for evoking effective photocytotoxicity. [Fig. 2 Target-specific phototoxicity of AvIR-mediated PIT. a TAA-specific binding of AvIR was assessed by flow cytometry. CHO cells were first incubated with unlabeled mAb or BioAb and then stained with AvIR. b CHO-CEA and CHO-EpCAM cells were incubated with the indicated BioAb (5 µg/ml) or DPBS for 30 min and further incubated with AvIR (5 µg/ml) for 30 min. Subsequently, the cells were irradiated with NIR light (3 J/cm²). After NIR light exposure, the cells were stained using the LIVE/DEAD cell imaging kit, and images were acquired using a fluorescence microscope to determine whether they were alive (green) or dead (red). c LIVE/DEAD cell images of an AvIR-PIT-treated co-culture of CHO-CEA and CHO-EpCAM cells. On the day before PIT treatment, the CHO-CEA cells were pre-stained with CellTracker Blue and then co-cultured with unstained CHO-EpCAM cells.] In order to evaluate the phototoxic effect quantitatively, cell viability after AvIR-PIT was assessed with the CellTiter-Glo assay, which quantifies the amount of ATP in living cells. AvIR-PIT using Bio-CEA showed potent, agent-dose-dependent phototoxicity on CEA-positive MKN-45 cells (Fig. 3a). In all the following experiments, unless otherwise specified, we used BioAb and AvIR at 5 µg/ml each.
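As an aside on the CellTiter-Glo readout used above, per-well luminescence is proportional to ATP and hence to the number of living cells, so viability is typically expressed relative to untreated controls. The following is a minimal sketch of that normalization with hypothetical well values; it is not the exact analysis pipeline used in this study.

import numpy as np

def percent_viability(treated, untreated, blank=0.0):
    """Mean background-subtracted luminescence of treated wells, expressed as a
    percentage of the untreated control wells."""
    t = np.asarray(treated, dtype=float) - blank
    u = np.asarray(untreated, dtype=float) - blank
    return 100.0 * t.mean() / u.mean()

print(percent_viability([5200, 4800, 5100], [51000, 49500, 50200]))  # ~10% viable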
MKN-45 cells express CD44 as well as CEA, and indeed Bio-CD44 was also found to be an effective MKN-45-targeting antibody for AvIR-PIT (Fig. 3b). Co-administration of Bio-CEA and Bio-CD44 (2.5 µg/ml each) showed improved phototoxicity compared with either BioAb (5 µg/ml) alone. Of note, the phototoxic effect of AvIR-PIT with Bio-CEA was not much inferior to that of conventional PIT using an IR700-conjugated anti-human CEA mAb, as we previously reported [5], again suggesting that indirect IR700 labeling of the target cells is not particularly detrimental to the efficacy of PIT. [Fig. 3 legend: The cell viability after AvIR-mediated PIT with each indicated BioAb was evaluated (left panel). When IR700-anti-CEA was used for cellular targeting, NIR light irradiation (3 J/cm²) was done without adding AvIR, as in conventional PIT treatment. The data are the means ± SEM (n = 3, *p < 0.05, **p < 0.01 vs. combination therapy using both Bio-CEA and Bio-CD44, one-way ANOVA with Dunnett's test). A representative dot plot of the FACS analysis of CEA and CD44 expression is also shown (right panel).] AvIR-mediated PIT targeting CSCs In order to investigate the application of AvIR-PIT for better tumor control, we next examined the CSC population as a therapeutic target. MCF-7 cells contained a CD24low/CD44high subpopulation, which was judged to have highly tumorigenic, CSC-like properties based on tumorsphere-formation and Matrigel invasion assays (Fig. 4a-c). AvIR-PIT with Bio-CD24 or Bio-CD44 markedly reduced the viability of MCF-7 cells (Fig. 4d). When the cells that survived the CD44- or CD24-targeted AvIR-PIT treatment were tested in the tumorsphere assay, the sphere-forming capacity of the PIT-treated cells was found to be substantially abolished (Fig. 4e). Note that, although the observed phototoxicity of AvIR-PIT with Bio-CD44 on cell viability was lower than that of AvIR-PIT with Bio-CD24 (Fig. 4d), the anti-tumorigenic effect of the former was almost equal to that of the latter, suggesting the superior effectiveness of CSC targeting for tumor control in PIT treatment. To further investigate the efficacy of CSC-targeted AvIR-PIT, we next used the HuH-7 cell line, in which the CD133+/EpCAM+ subpopulation has been reported to have CSC-like properties [18]. Indeed, in our hands, the CD133+/EpCAM+ HuH-7 cells were capable of forming many more tumorspheres than the other populations (Fig. 5a). We also found that the SP cells, which are defined by their ability to exclude the DNA-binding dye Hoechst 33342 and are known to share some characteristics of CSCs [19], were highly enriched in the CD133+/EpCAM+ subpopulation (Fig. 5b). We performed a tumorsphere assay using 2000 viable cells after AvIR-PIT with Bio-CD133 and Bio-EpCAM (2.5 µg/ml each) and found that the PIT treatment completely abolished the sphere-forming ability of HuH-7 cells (Fig. 5c). The survivors of the AvIR-PIT treatment were further subcultured for another 2 passages and then subjected to FACS analysis. [Fig. 5 legend, panels c and d: c Sphere-forming capability of AvIR-PIT-treated HuH-7 cells. Using Bio-CD133 and Bio-EpCAM, CSC-like subpopulation-targeted AvIR-PIT was performed against HuH-7 cells. After removal of the dead cells using ClioCell magnetic particles, the resulting live cells were examined by tumorsphere-formation assay. The data are the means ± SEM (n = 3; ND, not detected). d FACS analysis of AvIR-PIT-treated HuH-7 cells. The surviving cells from CSC-targeted AvIR-PIT were collected as in c, further cultured for 2 passages, and FACS-analyzed for SP and for expression of CD133 and EpCAM.]
It was revealed that the CD133+/EpCAM+ cells that had been killed by AvIR-PIT with Bio-CD133 and Bio-EpCAM hardly reemerged after passaging, and the SP cells likewise scarcely reappeared (Fig. 5d), suggesting that successful CSC-targeted killing was elicited and that CSC-targeted AvIR-PIT can effectively dampen the tumorigenicity of surviving HuH-7 cells. AvIR-PIT against the cells composing the tumor microenvironment In order to further verify the applicability of AvIR-mediated PIT, non-malignant cells that construct the tumor microenvironment were targeted. We first performed a modified soft agar colony assay with MCF-7 cells and human primary breast CAFs (Fig. 6a; see also "Materials and methods"). When MCF-7 cells in the top agar were co-cultured with CAFs at the bottom of the culture well, many more MCF-7 colonies were formed than when MCF-7 cells were cultured alone, indicating the capability of CAFs to accelerate tumor cell clonogenicity (Fig. 6a). However, when AvIR-PIT with a BioAb against FAP, a specific marker of CAFs, was performed on culture day 8, the CAF-enhanced clonogenicity of MCF-7 cells was completely canceled. Next, AvIR-PIT with a BioAb specific for CD105, one of the markers of tumor neovasculature, was performed against the capillary-like tubular structures formed by tumor endothelial 2H-11 cells. As shown in Fig. 6b, the tube structures were collapsed by AvIR-PIT treatment using Bio-CD105, but were not affected by AvIR-alone treatment with NIR irradiation. Taken together, these results demonstrate that AvIR-mediated PIT has great potential and versatility to effectively kill not only tumor cells themselves but also various components of the tumor tissue, provided that the targeting BioAbs are appropriately selected. Discussion Tumors often show heterogeneous expression of surface antigens, which may differ not only between individuals but even within the same patient [20]. Furthermore, tumor cells generally lose expression of surface antigens during malignant progression, and such antigen loss is one of the major factors contributing to tumor relapse after a specific therapy that was initially effective [21]. Thus, general antibody-based immunotherapy with a fixed target specificity cannot, in many cases, combat therapy-resistant cancers. Because of the marked target specificity and the localized NIR irradiation, if IR700-mAb conjugates specific to different TAAs were prepared each time as the situation demands, it would be possible to safely repeat cancer-targeted PIT treatment without adverse side effects due to injury to normal tissues; however, this is unlikely to be practical. In this study, we provided a feasible and universal solution to such cumbersome circumstances by introducing AvIR and BioAbs into PIT, while retaining the advantages of the conventional approach (Fig. 1). A notable merit of the AvIR-mediated PIT demonstrated here is that a wide variety of BioAbs are readily available, and only avidin needs to be chemically conjugated to IR700, once, prior to PIT treatment. Additionally, various biotinylated molecules, such as biotin-labeled small compounds and nucleic acids, may be potential candidates for cellular targeting ligands like BioAbs, as long as they are not internalized or otherwise incorporated into the target cells.
We and other groups have previously shown that IR700-mediated PIT, in contrast to conventional photodynamic therapy, exerts its phototoxic effect as long as the PS conjugates bind to the target cell membrane, without needing to enter the cell, and works effectively even in hypoxia, because the phototoxicity results from a loss of cell membrane integrity induced by photochemical damage independent of the production of reactive oxygen species [4,5]. Such features are thought to be especially suitable for targeting CSCs, because CSCs are chemo-resistant, with enhanced drug-excretion functions, and reside in a hypoxic tumor niche [12,19]. We demonstrated that AvIR-mediated PIT using BioAbs specific to CSC markers is able to reduce the tumorigenicity of the MCF-7 and HuH-7 cell lines. Moreover, the viability of MKN-45 cells was greatly decreased by CD44-targeted AvIR-PIT (Fig. 3b). The MKN-45 cell line has been reported to include tumorigenic CSC-like cells expressing stemness factors such as Oct4 and Sox2 in a CD44-positive subpopulation [22], suggesting that the CSC-like subpopulation of MKN-45 could also be effectively removed by AvIR-PIT. As shown in Fig. 4, the anti-tumorigenicity induced by AvIR-PIT with Bio-CD44 was as effective as that with Bio-CD24, while the effect on the reduction of MCF-7 viability was weaker for Bio-CD44 than for Bio-CD24. This implies that the CSC-targeting strategy is sufficient to control tumor growth. However, considering the in vivo situation, because CSCs are hidden deep within tumors, it is likely to be important to kill bulk tumor cells (CD24+ cells in this case) and to deliver photosensitive agents effectively to the sites where CSCs exist. Therefore, the simultaneous targeting of TAAs and CSC markers enabled by AvIR-mediated PIT would be useful for more efficacious tumor suppression and, further, for the eradication of cancer cells. In antibody-based therapies, especially when mAbs with strong binding affinity are used and/or tumor cells express high levels of antigen, a phenomenon known as the "binding site barrier", in which mAbs are saturated in the perivascular space and cannot penetrate deeper into the tumor, is sometimes problematic [23]. Nakajima et al. reported that this problem could be overcome by using a cocktail of two different IR700-mAbs for a more homogeneous intratumoral distribution of the PIT agent, showing enhanced therapeutic effects compared with the use of either IR700-mAb alone [24]. Such an approach can also be readily applied to AvIR-based PIT by using BioAbs with different profiles. [Fig. 6 legend, in part: The data are the means ± SEM (n = 3). b Effect of CD105-targeted AvIR-PIT on the capillary-like structures formed by 2H-11 tumor endothelial cells. LIVE/DEAD cell imaging was performed on the 2H-11 tubes after AvIR-mediated PIT with Bio-CD105. The rightmost panel shows the relative fluorescence intensity, i.e., the ratio of the total green fluorescence intensity per well after NIR irradiation to that before irradiation. The data are the means ± SEM (n = 3).] We also demonstrated that AvIR-PIT can target the tumor-supportive cells that reside in the tumor microenvironment, such as CAFs and tumor endothelial cells. Because these types of cells have been shown to play a crucial role in the development and maintenance of the majority of solid tumors [25-27], therapeutic approaches that target them are not limited to particular tumor types and can be effectively applied to a wide range of tumors.
In light of the clinical success of bevacizumab (Avastin®), a humanized mAb against vascular endothelial growth factor (VEGF), antiangiogenic treatment approaches for solid tumors have been extensively investigated, including a recent report by Nishimura and colleagues on tumor neovasculature-targeted PIT using an IR700-conjugated anti-VEGF receptor 2 mAb [11]. Because CSCs are preferentially located in the specialized perivascular niche [28], which maintains their stemness, and because disruption of the tumor vessels leads to increased vascular permeability and leakage of macromolecules like BioAbs, combined treatment of CSCs and tumor vasculature by AvIR-mediated PIT is expected to be especially promising. The application of PIT currently attracting the most attention is the activation of tumor immunity. Previously, Sato et al. demonstrated that PIT with IR700-conjugated anti-CD25 for targeting regulatory T cells (Tregs) can cause site-specific killing of Tregs in the NIR-irradiated tumor bed and induce regression not only of PIT-treated tumors but also of distant non-treated tumors [29]. This is probably due to CD8+ T cells and NK cells locally activated in the treated tumor site by the spatially selective depletion of Tregs, leading to reversal of the immunosuppressive environment. If AvIR-mediated PIT is applied to this strategy, Tregs and/or other immune-suppressor cells, such as myeloid-derived suppressor cells and tumor-associated macrophages, could be reliably targeted and treated simultaneously with tumor cells by using a cocktail of BioAbs against surface markers of the tumor cells and the suppressor cells. Combination with immune-checkpoint therapy may also further enhance host immunity. On the other hand, one of the important obstacles to the successful clinical application of AvIR-PIT is the potential immunogenicity of NeutrAvidin in humans. However, this issue is likely to be avoidable by using, instead of NeutrAvidin, a hypoimmunogenic avidin mutant [30] or Bradavidin II, which originates from Bradyrhizobium japonicum, a nitrogen-fixing bacterium, and is reported to have low immunogenic potential [31], or, more straightforwardly, by using a commercially available humanized anti-biotin antibody. Another problem may be the molecular size of AvIR. NeutrAvidin, a deglycosylated version of avidin used for the preparation of AvIR, contains four identical biotin-binding subunits with a total molecular mass of 60 kDa. This size is much smaller than that of IgG (~150 kDa) and close to that of an immunoglobulin Fab fragment (~50 kDa). While such a small targeting protein might be undesirable for PIT in terms of pharmacokinetics, i.e., faster clearance from the circulation and lower tumor retention, it is also likely to be superior in terms of rapid tumor accumulation and better penetration into tumor tissues. Indeed, previous reports demonstrated that smaller antibody fragments are advantageous in some PIT settings [24,32]. Conclusions In summary, we developed a novel type of PIT utilizing AvIR, an IR700-conjugated avidin protein, as a universal PIT agent, together with BioAbs for specific cellular targeting. Our results suggest that AvIR-mediated PIT would enable sequential or simultaneous targeting not only of bulk tumor cells but also of multiple tumor-supporting and/or immunosuppressive cells, and would allow integrative and efficacious control of a tumor and its microenvironment, overcoming tumor heterogeneity.
In vivo studies are now being pursued in our laboratory to further confirm the therapeutic potential and evaluate the clinical impact of AvIR-mediated PIT.
7,312.2
2019-11-15T00:00:00.000
[ "Biology", "Chemistry" ]
Analysis of the Implementation of the Construction Safety Management System (SMKK) on the Cisempur-Budiwangi Road Reconstruction Activity, Cibalong District, Tasikmalaya Regency. The construction service industry is an industrial sector with a high risk of work accidents. For this reason, implementing the Construction Safety Management System (SMKK) is expected to reduce the number of work accidents. Many road reconstruction projects are carried out without implementing the SMKK in accordance with the rules set out in PUPR Minister Regulation Number 10 of 2021 concerning Construction Safety Management System Guidelines. This study aims to determine the implementation of the SMKK in the Cisempur-Budiwangi Road Reconstruction Activities, Cibalong District, Tasikmalaya Regency. The research was conducted at several locations in the Budiwangi Road Reconstruction project, Cibalong District, Tasikmalaya Regency. This research is qualitative descriptive, with data obtained by observation, interviews, and the completion of checklists. The results of the study: 1) the level of implementation of the SMKK shows a percentage of appropriate application of 67.44% and findings of nonconformity of 32.56% (minor category); 2) the factor causing the non-fulfillment is the absence of processing documents and special formats for changes in the field that impact K3; 3) the response and improvement actions that can be taken are to create special procedures and formats for changes that have implications for K3. INTRODUCTION Construction projects are a work sector with a high level of risk of work accidents, largely due to low awareness of the importance of implementing a good Occupational Safety and Health Management System (SMK3) in accordance with the applicable laws and regulations. Applying K3 to a construction project is often considered merely a cost burden rather than an investment to prevent work accidents. However, neglecting it can generate losses for the construction project itself. Given the high urgency of K3 in the construction sector in Indonesia, the government has regulated the implementation of K3 in law, together with the obligations for its implementation in all sectors of the construction industry. This is done so that the application of K3 becomes an absolute requirement to protect workers and minimize the risk of work accidents, with the aim of increasing performance productivity and guaranteeing the quality and safety of a job so that zero accidents can be achieved. b) Secondary data: a study of literature, books, papers, online media, and reports obtained from similar previous studies. The data reviewed on construction projects include the company's K3 structure and the supporting documents for the implementation of SMK3, which are reviewed to assess the implementation and improvement of the system that has been implemented.
DISCUSSION The company's Construction Safety Management System (SMKK) is run effectively through leadership and commitment, with objectives that lead to corrective actions and continuous improvement. Continuous improvement ensures that the system, the manuals, and the other procedures and components that make up the system are improved and developed continuously to increase its efficiency and effectiveness. The relationship of each component/system carried out by the company, along with its primary duties and responsibilities, can be seen in the following figure. The results and analysis were obtained after conducting an audit based on questions/assessments in the form of a checklist referring to the provisions on fulfillment of requirements, with an assessment using a predetermined calculation formula; from this, the percentage value of the level of achievement of application of the Construction Safety Management System (SMKK) in the Cisempur-Budiwangi Road Reconstruction Project, Cibalong District, Tasikmalaya Regency, was obtained. The following is a description of the assessment and the percentage of fulfillment of requirements for each of the 12 audit criteria elements, comprising a total of 86 sub-elements of criteria. The findings in Table 2 show that, of the 86 sub-elements of audit criteria, 58 criteria show appropriate/fulfilled application and 28 criteria show inappropriate/unfulfilled application (minor category). Some documentation of the application of SMKK in the Cisempur-Budiwangi Road Reconstruction Project, Cibalong District, Tasikmalaya Regency, is as follows: Figure 1. Service Provider Organizational Structure; Figure 1. PPE Wearing during Safety Talk. [...] is commemorated as the National Occupational Safety and Health (K3) month. However, sadly, the number of work accident cases in Indonesia increases every year. At least, that is shown by the BPJS Employment data on work accidents over the last three years. Based on data from BPJS Ketenagakerjaan, in 2020 the number of work accidents reached 221,740 cases; that number increased in 2021 to 234,370 cases and continued to increase in 2022, with 265,334 work accidents recorded by November 2022. The high number of work accidents in Indonesia, according to the Central Bureau of Statistics and BPJamsostek (BPJS-Ketenagakerjaan), is generally caused by natural disasters (3%), inadequate and unqualified environments and equipment (24%), and unsafe behavior (73%), such as ignoring the use of personal protective equipment (PPE), marker signs, and K3 control procedures. Data collection techniques are the methods used in research activities to collect data through surveys conducted in the study area. The techniques for collecting such data can be described as follows: 1. Literature studies, carried out by searching the literature in national and international journals, previous research, the internet, and books related to the research and the problems being studied; 2. The data collection instrument used is an interview in the form of a checklist of questions referring to the provisions of the audit criteria elements based on PP No. 50 of 2012 concerning the Application of SMK3, PP No. 14 of 2021, and PUPR Minister Regulation No.
10 of 2021 concerning SMKK. The data were taken from several respondents who are considered experts and hold authority in the application of the Construction Safety Management System (SMKK) in the construction project being studied. This study uses two types of data, namely primary data and secondary data, as follows: a) Primary data, obtained through field surveys by direct observation, interviews, and internal audits based on the assessment criteria for the application of the K3 system in the Cisempur-Budiwangi Road Reconstruction Project, Cibalong District, Tasikmalaya Regency. The implementation of SMKK is what matters most in its application in construction companies. The Construction Safety Management System (SMKK) adopted ISO 45001:2018 with several adjustments, especially for the Indonesian construction services sector, after the issuance of Law No. 2 of 2017 concerning Construction Services. Law No. 2 of 2017 concerning Construction Services mandates, in Article 3, that the purpose of providing construction services is to give direction to the growth and development of construction services, in order to realize a strong, reliable, highly competitive business structure and high-quality construction service results (BPSDM PUPR, 2021). The Cisempur-Budiwangi Road Work Package of Cisempur Village, Cibalong District, is a construction project with a relatively high risk of work accidents. This is because many workers use sophisticated tools or machines that require special methods, expertise, and supervision, which can cause various unwanted impacts, including on occupational safety and health. Neglect of the application of K3 in construction projects can create the risk of work accidents. Construction activities must be managed with regard to the standards of the applicable regulations, legislation, and K3 provisions. This research analyzes whether the Road Reconstruction Project, Cisempur-Budiwangi Road Work Package, Cisempur Village, Cibalong District, has implemented a Construction Safety Management System (SMKK) in accordance with the applicable laws and regulations. Table 1. The assessment falls in the Advanced Level category, consisting of 86 sub-elements of criteria that must be met in implementing SMKK in the Cisempur-Budiwangi Road Reconstruction Project, Cibalong District, Tasikmalaya Regency, in order to fulfill the implementation of SMKK and prevent work accidents. The achievement value of application is calculated using the general implementation compliance assessment formula. Table 2. Table and graph: assessment of the application of the SMKK audit criteria elements. Figure 2. Percentage graph of the application of the 12 elements of the SMKK audit criteria.
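For clarity, the reported percentages follow directly if the general achievement formula is simply the share of fulfilled sub-elements among all 86 audited sub-elements, as the study's figures imply; the short check below makes that arithmetic explicit.

fulfilled, total = 58, 86
compliance = 100.0 * fulfilled / total               # appropriate application
nonconformity = 100.0 * (total - fulfilled) / total  # minor-category findings
print(f"{compliance:.2f}% compliant, {nonconformity:.2f}% nonconforming")
# -> 67.44% compliant, 32.56% nonconforming, matching the study's results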
1,863.4
2023-10-16T00:00:00.000
[ "Engineering", "Environmental Science" ]
An Efficient Feature Extraction Method, Global Between Maximum and Local Within Minimum, and Its Applications. Feature extraction plays an important role as a preprocessing procedure in dealing with small sample size problems. LDA, LPP, and many other existing methods are confined to only one aspect of the data set: either its global structure or its local geometry. To solve this problem, we propose an efficient method in this paper, named global between maximum and local within minimum (GBMLWM). It not only considers the global structure of the data set, but also makes the best of its local geometry by dividing the data set into four domains. This method preserves nearest-neighborhood relations and demonstrates excellent performance in classification. The superiority of the proposed method is manifested in many experiments on data visualization, face representation, and face recognition. Introduction Nowadays, with the continual development of information technology, the amount of data has expanded greatly, for example in the domains of pattern recognition, artificial intelligence, and computer vision. Because the dimension of the samples in a data set is much greater than the number of available samples, this results in "the curse of dimensionality" [1]. Feature extraction methods play an important role in dealing with small sample size (SSS) problems. Feature extraction represents the original high-dimensional data in a low-dimensional space by capturing important data structure and information, and is a common preprocessing procedure in multivariate statistical data analysis. At present, feature extraction methods have been successfully applied in many domains, such as text classification [2], remote sensing image analysis [3], microarray data analysis [4], and face recognition [5,6]. (1) The GBMLWM method shares excellent properties with LDA and MMC. In this paper, we maintain the global merit in the process of global between maximum: similar to LDA, we first keep all samples in the data set away from the class centroids, and then let the samples that are labeled the same class as the fixed sample but lie beyond its nearest neighborhood approach their class centroid. Thus, GBMLWM is a supervised method, and it is able to separate the classes of the data set while keeping each class compact. (2) GBMLWM also preserves locality: it lets the samples labeled the same class as the fixed sample in its nearest neighborhood approach it, and, at the same time, lets the samples with a different label move away from it. Thus, GBMLWM maintains the submanifold space of the fixed sample. (3) In connection with PCA, LDA, MMC, LPP, and ANMM, we can derive those methods from the GBMLWM framework by imposing certain conditions; that is to say, those methods are special cases of GBMLWM. Visualization and classification experiments also indicate that the proposed method is superior to the above methods. The rest of this paper is organized as follows. Section 2 briefly reviews global and local methods, that is, PCA, LDA, MMC, LPP, and ANMM. The GBMLWM algorithm is put forward in Section 3, and its relationship with the above methods is also discussed in that section. The experimental results are presented in Section 4. The conclusion appears in Section 5. Brief Review of Global and Local Methods Suppose that $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$ is a set of $m$-dimensional samples of size $n$, composed of classes $C_i$,
$i = 1, \ldots, C$, where each class contains $n_i$ samples with $\sum_{i=1}^{C} n_i = n$; let $x_j^i$ denote the $m$-dimensional column vector of the $j$th sample from the $i$th class. Generally speaking, the aim of linear feature extraction or dimensionality reduction is to find an optimal linear transformation $W \in \mathbb{R}^{m \times d}$ ($d \ll m$) from the original high-dimensional space to the target low-dimensional space, $y_i = W^T x_i$, such that the transformed data best represent, in terms of different optimality criteria, different information such as the algebraic and geometric structure. Principal Component Analysis PCA attempts to seek an optimal projection direction such that the covariance of the data set is maximized, or the average cost of projection is minimized, after transformation. The objective function of PCA can be written as $W^* = \arg\max_W \sum_{i=1}^{n} \|W^T (x_i - m_x)\|^2$ (2.1), which, applying standard algebra, may be rewritten as $W^* = \arg\max_W \mathrm{tr}(W^T S_t W)$ (2.2), where $S_t = \sum_{i=1}^{n} (x_i - m_x)(x_i - m_x)^T$ (2.3) is the sample covariance (total scatter) matrix and $m_x$ is the mean of all samples. The optimal $W = [w_1, w_2, \ldots, w_d]$ consists of the eigenvectors of $S_t$ corresponding to the first $d$ largest eigenvalues. Linear Discriminant Analysis The purpose of LDA is to discriminate and classify; it seeks an optimal discriminative subspace by maximizing the between-class scatter while minimizing the within-class scatter. LDA finds a set of vectors $W$ according to $W^* = \arg\max_W |W^T S_b W| / |W^T S_w W|$ (2.4), where $S_b = \sum_{i=1}^{C} n_i (m_i - m_x)(m_i - m_x)^T$ and $S_w = \sum_{i=1}^{C} \sum_{j=1}^{n_i} (x_j^i - m_i)(x_j^i - m_i)^T$ (2.5) respectively represent the between-class scatter matrix and the within-class scatter matrix, and $m_i$ is the mean of the $i$th class. The projection directions $W$ are the generalized eigenvectors $w_1, w_2, \ldots, w_d$ solving $S_b w = \lambda S_w w$, associated with the first $d$ largest eigenvalues. Maximum Margin Criterion MMC keeps as much of the similarity or dissimilarity information of the high-dimensional space as possible after dimensionality reduction, by employing the overall variance and measuring the average margin between different classes. MMC's projection matrix is $W^* = \arg\max_W \mathrm{tr}[W^T (S_b - S_w) W]$ (2.6), where $S_b$ and $S_w$ are defined as in (2.5). Locality Preserving Projection PCA, LDA, and MMC aim to preserve the global structure of the data set, whereas LPP preserves its local structure. LPP models the local submanifold structure by maintaining the neighborhood relations of the samples before and after transformation. With the same notation as above, the objective function of LPP is $\min_W \sum_{i,j} \|W^T x_i - W^T x_j\|^2 SL_{ij}$ subject to $W^T X D X^T W = I$ (2.7), where $D$ is a diagonal matrix with $D_{ii} = \sum_j SL_{ij}$, $i = 1, \ldots, n$, and $L = D - SL$ is the Laplacian matrix. Here $SL = (SL_{ij})_{n \times n}$ is a similarity matrix defined as $SL_{ij} = \exp(-\|x_i - x_j\|^2 / t)$ if $x_j \in N_i$ or $x_i \in N_j$, and $SL_{ij} = 0$ otherwise (2.8), where $t$ is a kernel parameter and $N_i$ is the set of nearest neighbors of $x_i$. The optimal $W$ is given by the $d$ eigenvectors corresponding to the minimum eigenvalues of the generalized eigenvalue problem $X L X^T w = \lambda X D X^T w$ (2.9). Average Neighborhood Margin Maximum Different from PCA and LDA, ANMM obtains effective discriminating information by maximizing the average local neighborhood margin. For each sample, ANMM pulls the neighborhood samples with the same label toward it as near as possible, while pushing the neighborhood samples with different labels away from it as far as possible. ANMM's solution is $W^* = \arg\max_W \mathrm{tr}[W^T (A - B) W]$ (2.10), where $A = \sum_i \sum_{k: x_k \in N_i^e} (x_i - x_k)(x_i - x_k)^T / |N_i^e|$ is called the scatterness matrix, $B = \sum_i \sum_{j: x_j \in N_i^o} (x_i - x_j)(x_i - x_j)^T / |N_i^o|$ is called the compactness matrix, and $N_i^e$ and $N_i^o$ are, respectively, the $\xi$ nearest heterogeneous and homogeneous neighborhoods of $x_i$; $|\cdot|$ denotes the cardinality of a set. Here, we can regard ANMM as the local version of MMC.
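To make the reviewed global criteria concrete, the following sketch computes the scatter matrices of (2.5) and the resulting LDA and MMC projections, assuming NumPy/SciPy, samples stored as columns, and integer class labels; the small ridge added to $S_w$ is a common practical safeguard for SSS data, not part of the original formulations.

import numpy as np
from scipy.linalg import eigh

def scatter_matrices(X, y):
    """X: (m, n) samples as columns; y: (n,) class labels.
    Returns the between-class and within-class scatter matrices of (2.5)."""
    m_x = X.mean(axis=1, keepdims=True)
    S_b = np.zeros((X.shape[0], X.shape[0]))
    S_w = np.zeros_like(S_b)
    for c in np.unique(y):
        Xc = X[:, y == c]
        m_c = Xc.mean(axis=1, keepdims=True)
        S_b += Xc.shape[1] * (m_c - m_x) @ (m_c - m_x).T
        S_w += (Xc - m_c) @ (Xc - m_c).T
    return S_b, S_w

def lda(X, y, d):
    S_b, S_w = scatter_matrices(X, y)
    ridge = 1e-8 * np.eye(S_w.shape[0])   # keeps S_w invertible for SSS data
    vals, vecs = eigh(S_b, S_w + ridge)   # generalized problem S_b w = lambda S_w w
    return vecs[:, np.argsort(vals)[::-1][:d]]

def mmc(X, y, d):
    S_b, S_w = scatter_matrices(X, y)
    vals, vecs = eigh(S_b - S_w)          # ordinary eigenproblem: no inverse needed
    return vecs[:, np.argsort(vals)[::-1][:d]]

Note the design difference the text emphasizes: MMC solves an ordinary symmetric eigenproblem and thus never inverts $S_w$, which is exactly what makes it robust in small-sample-size settings.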
Global Between Maximum and Local Within Minimum

In this section, we present our algorithm: global between maximum, simultaneously local within minimum (GBMLWM). It profits from both global and local methods: the GBMLWM algorithm preserves not only the local neighborhood of the submanifold structure, but also the global information of the data set. To state the proposed algorithm, we first define four domains with respect to $x_i$ as follows:

Domain I: the samples that belong to the nearest neighborhood of $x_i$ and are labeled the same class as $x_i$.

Domain II: the samples that also belong to the nearest neighborhood of $x_i$, but are labeled a different class from $x_i$.

Domain III: the samples labeled the same class as $x_i$ that do not lie in its nearest neighborhood.

Domain IV: the samples that do not lie in the nearest neighborhood of $x_i$ and are labeled a different class from $x_i$.

Figure 1 gives an intuition for the above four domains. The nearest neighborhood of $x_i$ consists of domains I and II. The samples labeled the same class as $x_i$ lie in domains I and III, and the samples labeled a different class from $x_i$ lie in domains II and IV.

Global Between Maximum

The purpose of classification and feature extraction is to keep the samples labeled as different classes apart from each other. We first operate on the points in domains II and IV by maximizing the global and local between-class scatter. That is to say, our aim is not only to make the data globally separable, but also to maximize the distance between different classes within the nearest neighborhood. The corresponding objective functions are given in (3.1).

Local Within Minimum

For classification, maximizing the between-class scatter alone is not adequate; compacting the within-class scatter is also required. We therefore make the samples from domain I close to $x_i$ itself, move the samples from domain II away from $x_i$, and make the samples from domain III close to their own class centroids, as expressed in (3.2)-(3.5).

GBMLWM Algorithm

In the preceding description, the nearest neighborhood of $x_i$ is taken as the K nearest neighbors based on the Euclidean distance between two samples of the data set. Our overall objective function is given in (3.6)-(3.7), and the optimal projection directions W are the solutions of the optimization problem (3.8); that is, $W = [w_1, \ldots, w_d]$ consists of the eigenvectors of $M w = \lambda w$ corresponding to the first d largest eigenvalues. The GBMLWM algorithm is thus fairly straightforward: no matrix inverse has to be computed, so it entirely avoids the SSS problem. The procedure of the GBMLWM algorithm is formally summarized as follows (a sketch of step (1) is given at the end of this section):

(1) for each sample $x_i$, $i = 1, \ldots, n$, divide the samples of the data set other than $x_i$ into the four domains I, II, III, and IV;

(2) compute $S_t$, $L_1$, $L_2$, $SL_w$ according to (2.3), (3.3), (3.4), and (3.5), respectively;

(3) obtain the matrix M according to (3.7);

(4) compute the generalized eigenvectors of $M w = \lambda w$ and the optimal projection matrix $W = [w_1, \ldots, w_d]$ corresponding to the d largest eigenvalues, where d is the rank of the matrix M.

For a testing sample x, its image in the lower-dimensional space is given by

$$x \longrightarrow y = W^T x. \quad (3.9)$$
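As a concrete illustration of step (1), the sketch below, which is not the authors' implementation, partitions, for each sample $x_i$, all remaining samples into domains I-IV using the K nearest neighbors under the Euclidean distance, exactly as the four domains are defined above.

```python
import numpy as np

def four_domains(X, y, K):
    """X: m x n data matrix (columns = samples); y: labels; K: neighborhood size.
    Returns, for each i, a dict with index arrays for domains I-IV."""
    n = X.shape[1]
    # pairwise squared Euclidean distances between all samples
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    out = []
    for i in range(n):
        order = np.argsort(D2[i])
        order = order[order != i]          # exclude x_i itself
        nbr = set(order[:K].tolist())      # K nearest neighbors of x_i
        same = (y == y[i])
        dom = {'I': [], 'II': [], 'III': [], 'IV': []}
        for j in range(n):
            if j == i:
                continue
            if j in nbr:
                dom['I' if same[j] else 'II'].append(j)    # inside neighborhood
            else:
                dom['III' if same[j] else 'IV'].append(j)  # outside neighborhood
        out.append({k: np.array(v, dtype=int) for k, v in dom.items()})
    return out
```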
Discussion

Here we show that methods limited to the global structure or the local geometry of the data set are special cases of the GBMLWM algorithm. PCA regards the data set as a single whole domain and demands that all samples lie far from the total mean of the data set; thus PCA is an unsupervised special case of the GBMLWM algorithm. Both MMC and LDA divide the samples other than $x_i$ into two domains: one composed of the samples labeled the same class as $x_i$, the within-class domain ($S_w$); the other containing the samples labeled differently from $x_i$, the between-class domain ($S_b$). They respectively correspond to the domains I ∪ III and II ∪ IV, as illustrated in Figure 1. The local methods, such as LPP and ANMM, differ from the above global-structure methods: they divide the whole data set into two domains according to the nearest neighborhood of $x_i$. LPP operates in I ∪ II, while the ANMM method operates in domains I and II separately, as depicted in Figure 1. These local methods do not utilize the global information of the data set and are local special cases of the algorithm proposed in this paper. The superiority of the GBMLWM algorithm is manifested in the experiments of the following section.

Training cost is the amount of computation required to find the optimal projection vectors and the feature vectors of the training set used for comparison. We compare the training costs of the methods based on their computational complexities. Here we suppose that each class has the same number of training samples. If we regard each column vector as a computational cell and do not consider the computational complexity of the eigen-analysis, we can approximately estimate the computational complexity of the six different algorithms, comprising both local and global techniques. Table 1 gives the analysis of computational complexity for the six algorithms. From Table 1, we can see that our method has the largest training cost. However, in practice, the size of the neighborhood and the number of classes are often not large enough to add much computation to our algorithm. The computational complexity of GBMLWM also shows that the algorithm considers the global information while utilizing the local geometry, which makes it reflect the intrinsic structure of the training set efficiently. The following experimental results also confirm this point.

Experiments

In this section, we carry out several experiments to show the effectiveness of the proposed GBMLWM method for data visualization, face representation, and recognition. We compare the global methods (PCA, LDA, MMC) and the local methods (LPP, NPE, ANMM) with our proposed method on four databases: the MNIST digit, Yale, ORL, and UMIST databases. In the preprocessing by PCA, we retain only N − C dimensions to ensure that the scatter matrix is nonsingular. In the testing phase, the neighborhood size k is determined by 5-fold cross-validation in all experiments, and the nearest neighbor (NN) rule is used for classification. When using the LPP and GBMLWM algorithms, the weight between two samples is computed with a Gaussian kernel, and the kernel parameter is selected as follows: we first compute the pairwise distances among all training samples, and then t is set to half the median of those pairwise distances (see the code sketch below).

Data Visualization

In this subsection, we first use a publicly available set of handwritten digits to illustrate data visualization. The MNIST database [20] has 10 digits, and each digit contains 39 samples, giving 390 samples in total, where each image has size 20 × 16. Here we select only 20 samples from each digit, so the size of the training set is 320 × 200, and each image is represented lexicographically as a high-dimensional vector of length 320.
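The two experimental conventions just stated, the kernel parameter t set to half the median pairwise training distance and the nearest neighbor rule, can be sketched as follows. This is illustrative code, not the authors', and the 5-fold cross-validation loop used to pick k is omitted for brevity.

```python
import numpy as np

def kernel_parameter(X_train):
    """t = half the median pairwise Euclidean distance (training samples as columns)."""
    D2 = ((X_train[:, :, None] - X_train[:, None, :]) ** 2).sum(axis=0)
    n = X_train.shape[1]
    dists = np.sqrt(D2[np.triu_indices(n, k=1)])  # upper triangle: each pair once
    return 0.5 * np.median(dists)

def nn_classify(W, X_train, y_train, X_test):
    """Project with W, then assign each test sample the label of its nearest
    training sample (the NN rule)."""
    Ytr, Yte = W.T @ X_train, W.T @ X_test
    d2 = ((Yte[:, :, None] - Ytr[:, None, :]) ** 2).sum(axis=0)  # n_test x n_train
    return y_train[np.argmin(d2, axis=1)]
```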
Figure 2 shows all the samples of the ten digits. For visualization, we project the data set into 2-D space with all seven subspace learning methods, and the results are depicted in Figure 3 (two-dimensional projections of the handwritten digits by the seven related subspace learning methods, with a distinct marker for each digit 0-9). With the exception of LDA and GBMLWM, the samples from the different digits heavily overlap. Compared with the GBMLWM algorithm, LDA collapses the samples from the same class to a single point. Although this phenomenon is helpful for classification, it has poor generalization ability, since it does not exhibit the within-class variation of each object. The GBMLWM algorithm not only separates the digits, but also shows what is hidden within each digit. When the number of nearest neighbors of $x_i$ is reduced from K = 15 to K = 2, the samples from the same object become more and more compact; this also verifies that LDA is a special case of GBMLWM.

Yale Database

This experiment aims to demonstrate the ability to capture the important information of faces on the Yale face database [21], here called face representation. The Yale face database contains 165 gray-scale images of 15 individuals. There are 11 images per subject, one per facial expression or configuration: center-light, with/without glasses, happy, left/right light, normal, sad, sleepy, surprised, and wink. All images from the Yale database were cropped, and the cropped images were normalized to 32 × 32 pixels with 256 gray levels per pixel. Some samples from the Yale database are shown in Figure 4. Here, the training set is composed of all the samples of this database, and the 10 most significant eigenfaces obtained from the Yale face database by the seven subspace learning methods are shown in Figure 5. From Figure 5, we see that our algorithm captures more of the basic information of the face than the other methods.

UMIST Database

The UMIST database [22] contains 564 images of 20 individuals, each covering a range of poses from profile to frontal views; the subjects cover a range of race, sex, and appearance. We use a cropped version of the UMIST database that is publicly available at S. Roweis' Web page. All cropped images were normalized to 64 × 64 pixels with 256 gray levels per pixel. Figure 6 shows some images of one individual. We randomly select three, four, five, and six images of each individual for training, and use the rest for testing. We repeat these trials ten times and compute the average results (the evaluation protocol is sketched below). The maximal average recognition rates of the seven subspace learning methods are presented in Table 2. From Table 2, we find that the highest accuracies of the GBMLWM algorithm are, respectively, 79.88%, 86.10%, 91.85%, and 93.80% on the different training sets and corresponding testing sets. The improvements are significant. Furthermore, the dimensions of the four GBMLWM subspaces corresponding to the maximal recognition rates are remarkably low: 15, 13, 11, and 18, respectively.
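The repeated random-split protocol used for UMIST (and for ORL below) can be summarized in a short sketch. Here `fit` is a hypothetical placeholder for any of the seven subspace methods, and `nn_classify` is the helper from the earlier sketch.

```python
import numpy as np

def average_recognition_rate(X, y, p, fit, trials=10, seed=0):
    """Average NN recognition rate over `trials` random splits, with p training
    images per subject (p = 3, 4, 5, 6 in the experiments above)."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(trials):
        tr, te = [], []
        for c in np.unique(y):
            idx = rng.permutation(np.flatnonzero(y == c))
            tr.extend(idx[:p])
            te.extend(idx[p:])
        tr, te = np.array(tr), np.array(te)
        W = fit(X[:, tr], y[tr])                          # learn the projection
        pred = nn_classify(W, X[:, tr], y[tr], X[:, te])  # NN rule, see above
        rates.append(np.mean(pred == y[te]))
    return float(np.mean(rates))
```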
ORL Database

The ORL face database [23] contains 40 distinct subjects, each with ten different images, giving 400 images in all. For some subjects, the images were taken at different times, with varying lighting, facial expressions, and facial details. All images were taken against a dark homogeneous background with the subjects in an upright, frontal position. All images from the ORL database were cropped, and the cropped images were normalized to 32 × 32 pixels with 256 gray levels per pixel. Some samples from this database are shown in Figure 7. In this experiment, four training sets correspond, respectively, to three, four, five, and six samples from each subject, with the remaining samples forming the testing sets. We repeat these trials ten times and compute the average results. The recognition rates versus the reduced dimensions are shown in Figure 8, and the best average recognition rates of the seven subspace learning methods are presented in Table 3. It can be seen that the recognition rates of the GBMLWM algorithm remarkably outperform the other methods on all four training subsets, with highest accuracies of 90.07%, 94.67%, 97.20%, and 97.31%, respectively. The standard deviations of GBMLWM corresponding to the best results are 0.03, 0.02, 0.02, and 0.02.

Conclusions

In this paper, we have proposed a new linear projection method, called GBMLWM. It is an efficient linear subspace learning method with both supervised and unsupervised character. Similar to PCA, LDA, and MMC, we consider the global character of the data set; at the same time, similar to LPP, NPE, and ANMM, we make the best use of the local geometric structure of the data set. We have pointed out that the existing linear subspace learning methods are special cases of our GBMLWM algorithm. A large number of experiments demonstrate that the proposed method is clearly superior to other existing methods, such as LDA and LPP.

Figure 1: The four domains into which the samples of the data set other than $x_i$ are divided. The left figure shows the four domains in the original high-dimensional space, and the right depicts them in the low-dimensional space.

Figure 2: All the samples of handwritten digits from 0 to 9 used in our data visualization experiment.

Figure 4: Some face samples from the Yale database.

Figure 5: The 10 most significant eigenfaces obtained from the Yale face database by the seven subspace learning methods: PCA, LDA, MMC, LPP, NPE, ANMM, and GBMLWM, from top to bottom.

Figure 6: Some face samples from the UMIST database.

Figure 7: Some face samples from the ORL database.

Figure 8: Average recognition rates of the seven subspace learning methods, for different numbers of samples from each object, versus the reduced dimensions on the ORL database.
Table 1: Approximate computational complexity estimates for the six different algorithms, where n denotes the total number of training samples, and k and C are the size of the neighborhood and the number of classes, respectively.

Table 2: Recognition accuracy (%) of the different algorithms on the UMIST database, with the numbers in brackets giving the corresponding dimensions.

Table 3: The best recognition accuracy (%) of the different algorithms on the ORL database, with the numbers in brackets giving the corresponding standard deviations.
4,779.6
2011-07-12T00:00:00.000
[ "Computer Science" ]
Use of interaction domains for a displacement-based design of caisson foundations

Caisson foundations, typically adopted for both onshore and offshore structures, are usually subject to combined loading acting during working conditions and exceptional events such as earthquakes. Assessment of their performance under general loadings is therefore fundamental, for both serviceability and ultimate limit states. In this study, a simplified displacement-based approach, aimed at the preliminary design of caisson foundations subjected to combined loading, is presented. Such an approach requires the definition of both interaction domains (IDs) and generalised pushover curves, together with the assumption of an associative flow rule. The IDs and pushover curves are obtained by interpreting the results of a set of 3D finite element nonlinear static analyses, in which the response of massive cylindrical onshore caisson foundations, embedded in a layered soil profile and subjected to both centred vertical (N) and combined loads (N-Q-M), is investigated. Following previous works, the influence of the initial loading factor and the caisson embedment ratio on both the shape and size of the IDs is investigated. Additionally, the effect of soil drainage conditions on the IDs is discussed. The role of the load reference point (LRP) is also assessed, since a suitable choice of LRP may strongly simplify the geometrical representation of the ID. Analytical expressions for dimensionless IDs and pushover curves are presented and used at a preliminary design stage to evaluate the maximum generalised load acting on the caisson for a given threshold generalised displacement, so as not to exceed either serviceability or ultimate limit states.

Notation:
|F|: dimensionless generalised force as defined in Eqs. (2)
|F|_lim: limit value of the dimensionless generalised force
|u|: dimensionless generalised displacement as defined in Eqs. (2)
|u|_el = |F|_lim/K_0: elastic value of |u| corresponding to |F| = |F|_lim
A_c: caisson cross-section area
a_n, a_l: ellipse semi-axes [Eqs. (9)]
C: elastic compliance matrix of the soil-caisson system
c′: soil effective cohesion
c′_red: soil reduced effective cohesion according to [10]
c_11, c_12, c_13: interpolating parameters [Eq. (8)]
ν: Poisson's ratio
n = Q/N_lim,net: dimensionless horizontal force
σ_v(z = H): total vertical stress at z = H in lithostatic conditions
φ′: soil internal friction angle
φ′_red: soil reduced internal friction angle according to [10]
v = 1/F_Sv: initial loading factor
ψ: soil dilatancy angle
ω: ellipse rotation angle [Eq. (8)]

1 Introduction

Caissons are embedded foundations characterised by large mass and stiffness, typically employed for both offshore and onshore structures, and usually subject to combined loading acting during working conditions (weights of superstructure and traffic, wind, etc.) as well as exceptional events (strong earthquakes, storms, etc.). According to an ultimate limit state approach, a foundation must be designed so that the load combinations expected during its service life are lower than the collapse loads, thus guaranteeing an adequate factor of safety. However, if the collapse load is attained for displacements incompatible with the serviceability of the superstructure, a displacement-based approach is recommended when evaluating the capacity of the soil-foundation system, as in the case of large-diameter piles.
Such an approach may be particularly useful for caisson foundations, as their failure mechanisms involve large soil volumes due to their considerable dimensions, leading to high axial and lateral loading capacity. This need is reinforced by the fact that these foundations are typically used for critical facilities characterised by tall superstructures, such as long-span bridges and wind turbines, whose limit conditions are typically defined in terms of generalised displacements and permanent rotations rather than generalised forces. In view of the above, developing a simplified procedure aimed at computing the limit load acting on caissons for a given threshold displacement may prove useful as an expeditious tool at a preliminary design stage. Such a procedure requires the definition of: (1) the combined N-Q-M failure domain of the soil-foundation system, where N and Q are the vertical and horizontal forces and M is the overturning moment; (2) the generalised load-displacement curve describing the response of the system from the onset of loading until its failure condition.

For foundations subject to combined loads, interaction diagrams (IDs, i.e. three-dimensional failure envelopes in the N-Q-M space) are a useful tool to relate the different loading components at failure. The use of IDs allows the factor of safety to be defined either as the minimum distance of the current N-Q-M combination from the envelope, or as the distance evaluated along the load path [16,27]. Interaction diagrams have been investigated for a variety of foundations by employing different (i.e. experimental, analytical and numerical) approaches: shallow footings [7,9,18,30,33], solid and skirted shallow foundations for offshore structures, typically characterised by an embedment ratio H/D < 1 (H being the embedment depth and D the in-plane dimension) [5,37], and spudcan footings employed for jack-up units [23]. Fewer studies have been devoted to the failure conditions of massive caisson foundations for onshore structures subjected to combined loads, concerning either cuboid-shaped [2,17,38] or cylindrical caissons [3,8], both with values of H/D ≥ 1. Furthermore, most of the numerical studies [4,5,18,19,37] are based on 2D plane-strain numerical analyses, while just a few account properly for the three-dimensional stress/strain state [3,8,17,38]. Under undrained conditions, the analyses are always carried out in terms of total stresses [3,5,17,18,37], where the soil is described as an equivalent single-phase medium, therefore ignoring its two-phase nature.

In this study, the IDs of massive cylindrical caisson foundations are obtained for different embedment ratios H/D = 0.5, 1, 2, installed in a layered alluvial deposit under both centred vertical and combined N-Q-M loads. The IDs are computed through a series of 3D finite element (FE) nonlinear static analyses carried out assuming both fully undrained and drained conditions. To account for the two-phase nature of the saturated soil, the undrained analyses have been performed in terms of effective stresses, rather than with the typically adopted total stress approach. For the sake of simplicity, a linear elastic-perfectly plastic behaviour is assumed for the foundation soils. This choice, in agreement with what has been done by many other authors, also in the recent past [3-5,19], is justified by the theoretical objectives of the paper, aimed at defining the mechanical response of the system irrespective of the specific datum.
In fact, when monotonic loads are applied and hydro-mechanical coupling is disregarded, the use of more sophisticated constitutive relationships could modify the values but not the nature of the system's mechanical response.

In the first part of the paper, after giving insights into the problem layout (Sect. 2) and the numerical model (Sect. 3), the results of the numerical analyses are presented and discussed (Sect. 4). The FE results are validated against those coming from the application of limit analysis (LA) and those discussed by [5,17,38] for a homogeneous soil. Additionally, the choice of considering a two-layer foundation soil showed that, for a given profile of shear strength (either constant or linearly increasing with depth), the size of the IDs is governed by the stratigraphic profile but not their shape: this allows the obtained results to be generalised to a large number of practical applications in which a colluvial stratum overlies a cohesive one. Following previous works by [17,38], the influence of the foundation geometry (H/D) and of the initial loading factor (v) on the IDs has first been investigated, showing good agreement with their results. Furthermore, the comparison of the IDs obtained in the two limit drainage conditions is provided for a given geometry and initial loading factor, showing that drainage scales the ID size rather than influencing its shape. A short discussion on how the choice of the load reference point (LRP) affects the shape of the IDs is also given, the caisson centroid being the most suitable choice as it strongly simplifies the geometrical representation of the ID.

In the second part of the paper, based on the results of the nonlinear static analyses, the authors propose an approach for the assessment of the performance of caisson foundations: (1) an analytical expression for the ID in the non-dimensional N-Q-M space for both drainage conditions (Sect. 5) and (2) the equation of dimensionless generalised pushover curves (Sect. 6). Expressions (1) and (2) are integrated into a displacement-based simplified approach; moreover, the hypothesis of an associative flow rule, confirmed by previous experimental and numerical works [35,38], allows the displacement components to be computed for an assigned load path and a given threshold of generalised displacements.

Problem layout

The IDs have been obtained by performing nonlinear static (i.e. pushover) analyses on different cylindrical caisson foundations subject to combined N-Q-M loading until collapse is attained. In Fig. 1a, a sketch of the adopted geometrical scheme is illustrated. A rigid cylindrical caisson of diameter D = 12 m and height H is embedded in an alluvial deposit consisting of a 5-m-thick layer of gravelly sand and a layer of silty clay. The water table is located at the bottom of the gravelly sand layer, and an initial hydrostatic pore water pressure regime is imposed. While the diameter of the caisson is kept constant over the parametric study, three embedment ratios H/D = 0.5, 1, 2 are considered. In this study, the load reference point (LRP), defined as the point to which loads and displacements are referred, is chosen to coincide with the caisson centroid unless specified differently; the sign convention adopted in the analyses is also illustrated in Fig. 1a. The profiles of both the overconsolidation ratio OCR and the small-strain shear modulus G_0 are plotted in Fig. 1b.
The silty clay is assumed to be slightly overconsolidated, with an OCR profile consistent with a uniform erosion process (unloading vertical stress Δσ_v = −340 kPa), reproduced in the analyses by means of the stepwise profile of Fig. 1b. G_0 is assumed to increase with depth according to the empirical relationships proposed by [20] for gravelly sands (assuming a maximum void ratio e_max = 0.8, a minimum e_min = 0.4 and a relative density D_r = 60%) and by [29] for the silty clay (assuming a plasticity index I_P = 25%). The earth pressure coefficient at rest is evaluated using the relationship proposed by [24]. A linear elastic-perfectly plastic behaviour is assumed for the foundation soils, with a Mohr-Coulomb failure criterion. The influence of this constitutive choice on the numerical results is discussed in "Appendix 1". The physical and mechanical properties adopted in the analyses are listed in Table 1, where γ is the unit weight, ν the Poisson's ratio, G/G_0 the ratio between the current and the small-strain shear modulus, c′ the cohesion, φ′ the internal friction angle and ψ the dilatancy angle.

Numerical modelling

The numerical analyses have been carried out using the FE code Plaxis 3D AE [6]. For H/D = 0.5 and 1, the numerical model shown in Fig. 1c, d has been used: the 3D mesh consists of 95,500 10-node tetrahedral elements with 4 Gaussian points [12]. For H/D = 2, the depth z of the model has been doubled, using a mesh of approximately 188,000 elements. Thanks to the symmetry of the problem with respect to the x-axis, only half of the domain has been modelled. The lateral boundaries are located at a distance x = y = 6.25 D from the caisson axis, where the contours of both soil displacements and stresses have been checked not to be affected by the mesh boundaries (e.g. [13,14]). (Table 1: properties of the foundation soils and parameters adopted in the soil constitutive model.) Horizontal displacements on the vertical boundaries, as well as horizontal and vertical displacements at the base, are not allowed. In the analyses, the caisson is always "wished in place", neglecting the simulation of the construction process. A linear-elastic behaviour is assumed for the caisson, with a Young's modulus E_c = 30 GPa and a Poisson's ratio ν = 0.15; the unit weight of reinforced concrete is taken as γ_c = 25 kN/m³. At the soil-caisson interface, purely frictional interface elements with a Mohr-Coulomb failure criterion are inserted to simulate relative sliding, with a friction angle δ = tan⁻¹[(2/3) tan φ′]. The choice of such a value of δ, although consistent with common practice, does not affect the generality of the numerical results since, as shown by [17,38], the friction angle at the foundation-soil interface has only a slight influence on the IDs, especially for high values of H/D.

Analyses and results

Two series of 3D FE analyses have been performed to investigate the bearing capacity of the caissons (1) under centred vertical loading, assuming drained conditions, and (2) under a general combination of N, Q and M, assuming both drained and undrained conditions for the foundation soils. Specifically, in (2) the drainage conditions have been varied only in the calculation phases during which horizontal forces and overturning moments are applied, bearing in mind that such components can be representative of seismically induced inertial forces acting under undrained conditions.
Conversely, drained conditions are always considered for the vertical load, assuming that the excess pore water pressures in the foundation soil, due to the construction of the superstructure, are fully dissipated at the end of the construction phase.

Centred vertical loading

To investigate the bearing capacity under a centred vertical load, three displacement-controlled analyses have been performed. They consist of the following calculation phases: (1) initialisation of the effective stress state; (2) wished-in-place caisson activation; (3) progressive application of a vertical displacement w. For each embedment ratio H/D, the resulting N-w curves are plotted in Fig. 2a. As expected, the limit load increases with H/D and is attained for very large values of w. The ultimate loads are compared with the values of N_lim provided by the code OPTUM G3 [26], in which LA calculations are combined with the FE method [32]. The numerical model, characterised by the same dimensions and boundary conditions as those described in Sect. 3, comprises the soil and the caisson, the latter modelled as a rigid body. The soil domain is discretised by means of solid elements characterised by a rigid-perfectly plastic behaviour, assuming a Mohr-Coulomb failure criterion and an associative flow rule. In each analysis, an adaptive mesh with automatic refinement near plastic strain localisations has been used. After a few iterations, the analysis stops when the range between the upper and lower bounds of the bearing capacity is small enough to give an accurate estimate of the exact solution. The drained FE analyses, carried out by assuming a non-associative flow rule (dilatancy is nil), cannot provide bearing capacities coincident with those obtained from the LA calculations, where associativeness is imposed. For this reason, the FE results have been compared with LA results computed using both the original strength parameters, c′ and φ′, and the reduced values, c′_red and φ′_red, evaluated as proposed by [10]. Since the role of dilation is more crucial for deeper failure mechanisms, the agreement between the FE and LA results, the latter obtained with the original strength parameters, is, as expected, better for H/D = 0.5, whereas the opposite is true for H/D = 2 (Fig. 2a); H/D = 1 may be considered an intermediate case.

In Fig. 2b, the N-w curves are plotted in the non-dimensional plane N/N_lim-w/w_el, where w_el corresponds to the elastic vertical displacement calculated for N = N_lim: therefore, w_el = N_lim/K_0, where K_0 is the initial tangent stiffness of the N-w curves. By definition, all normalised curves exhibit an initial linear-elastic response with tangent stiffness equal to 1, followed by a nonlinear load-displacement curve until the ultimate condition (N/N_lim = 1) is attained. The nonlinear and irreversible response of caisson foundations to vertical load is strongly influenced by the embedment ratio: as H/D increases, the "structural hardening" related to the progressive growth of the plastic zone developing in the soil surrounding the caisson's base and shaft becomes more and more evident. Similarly to what is observed in [11] for the unloading occurring at the face of a deep tunnel, the progressive loading of the foundation causes an expansion of the plastic domain: yielding starts from the edges of the foundation's base and then deepens and widens as loading increases, with plastic strains spreading over the soil surrounding the caisson.
As soon as the mechanism reaches the upper boundary, the horizontal asymptote of the load-displacement curve is attained and failure occurs. In Fig. 3, the progressive evolution of the yielded volume during the unloading at the face of a deep tunnel (Fig. 3a) and that computed for a caisson with H/D = 2 (Fig. 3b) are shown. Under the assumption of a sufficiently large cover-to-diameter ratio, the tunnel mechanism (Fig. 3a) involves the soil surrounding the tunnel face and lining without reaching the ground surface, resulting in the absence of a plateau in the load-displacement curve [11].

The analytical expression of the non-dimensional N-w curves proposed in Eq. (1) to simulate the numerical results represents a modified hyperbola with the following properties: (1) the initial tangent stiffness is equal to 1; (2) the ultimate condition (N/N_lim = 1) is reached for a finite value of w/w_el = k, where k describes the "structural hardening" not governed by the upper boundary and is uniquely determined once the constitutive relationship and the constitutive parameters are assigned; (3) the curve is discontinuous in its first derivative at w/w_el = k; (4) the parameter r = 1.9 is chosen to best fit the curves obtained from the numerical analyses. The discontinuity of the curve at w/w_el = k emphasises the sudden change in stiffness observed when the plastic zone reaches the upper boundary. In Table 2, the values of k evaluated for each embedment ratio are listed. The non-dimensional N-w curves computed using Eq. (1) are compared in Fig. 2b with those obtained numerically, and the agreement is more than satisfactory. Bearing in mind that the limit vertical load is attained for high values of the vertical displacement, definitely incompatible with the operating conditions of the superstructure, Eq. (1) can be employed to introduce an alternative criterion for defining the attainment of an ultimate limit state.
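Eq. (1) itself is not reproduced in the text above, so the sketch below only illustrates the simplest hyperbola satisfying properties (1)-(3): unit initial tangent stiffness, N/N_lim = 1 attained at the finite abscissa w/w_el = k, and a slope discontinuity at k. The paper's actual expression additionally involves the best-fit parameter r = 1.9; the functional form used here is an assumption, not the authors' equation.

```python
import numpy as np

def n_over_nlim(w_ratio, k):
    """Normalised vertical load N/N_lim versus w/w_el (assumed hyperbolic form, k > 1)."""
    x = np.asarray(w_ratio, dtype=float)
    b = (k - 1.0) / k                 # chosen so that f(k) = 1
    f = x / (1.0 + b * x)             # hyperbola with unit initial slope, f'(0) = 1
    return np.where(x >= k, 1.0, f)   # plateau (slope discontinuity) beyond w/w_el = k
```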
Combined N-Q-M loading

To investigate the bearing capacity under a general load combination, about 370 load-controlled pushover numerical analyses have been performed. After the initialisation of the effective stresses and the activation of the caisson, the vertical load is first applied under drained conditions, and the Q and M components are then increased along radial load paths until failure. In contrast to what is observed for shallow footings [7,25], in the case of caisson foundations: (1) when N_net = 0 the bearing capacity is different from zero; (2) owing to the shaft resistance, the ID extends also to N_net < 0 (traction); and (3) similarly to what was also observed by [17,38], the ID section in the Q-M/D plane shows a non-zero strength even for N_net very close to N_lim,net; in any case, this portion of the load space has not been investigated further in this paper.

From the results of the numerical analyses, the collapse of the soil-foundation system under a general N-Q-M load combination (plateau of the pushover curves) may be inferred to be attained for values of displacement varying over a wide range, from centimetres to metres, depending on (1) the loading path (a_G), (2) the embedment ratio (H/D), (3) the vertical load (N_net) and (4) the drainage conditions. The highest values of displacement are attained for the deepest caissons (H/D = 1, 2) subject to high vertical loads and under drained conditions. Therefore, as suggested by [17,38], a different criterion for defining the ultimate condition is adopted here in the following: the attainment by the tangent stiffness of the pushover curve of 1% of the initial stiffness (K_tan/K_0 = 1%) (Fig. 5a; a numerical sketch of this criterion is given below). Such a condition leads to the evaluation of limit loads corresponding to computed displacements much smaller than those referred to the plateau of the pushover curves. For H/D = 1, N_net = 7 MN and drained conditions, the IDs obtained by employing the two different criteria, plateau (solid line) and K_tan/K_0 = 1% (solid line with crosses), respectively, are compared in Fig. 5b. The pushover curve for a_G = 0 in Fig. 5a is represented in the non-dimensional generalised force-displacement plane |F|-|u|, defined in Eqs. (2). If the tangent stiffness criterion is adopted, a value |u| = 0.015 is obtained for the case under consideration, whereas |u| = 0.089 is computed at the last step of convergence. Owing to its symmetry with respect to the origin of the M/D-Q plane, half of the envelope is represented in Fig. 5b. The IDs obtained using the different criteria are characterised by the same shape, and this is true for Fig. 4b-d as well, where the other sections are plotted. The same may be inferred for the loci corresponding to assigned values of |u| (|u| = 0.005, 0.015, 0.025, 0.035, 0.045): an almost homothetic expansion of the envelopes corresponds to an increase in |u|, progressively less spaced as the ultimate condition is approached. In view of modelling the soil-foundation response to general load combinations by means of a macro-element approach [22,25,28,31,35] based on the theory of elasto-plasticity, this observation seems to suggest the adoption of an isotropic hardening rule in the Q-M plane.
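The tangent-stiffness criterion referenced above admits a direct numerical reading. The sketch below (assumed numerics, not the authors' code) scans a computed pushover curve and returns the first point at which K_tan drops to 1% of the initial stiffness K_0.

```python
import numpy as np

def limit_point(u, F, ratio=0.01):
    """u, F: monotonically increasing arrays sampled along the pushover curve.
    Returns (|u|, |F|) at the first point where K_tan <= ratio * K_0."""
    K_tan = np.gradient(F, u)        # finite-difference tangent stiffness
    K_0 = K_tan[0]                   # initial stiffness
    idx = np.flatnonzero(K_tan <= ratio * K_0)
    i = idx[0] if idx.size else -1   # fall back to the last computed step
    return u[i], F[i]
```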
Finally, in Fig. 5b the results of the FE pushover analyses are also compared with those obtained by employing limit analysis, in particular the code OPTUM G3, in which the strength parameters are reduced as proposed by [10]: again, the agreement is generally good for all loading paths (for any value of a_G), although less satisfactory for high vertical loads. As previously discussed (Sect. 4.1), the reduction to be applied to the strength parameters is, however, problem dependent. The capability of the model to predict the bearing capacity under a general loading combination has also been checked by comparing the IDs obtained for the H/D = 1 caisson subject to N = 0, assuming in turn (1) undrained and (2) drained conditions, with published data (Fig. 6). The results are represented in the non-dimensional plane Q/Q_u-M/M_u, where Q_u and M_u denote the values of Q and M bringing the soil-caisson system to collapse when the other load component is zero: Q_u = Q(M=0) and M_u = M(Q=0). For case (1), the solutions obtained by [5,17] by means of 2D and 3D total stress analyses are shown, the former assuming a shear strength either constant (S_u = const.) or linearly increasing with depth (S_u = k·z), the latter assuming S_u = const. only. For case (2), instead, the comparison is made with the ID obtained in [38] by means of 3D numerical analyses in which the foundation soil consists of a layer of sand. In case (1), the IDs are referred to the base of the caisson, chosen by [5] as LRP. The best agreement is obtained with the ID computed in [5] under the assumption of S_u = k·z, as the same assumption of a soil shear strength increasing with depth is made here. For both drainage conditions, the comparison is satisfactory, showing that, for the case of the two-layer soil deposit considered here, the ID shape is only very slightly affected by the assumed soil profile.

Parametric study

In this section, the role of (1) the initial loading factor v = 1/F_Sv, (2) the caisson embedment ratio H/D and (3) the drainage conditions in affecting the shape and size of the IDs is investigated. Prior to showing the results of the parametric study, a discussion of how the LRP location affects the shape of the failure envelopes is presented. Given that the LRP can be chosen arbitrarily, the most frequently adopted locations are the top [17,38], the centroid [23] and the base of the foundation [5]. The effect of the LRP location is presented in Fig. 7: when the LRP is located at the centroid (Fig. 7a), the envelope is almost perfectly symmetric and crosses the x- and y-axes perpendicularly, owing to the decoupling between the rotational and horizontal degrees of freedom. In view of developing a macro-element model for caisson foundations, it is evident that both symmetry and decoupling are desirable for an easy analytical representation of the ID. Decoupling is, however, lost for higher values of embedment (Fig. 7b). To highlight this item, vector plots of some failure mechanisms computed for the two caissons are shown in Figs. 8 and 9, together with the relevant contours of the deviatoric strain. The IDs, referred to the caisson centroid, are represented in the non-dimensional plane Q/Q_u-M/M_u to focus on their shape.

In Fig. 8, the caisson with H/D = 0.5, subject to M only (a_G = ∞, point A), undergoes an almost pure clockwise rotational mechanism around a point close to the centroid (a "scoop" mechanism, according to [38]). Similarly, when subject to Q only (a_G = 0, point C), the caisson undergoes a pure translational mechanism, with deviatoric strains developing in front of and behind the caisson, where pseudo-active and pseudo-passive wedges are detected. Between points A and B, a coupled ("scoop-slide") mechanism is attained, with the rotation pole of the caisson moving downwards from the centroid to infinity (at point C). Between points B and D, a translational mechanism is observed, while beyond point D the mechanism is coupled again, with an anticlockwise rotation of the caisson ("reverse-scoop") and the rotation pole progressively approaching the caisson centroid again, moving from above as Q decreases (points E and F). Similar failure mechanisms are observed for the deeper caisson (H/D = 2, Fig. 9). However, a rotation about a point deeper than the centroid is observed at A and a coupled mechanism at C: both the symmetry and the orthogonality of the failure envelope with respect to the x- and y-axes are lost. The way the coupling of sliding and rotation affects the depth of the rotation point has also been observed in [3,38]. This effect becomes more and more evident as H/D increases, as the lever arm of the resultant force transferred to the caisson centroid increases with H. Indeed, for the caisson with H/D = 2, the pure translational mechanism is attained when the horizontal force is applied at a point deeper than the centroid, at a depth z = H/2 − a_G = 16 m = (2/3)·H (point D in Fig. 9).

Influence of the initial loading factor

For increasing vertical loads (v = 0.63, F_Sv = 1.6), the symmetry and orthogonality of the ID are lost even for H/D = 0.5 (Fig. 10). Higher values of the vertical load induce a deepening of the failure mechanisms, as shown by the comparison between the contour plots of Figs.
8 and 10, which show a more pronounced asymmetry in the Q-M plane, resulting in a different shape of the envelope. To better appreciate the influence of the vertical load on both the size and shape of the ID, these are plotted in Fig. 11 for caissons with H/D = 0.5 and 1, for undrained application of Q and M, both in the dimensional Q-M/D and in the non-dimensional Q/Q_u-M/M_u planes. Owing to the progressive change in the geometry of the failure mechanisms, which become deeper and more asymmetric (Figs. 8, 10), at increasing values of v the failure envelopes in Fig. 11a, c first harden and subsequently shrink, in accordance with [38]. Specifically, at increasing values of v up to a certain threshold, the bearing capacity increases, since larger volumes of soil are involved as the mechanisms deepen. Above that threshold value of v, however, the bearing capacity reduces, since the asymmetry of the mechanisms seems to prevail, with a contraction of the failure envelopes. Comparison of Fig. 11b, d allows the influence of v on the shape of the IDs to be appreciated: this is more pronounced for the shallower caisson (H/D = 0.5), for which a significant change of shape is observed.

Influence of caisson embedment ratio

While the influence of the embedment ratio H/D on the size of the IDs has already been discussed (Sect. 4.3), here the focus is on the role of H/D in affecting the shape of the IDs (Fig. 12) when undrained conditions are accounted for. Consistently with the results shown by [17], a significant change in the shape of the IDs is observed for small vertical loads, v ≤ 0.21, as H/D increases (Fig. 12a, b): the eccentricity of the envelope progressively increases in the IV quadrant of the non-dimensional plane. By contrast, the influence of H/D vanishes for higher vertical loads (Fig. 12c, d).

Influence of drainage conditions

The influence of the drainage conditions on both the size and shape of the IDs is illustrated in Fig. 13, where the IDs obtained by imposing undrained and drained conditions are compared for v = 0.09 and different embedment ratios. As expected for compression loading paths, undrained soil behaviour always results in smaller envelopes with respect to those computed under drained conditions (Fig. 13a, c, e). Indeed, owing to the excess pore water pressure, the foundation soils undergo an effective stress reduction and, in turn, a shear strength reduction. Conversely, the drainage conditions do not affect the shape of the failure envelopes (Fig. 13b, d, f). This observation may lead to the assumption of a failure envelope hardening homothetically as drainage occurs with time, until the drained envelope is reached.

Proposed relationships for interaction diagrams

An analytical expression for the IDs is proposed to fit all the conditions investigated in this study. The general equation, valid for every initial loading factor v, embedment ratio H/D and for both undrained and drained conditions, is that of a unit circle in the n′-l′ plane:

$$n'^2 + l'^2 = 1. \quad (3)$$

By scaling Eq. (3), the equation of an ellipse is obtained in the form

$$\left( \frac{n'}{a_n} \right)^2 + \left( \frac{l'}{a_l} \right)^2 = 1, \quad (4)$$

where a_n and a_l, defined in the following, are functions of v, H/D and the drainage conditions. Equation (4) is then mapped into the n-l plane by applying a rotation of an angle ω (positive if counterclockwise), where n = Q/N_lim,net and l = M/(D·N_lim,net) are, following [7], the dimensionless horizontal force and overturning moment [Eqs. (5)-(6)]. Hence, the proposed equation in the n-l plane, Eq. (7), is that of a rotated ellipse having semi-axes a_n and a_l and rotation angle ω depending on v, H/D and the drainage conditions:

$$\left( \frac{n \cos\omega + l \sin\omega}{a_n} \right)^2 + \left( \frac{-n \sin\omega + l \cos\omega}{a_l} \right)^2 = 1. \quad (7)$$
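Under the assumption that Eq. (7) is the standard rotated ellipse written above, the geometric construction of Fig. 15 (unit circle, scaling by the semi-axes, rotation by ω) can be sketched as follows. The radial safety-factor helper reflects the distance-along-the-load-path definition recalled in the Introduction and is an illustrative reading, not the paper's code.

```python
import numpy as np

def id_boundary(a_n, a_l, omega_deg, num=200):
    """Return (n, l) points on the interaction-diagram boundary, Eq. (7)."""
    th = np.linspace(0.0, 2.0 * np.pi, num)
    e = np.vstack([a_n * np.cos(th), a_l * np.sin(th)])   # scaled ellipse, Eq. (4)
    w = np.deg2rad(omega_deg)                             # counterclockwise rotation
    R = np.array([[np.cos(w), -np.sin(w)],
                  [np.sin(w),  np.cos(w)]])
    n, l = R @ e
    return n, l

def radial_safety_factor(nQ, lM, a_n, a_l, omega_deg):
    """Ratio of the distance to the boundary over the distance to the current
    load point, measured along the radial load path from the origin."""
    w = np.deg2rad(omega_deg)
    # rotate the load point back into the principal frame of the ellipse
    n_p = np.cos(w) * nQ + np.sin(w) * lM
    l_p = -np.sin(w) * nQ + np.cos(w) * lM
    g = (n_p / a_n) ** 2 + (l_p / a_l) ** 2   # g = 1 on the boundary
    return 1.0 / np.sqrt(g)
```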
The numerical IDs corresponding to the five values of v under consideration have first been interpolated by ellipses to obtain the values of ω, a_n and a_l plotted as symbols in Fig. 14, for each H/D ratio and for both drainage conditions. In turn, these values have been best-fitted to define ω, a_n and a_l as functions of v [Eqs. (8)-(9)], for instance

$$a_n(v) = c_{21} \left( v - c_{\min} \right)^{c_{22}} \left( c_{\max} - v \right), \quad (9)$$

where the non-dimensional parameters c_i are listed in Table 3 as functions of H/D and the drainage conditions. Figure 14 shows a satisfactory agreement between the proposed Eqs. (8)-(9) and the values of ω, a_n and a_l represented by the symbols. The locus in the n-l plane is obtained from the analytical expressions above according to the method illustrated in Fig. 15. Specifically, with reference to undrained conditions and H/D = 0.5, the unit circle (Fig. 15a) is first distorted into the ellipse (Fig. 15b) by applying Eq. (4), and then rotated by the value of ω provided by Eq. (8) (Fig. 15c).

In Fig. 16a, the three-dimensional ID under drained conditions provided by Eq. (7) is plotted; the analytical solution proposed by [7] for shallow footings resting on sand is also presented (dotted line). It is evident that: (1) the IDs increase in size with H/D, and (2) in contrast to the case H/D = 0, the cross sections of the caisson IDs in the planes n = 0 and l = 0 (Fig. 16c, d) close neither for v = 0 nor for v = 1 (Sect. 4.2). Conversely, in the solution proposed by [7], an almost perfectly symmetric parabola about v = 0.5 is suggested. Finally, in the planes v = const., the solution proposed for shallow footings is an ellipse rotated by a constant angle ω = −14°, while for embedded caisson foundations the rotation angle ω is found to vary with v, especially for low values of H/D. The IDs provided by Eq. (7) for undrained conditions and each embedment ratio are plotted in Fig. 17. Similarly to what is observed for drained conditions, the tendency of the failure domains not to close for v = 1 becomes more evident as H/D increases.

Generalised pushover curves

In the generalised pushover curves, |u|_el = |F|_lim/K_0 denotes the elastic generalised displacement corresponding to |F|_lim, and K_0 is the initial tangent stiffness of the |F|-|u| curves. As is evident in Fig. 18, for deep caissons, in contrast to what is observed for low values of H/D (≤ 1), the dependency of the load-displacement curves (LDCs) on a_G is less significant, the scatter being not marked. The same observation applies for drained conditions, although the representation is omitted here for the sake of brevity. In view of defining a simplified pre-design approach, the equations of the LDCs proposed in Eqs. (10) hold for both drainage conditions. These simple expressions, which neglect the dependency of the LDCs on the parameters v, H/D and a_G, are justified by the range of values of |u|/|u|_el considered for design purposes, far from F/F_lim = 1. In preliminary design computations, the user can evaluate F_lim from the rotated ellipse discussed in Sect. 5 (Eq. 7) and K_0 from the elastic solutions proposed by [15,21,34] (see "Appendix 2"), as functions of input parameters such as v, H/D, a_G and the drainage conditions. Then, only the threshold displacement ratio |u|/|u|_el needs to be selected in Eqs. (10). Values are given in Table 4, for three values of v and both drainage conditions, corresponding to |u|_lim = 1‰. As shown in Fig. 18, for undrained conditions, in the range of values of |u|/|u|_el considered here for design purposes, |u|/|u|_el ≤ 1, the analytical expressions of Eqs. (10)
fairly match the curves obtained from the numerical analyses. It is worth mentioning that, exploiting the associative flow rule, one can also compute the corresponding displacement components along the assigned load path.

Conclusions

Rigid and massive caisson foundations are typically subject to combined loading conditions, thus withstanding vertical and horizontal forces, as well as moment, at the same time. For this reason, interaction domains may be a useful tool to assess their safety against limit states when designing these foundations, as the distance of the image generalised stress point from the boundary of the interaction domain may be interpreted as a sort of safety factor. However, as the full mobilisation of the caisson capacity under general loading conditions tends to be achieved for large displacements not compatible with working conditions, a displacement-based approach for assessing the safety factor at a preliminary design stage may be preferred. This approach has been followed in this paper, based on the non-dimensional expressions provided for both the IDs and the generalised load-displacement curves obtained by performing FE analyses. Indeed, the response of massive cylindrical onshore caisson foundations in a two-layer soil, subject to general loading conditions, has been investigated by means of a series of 3D elastic-plastic FE numerical analyses.

The bearing capacity under a centred vertical load is first investigated by assuming a drained response for the foundation soil and three different values of the embedment ratio. The numerical results testify that the load-displacement relationships are severely affected by the caisson embedment ratio: indeed, the structural hardening related to the spatial propagation of the plastic zone, from the caisson base to the upper horizontal boundary of the FE mesh, becomes more and more pronounced as H/D increases. The ultimate response of the caissons under a general combination of vertical and horizontal loads (N, Q), as well as overturning moment (M), is also investigated. The 3D envelopes in the N-Q-M/D space have a rugby-ball shape, similar to that obtained for shallow footings in previous works. However, the soil-caisson strength under Q-M combinations is different from zero both when the vertical load is equal to zero and when it is close to its limit value (N_lim,net), differently from shallow foundations. From the comparison of the results obtained in this study (two-layer soil) with those from previous ones (homogeneous soil), it follows that the shape of the IDs is only very slightly affected by the assumed soil layering, whereas it is influenced by the profile of shear strength (constant or linearly increasing with depth).

A parametric study has been carried out to understand the factors mainly affecting both the size and shape of the failure envelopes in the N-Q-M space. The influence of the initial loading factor (v) and of the caisson embedment ratio (H/D) has been evaluated for both fully undrained and drained conditions, showing that the assumed drainage condition scales the ID size rather than modifying its shape. The influence of the load reference point location is also discussed, with the centroid of the caisson usually being the most suitable choice, as this strongly simplifies the shape of the ID, leading to decoupling of the rotational and horizontal degrees of freedom for low embedment ratios (H/D = 0.5).
Analytical expressions best fitting the three-dimensional interaction diagrams computed in the non-dimensional N-Q-M/D load space, together with the generalised load-displacement curves, are also proposed for both undrained and drained conditions: these are integrated into a simplified approach proposed as a useful tool at a preliminary design stage, to ensure that the foundation displacements are compatible with the desired structural performance, for both serviceability and ultimate limit states. Future work will be oriented towards integrating the obtained results into an elastic-plastic isotropic hardening macro-element model, where the ID equation can be used to define the yield function.

Appendix 1

The influence of the soil constitutive law on the IDs is shown in Fig. 19. For the H/D = 1 caisson subject to v = 0.21, the IDs have been obtained from two sets of purely undrained numerical analyses in which the following models have been used to describe the soil behaviour: (1) a linear elastic-perfectly plastic model and (2) a nonlinear elastic-plastic model with isotropic hardening (Hardening Soil with small-strain stiffness, [1]), both assuming a Mohr-Coulomb failure criterion. Table 5 summarises the parameters assumed for HS Small; details about the meaning and choice of such parameters can be found in [13]. The choice of different constitutive laws affects the evolution of the soil stiffness and, consequently, the excess pore pressure arising under undrained conditions. This affects the evolution of the generalised force-displacement curves, which attain a different plateau, as shown in Fig. 19b for a_G = 0. However, Fig. 19a shows that the choice of the constitutive law does not affect the shape of the IDs and that the overestimation of the Q-M strength does not exceed 10%. Under drained conditions, instead, neither the shape nor the dimensions of the IDs are affected, since the plateau attained by the curves is the same as a consequence of the same failure criterion assumed by the two models (Fig. 20). If a criterion different from the plateau is used to obtain the IDs, such as those based on the K_tan/K_0 ratio or on the generalised displacement |u| described in Sect. 4.2, its influence on the shape and dimensions of the IDs is negligible, even though the constitutive law affects the evolution of the generalised force-displacement curves. In view of the above and of the purpose of the paper, the less complex constitutive law has been adopted in the parametric numerical study, thus taking advantage of a lower computational effort.

(Fig. 19: influence of the constitutive law under undrained conditions: (a) ID for the H/D = 1 caisson subject to v = 0.21, obtained assuming the elastic-perfectly plastic (black) and the HS Small (grey) model for the soil; (b) pushover curve for the load path a_G = 0.)

Appendix 2

To calculate |u|_el = F_lim/K_0, the stiffness K_0 is evaluated according to Eq. (11), where ‖·‖ stands for the matrix norm and C represents the elastic compliance matrix of the soil-caisson system, calculated as the inverse of the stiffness matrix of Eq. (12). The translational, rotational and coupled stiffness terms in Eq. (12) can be evaluated according to [15,21] for H/D ≤ 1 and to [34] for H/D = 2. The ± sign in Eq. (11) is introduced to properly exploit the previously cited elastic solutions, since they assume the caisson base (+) as LRP for H/D ≤ 1 and the caisson top (−) for H/D = 2, respectively.
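A hedged numerical reading of Appendix 2 follows. Since Eq. (11) is not reproduced above, the relation K_0 = 1/‖C‖ is an assumption here, and the role of the ± sign (LRP transfer) is not modelled; the stiffness terms K_QQ, K_MM, K_QM are inputs taken from the cited elastic solutions.

```python
import numpy as np

def elastic_generalised_displacement(F_lim, K_QQ, K_MM, K_QM):
    """|u|_el = |F|_lim / K_0, with K_0 taken (by assumption) as 1/||C||."""
    K = np.array([[K_QQ, K_QM],
                  [K_QM, K_MM]])       # stiffness matrix with coupling, Eq. (12)
    C = np.linalg.inv(K)               # elastic compliance matrix of the system
    K_0 = 1.0 / np.linalg.norm(C)      # assumed reading of Eq. (11)
    return F_lim / K_0
```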
Authors' contributions Not applicable.

Funding Open access funding provided by Università degli Studi di Roma La Sapienza within the CRUI-CARE Agreement.

Availability of data and materials Data will be made available on request.

Code availability Not applicable.

Declaration

Conflict of interest The authors have no conflicts of interest to declare that are relevant to the content of this article.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
9,689.4
2022-04-30T00:00:00.000
[ "Geology" ]
Cell Biological Responses after Shiga Toxin-1 Exposure to Primary Human Glomerular Microvascular Endothelial Cells from Pediatric and Adult Origin

Hemolytic uremic syndrome (HUS) is characterized by a triad of symptoms consisting of hemolytic anemia, thrombocytopenia and acute renal failure. The most common form of HUS is caused by an infection with Shiga toxin (Stx) producing Escherichia coli bacteria (STEC-HUS), and the kidneys are the major organs affected. The development of HUS after an infection with Stx occurs most frequently in children under the age of 5 years. However, the cause of the higher incidence of STEC-HUS in children compared to adults is still not well understood. Human glomerular microvascular endothelial cells (HGMVECs) isolated and cultured from pediatric and adult kidney tissue were investigated with respect to Stx binding and different cellular responses. Shiga toxin-1 (Stx-1) inhibited protein synthesis in both pediatric and adult HGMVECs in a dose-dependent manner under basal conditions. The preincubation of pediatric and adult HGMVECs for 24 h with TNFα resulted in increased Stx binding to the cell surface and a 20-40% increase in protein synthesis inhibition in both age groups. A decreased proliferation of cells was found when a bromodeoxyuridine (BrdU) assay was performed. A trend towards a delay in endothelial wound closure was visible when pediatric and adult HGMVECs were incubated with Stx-1. Although minor differences between pediatric HGMVECs and adult HGMVECs were found in the assays applied in this study, no significant differences were observed. In conclusion, we have demonstrated that in vitro primary HGMVECs isolated from pediatric and adult kidneys do not significantly differ in their cell biological responses to Stx-1.

Introduction

Hemolytic uremic syndrome (HUS) is a thrombotic microangiopathy with hemolytic anemia and thrombocytopenia in which the kidneys are the major organs affected. Infection with Shiga toxin (Stx) producing Escherichia coli bacteria (STEC) is one of the causes of the development of HUS (STEC-HUS) and is mostly seen in children under the age of 5 years [1]. The reason for the higher incidence of STEC-HUS in children compared to adults is still not understood. Infection with STEC mainly occurs through the ingestion of contaminated food products [2], leading to an acute, often bloody diarrhea. Approximately 10-15% of E. coli infected children will develop HUS [2]. After transfer from the gut into the bloodstream, Stx interacts with cells by binding to its main receptor, globotriaosylceramide (Gb3) [3]. Uptake of the toxin into the cell results in the inhibition of protein synthesis and triggers apoptosis, with cell death as a result [4,5]. As Gb3 is the main functional receptor for Stx [3], it was hypothesized that the age-related incidence of HUS is caused by differential expression levels of Gb3 on the vascular endothelium of the glomerular capillaries. Studies have shown different levels of Gb3 in different parts of renal tissue, with the highest levels present in the cortex [6], but an age-related difference has not been described yet. In line with previously published work, we had the chance to isolate and culture primary human glomerular microvascular endothelial cells (HGMVECs) from pediatric kidneys.
The interaction with Stx-1 was studied in vitro [7] on primary HGMVECs isolated from two different pediatric kidneys (referred to as pediatric donor I and pediatric donor II) and from eight adult donors (referred to as adult donor I to adult donor VIII) with respect to binding, protein synthesis, cell proliferation and migration.

Stx-1 Binding to Pediatric and Adult Primary HGMVECs To study the binding of Stx to the endothelial cell surface of primary cultured HGMVECs derived from two pediatric and four adult kidneys, flow cytometry analysis with fluorescently labelled Stx subunit B (Stx-B) was performed. This is a reliable and sensitive method for the indirect detection of Gb3 [8]. The proinflammatory cytokine TNFα was used as a positive control, as it has been described to cause the upregulation of Gb3 [9]. In addition, TNFα is involved in the pathogenesis of STEC-HUS [10]. As shown in Figure 1, there was high donor variation in Stx-B binding on the cell surface of HGMVECs, with the highest binding of Stx-B on HGMVECs derived from adult donor IV. Binding of Stx-B was low in untreated HGMVECs (control) derived from pediatric donor II and in HGMVECs derived from adult donor III. Incubation of cells with 10 ng/mL of TNFα for 24 h resulted in an increase of Stx-B binding in all six donors by an average factor of 2.1 (factor 1.8 for pediatric HGMVECs and factor 2.3 for adult HGMVECs). Although the results were not significantly different, Stx-B binding was lower on the HGMVECs derived from pediatric tissue compared to the HGMVECs derived from adult tissue, both under basal conditions and after TNFα exposure (Figure 1B).
(Figure 1 caption, remainder: significance assessed by Mann-Whitney test; mean values and SEM are given; experiments were performed three times for pediatric donors I and II and adult donors I and III, twice for adult donor II, and once for adult donor IV.)

Effect of Stx-1 on Protein Synthesis of Pediatric and Adult Primary HGMVECs The cytotoxic effect of Stx-1 on primary pediatric and adult HGMVECs was investigated using a 3H-leucine incorporation/protein synthesis assay. Stx-1 concentrations were derived from previously published experiments and results [7]. Stx-1 at concentrations ranging from 0.001 pM to 1000 pM affected protein synthesis in a concentration-dependent manner, with a 20-40% increase in protein synthesis inhibition when cells were preincubated with 10 ng/mL of TNFα for 24 h (Figure 2). The protein synthesis of HGMVECs derived from two pediatric tissues (pediatric I and II) incubated with 1000 pM of Stx-1 for 24 h decreased by 40% in the control group and by 80% in cells prestimulated with TNFα (Figure 2). In contrast, the protein synthesis of primary HGMVECs derived from three adult tissues (adult V, VI, VII) incubated with 1000 pM of Stx-1 for 24 h decreased by 75% in the control group and by 95% in cells prestimulated with TNFα (Figure 2). In summary, primary HGMVECs from adult kidney tissues showed a tendency towards a higher sensitivity to Stx-1 in terms of protein synthesis inhibition compared to HGMVECs derived from pediatric tissues.

Figure 2. The effect of Stx-1 on the protein synthesis of primary HGMVECs measured by a 3H-leucine incorporation assay in pediatric HGMVECs from two donors and adult HGMVECs from three donors. Stx-1 at concentrations ranging from 0.001 pM to 1000 pM was used in control cells or in cells preincubated with TNFα 10 ng/mL for 24 h. An increase of 20-40% in protein synthesis inhibition was measured when cells were preincubated with 10 ng/mL of TNFα for 24 h; however, no significant difference by ANOVA between adult and pediatric conditions was found (p = 0.712).
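Concentration-response readouts like those summarized in Figure 2 are commonly reduced to an IC50 via a four-parameter logistic fit. The following is a minimal sketch in Python (SciPy); the function name, the concentrations, and the response values are all hypothetical illustrations, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic: protein synthesis (% of control) vs. toxin concentration."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

# Hypothetical concentration-response data (pM, % of control protein synthesis);
# illustrative numbers only, not the values reported in Figure 2.
conc = np.array([0.001, 0.01, 0.1, 1, 10, 100, 1000])
synth = np.array([98, 95, 88, 72, 55, 35, 25])

popt, _ = curve_fit(four_pl, conc, synth, p0=[20, 100, 10, 1])
bottom, top, ic50, hill = popt
print(f"Estimated IC50 ~ {ic50:.1f} pM, Hill slope ~ {hill:.2f}")
```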
Mean values and SEM are given. Experiments were performed three times with HGMVECs from pediatric donor I and once with HGMVECs from pediatric donor II. Experiments were performed twice with HGMVECs from adult donor VII, and once each with HGMVECs from adult donors V and VI.

Effect of Stx-1 on the Proliferation of Pediatric and Adult HGMVECs The effect of Stx-1 on the proliferation of HGMVECs was studied using a BrdU proliferation assay. BrdU is incorporated into the cell during DNA synthesis in proliferating cells [11]. The proliferation of HGMVECs was investigated after preincubation of cells with 10 ng/mL of TNFα and incubation of cells with 0.1 to 1000 pM Stx-1 for 24 h. As depicted in Figure 3, the proliferation pattern was similar in primary pediatric and adult HGMVECs. TNFα alone decreased cell proliferation by 27-33% compared to control cells. Preincubation with 10 ng/mL of TNFα for 24 h and incubation with 0.1, 10 or 1000 pM of Stx-1 over a time period of 24 h decreased cell proliferation in a concentration-dependent manner. A concentration of 0.1 pM of Stx-1 with TNFα decreased cell proliferation by 38-42% in both pediatric and adult HGMVECs, while a concentration of 1000 pM Stx-1 with TNFα decreased cell proliferation by 64% in pediatric HGMVECs and 73% in adult HGMVECs, as compared with control cells. No significant difference in proliferation was established between pediatric and adult HGMVECs when statistics were applied to every single condition. However, concentrations of 10 pM or 1000 pM Stx-1 resulted in less proliferation of adult HGMVECs compared to pediatric HGMVECs. This is in line with the results of the protein synthesis assay.
Effect of Stx-1 on the Migration of Pediatric and Adult HGMVECs The effect of Stx-1 on the migration of HGMVECs was investigated using an endothelial wound closure assay. A concentration of 10 pM Stx-1 was chosen, as this concentration resulted in approximately 50% decreased protein synthesis in HGMVECs preincubated with TNFα 10 ng/mL, as shown in Figure 2. TNFα alone had no significant effect on endothelial cell migration. A significant delay in wound closure was only measured after 8 h and 12 h of incubation in adult HGMVECs, comparing the control group or cells preincubated with TNFα against cells preincubated with TNFα and exposed to 10 pM Stx-1 (Figure 4). No significant difference was detected when the wound closures of pediatric and adult HGMVECs were compared. However, after 12 h of wound closure, TNFα in combination with 10 pM Stx-1 showed an approximately 10% (not significant) delay in wound closure in pediatric HGMVECs and a significant 20% delay in adult HGMVECs, each compared to its own control group. This is in line with the results of the protein synthesis assay (Figure 2) and the proliferation assay (Figure 3), where adult HGMVECs displayed a tendency towards a higher sensitivity to Stx-1.
Discussion In this study, the interaction of Stx-1 with primary HGMVECs derived from pediatric and adult kidney tissue was examined by studying binding, protein synthesis, cell proliferation and migration. There was no difference between the binding of Stx-B to pediatric-derived HGMVECs as compared to adult HGMVECs. In both pediatric and adult HGMVECs, preincubation with TNFα led to increased Stx binding. Stx-1 was cytotoxic for HGMVECs in a dose-dependent manner, and TNFα increased the protein synthesis inhibition by 20-40%. Decreased proliferation of HGMVECs in a concentration-dependent manner was measured when cells were incubated with TNFα alone or with Stx-1 in combination with TNFα pretreatment. There was no significant difference in pediatric cell migration after treatment with Stx-1; however, a clear trend towards a delay in wound healing was visible. Even though an increased sensitivity of adult HGMVECs to Stx-1 compared to pediatric HGMVECs was seen at increasing concentrations of Stx-1, no significant differences were observed.

It was suggested that the glomerular endothelium of young children expresses higher levels of Gb3 compared to the glomerulus of adults. We have expanded on studies published in the past and used primary HGMVECs derived from two pediatric kidney tissues. Primary cells most closely represent the tissue of origin and are not genetically modified. They have not been used before to study Gb3 levels, and primary pediatric cells are scarce, which makes them a unique tool for the current examination. Lingwood et al. compared the binding of Stx-B between the kidney tissue of infants and adults [6,12]. They found that Stx-B bound to the glomeruli of infants < 2 years and not to the glomeruli of adults [12]. It should be noted that, for the most part, kidney sections of steroid-sensitive nephrotic syndrome (SSNS) patients were examined, which had been in contact with systemically and locally released cytokines; this might result in higher levels of Gb3 and more Stx-B binding in different parts of the kidneys. In another publication, Boyd et al. investigated the Gb3 content and Stx-B binding to human kidneys as a function of age [6]. Although they only had two samples from children and five samples from adults, the levels of Stx-B binding increased significantly in adult kidneys [6]. However, it must be noted that small amounts of a second Stx-binding glycolipid were detected in their experiments. This glycolipid terminated in the gal-α1-4gal structure. This structure is necessary for the binding of the toxin and thus may have played a role in the increments found [6]. Ergonul et al. compared frozen renal sections from subjects aged between 6 months and 85 years and showed that the pattern of Stx-1 binding was identical between the different age groups [13]. It is clear that we cannot compare our results of in vitro endothelial cell monoculture experiments with the above-mentioned pathology studies, as they do not mimic the same cellular interactions and extracellular environment.
The pediatric HGMVECs used in this study were isolated from kidneys of children under the age of 3 years that were not suitable for transplantation, and are not necessarily representative of the glomerular endothelium of STEC-HUS patients. It is not feasible to obtain and culture HGMVECs from STEC-HUS patients, as a biopsy is usually not clinically needed and is undesirable due to thrombocytopenia. However, it is possible to examine the host characteristics of STEC-HUS patients using blood outgrowth endothelial cells (BOECs) derived from pediatric patients with a history of STEC-HUS. BOECs are not derived from the kidneys of patients; nonetheless, they do represent the endothelial characteristics and (epi)genetic background of the donor. In a recently published study, BOECs isolated from STEC-HUS patients showed no differences compared to controls [14]. It is most likely that other factors, such as cytokines, chemokines, and circulating blood cells activated by the gastrointestinal STEC infection, activate the glomerular endothelial cells and their surrounding environment to a proinflammatory state. The variation in results upon Stx incubation between the cell cultures of various donors was most likely multifactorial and dependent on the expression of Gb3 on the cell surface. It is known that Gb3 expression at the cell surface can vary and depends on cell confluency as well as on the passage of the primary cells used in the experiments. Subconfluent monolayers of HGMVECs as well as HUVECs show a higher expression of Gb3 on the surface, leading to a higher cytotoxicity of Stx as compared to confluent monolayers [15]. Van Setten et al. [7] showed that a confluent layer of HGMVECs only became susceptible to Stx-1 after preincubation with TNFα. Van Setten et al. [7] maintained cells for five days in a confluent state, while we treated the cells with Stx-1 the first day after reaching 80-100% confluency. Blood cells as well as other glomerular cells are considered to play a role in the pathogenesis of disease. Ichimura [16] investigated bacterial responses under the influence of different concentrations of nitric oxide (NO). Their group suggested that nitric oxide generation in macrophages might stimulate the production of Stx. It has also been reported that Stx decreases the secretion of vascular endothelial growth factor (VEGF) [17]. VEGF is a potent angiogenic factor, mainly produced by podocytes, that induces the formation of fenestrations in the endothelium of the glomerulus [18]. Decreased VEGF levels caused glomerular thrombotic microangiopathy (TMA) in mice [18]. Therefore, VEGF may play a role in the pathogenesis of STEC-HUS, as HUS falls under the clinical picture of TMAs. Another pathway which may play a role in pathogenesis is the complement system, a part of innate immunity. Overactivation of this system is known to be the main driver of disease in atypical HUS [19]. It has been shown that Stx binds factor H (FH), resulting in delayed and reduced cofactor activity [20]. Furthermore, Stx caused downregulation of the complement inhibitor CD59 on tubular epithelial and glomerular endothelial cells [21], and from a clinical perspective, elevated complement protein levels have been measured in patients with STEC-HUS at the time of hospital admission [22,23]. Among other publications, these studies suggest complement activation as a component of the pathophysiology of STEC-HUS.
Future experiments using various kidney cells and/or kidney organoids in a co-culture system would bring us closer to the actual situation inside the body. Another arsenal available for studying the pathophysiology of diseases is animal models; still, at the moment, no model is available that recapitulates all the features of STEC infection, and Gb3 expression in the kidneys of animals differs from that in humans [24,25]. In conclusion, no differences in cell biological responses after Stx-1 exposure were established between primary HGMVECs of pediatric and adult origin. Other extrinsic or (epi)genetic factors might contribute to the sensitivity of the glomerular endothelium of children, and the use of more sophisticated models would help to gain a better understanding of the pathophysiology of this rare disease.

Ethics This study was approved by the Medical Ethical Review Board of the Radboudumc, Nijmegen, The Netherlands. Written, signed informed consent was obtained from the parents/legal guardians or from the controls whose HGMVECs were used in this study. This study was executed in keeping with the regulations of the Declaration of Helsinki.

Isolation and Purification of Human Glomerular Microvascular Endothelial Cells (HGMVECs) Studies were performed with human kidney tissue obtained from two pediatric donors between the ages of 2 and 3 years and from eight healthy adult donors; all kidneys were not suitable for, or were disapproved for, transplantation. The isolation and purification of the HGMVECs was carried out as previously described [7]. Briefly, glomeruli were isolated by dissecting the cortex, followed by a gradual sieving procedure. Because the glomeruli of children are considerably smaller than those of adults, glomeruli were collected on top of smaller-size screens (38, 53, 90 and 108 µm). Subsequently, glomeruli were digested with 0.1% (w/v) collagenase type 2 CLS (Worthington, NJ, USA) for 2 h at 37 °C. Glomerular remnants were resuspended in complete medium, which consisted of M199 (Gibco Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% heat-inactivated newborn calf serum (Gibco Thermo Fisher Scientific, Waltham, MA, USA), 10% heat-inactivated human serum (Innovative Research, Novi, MI, USA), 2 mmol/L glutamine (Gibco Thermo Fisher Scientific, Waltham, MA, USA), 1% penicillin/streptomycin (Gibco Thermo Fisher Scientific, Waltham, MA, USA), 5 U/mL heparin (Leo Pharmaceuticals, Weesp, The Netherlands) and 150 mg/L of endothelial cell growth factor (extracted from bovine brains as described by Maciag et al. [26]), and plated on gelatin-coated plates (Corning Incorporated, Kennebunk, ME, USA). Attachment occurred within one or two days, after which endothelial and epithelial cells started to grow out with high proliferation rates. Primary outgrowths were selectively trypsinized and filtered through a 38 µm sieve to enrich HGMVECs. HGMVECs were specifically collected by an immunomagnetic separation technique using a monoclonal antibody against platelet endothelial cell adhesion molecule-1 (PECAM-1) as an endothelium-specific antibody. Highly purified populations of pediatric HGMVECs were obtained by repeating the immunomagnetic separation technique once or twice. Cells at 80-100% confluency, passages 6-11, were used for experiments. Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of the Radboudumc, Nijmegen, The Netherlands.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Not applicable.
Coupling Influence on the dq Impedance Stability Analysis for the Three-Phase Grid-Connected Inverter The dq impedance stability analysis for a grid-connected current-controlled inverter is based on the impedance-ratio matrix. However, the coupled matrix brings difficulties in deriving its eigenvalues for an analysis based on the generalized Nyquist criterion. If the couplings are ignored for simplification, unacceptable errors will be present in the analysis. In this paper, the influence of the couplings on the dq impedance stability analysis is studied. To take the couplings into account simply, the determinant-based impedance stability analysis is used. The mechanism linking the determinant of the impedance-ratio matrix to the inverter stability is unveiled. Compared to the eigenvalue-based analysis, only one determinant rather than two eigenvalue s-functions is required for the stability analysis. One Nyquist plot or pole map can be applied to the determinant to check for right-half-plane poles. The accuracy of the determinant-based stability analysis is also checked by comparison with the state-space stability analysis method. For the stability analysis, the coupling influence on the current control, the phase-locked loop, and the grid impedance is studied. Errors can reach around 10% in the stability analysis if the couplings are ignored.

Introduction The integration of renewable energy sources is normally assisted by power electronic converters due to their ability for asynchronous connection and full AC voltage control. The high demand for renewable energies requires more and more inverters to be connected to the grid. The interaction between the grid-connected inverter and the grid may cause instabilities [1]. Stability analysis for the grid-connected inverter is essential to ensure secure power transportation to the grid. Two stability analysis methods can be applied, according to the small-signal linearization technology used. The state-space stability analysis [2] is a mature and commonly used method. However, a high-order and complex state matrix has to be built. Impedance stability analysis is achieved via the impedance ratio, which is determined by the equivalent impedance of the inverter and the grid impedance. The impedance ratio can also be drawn as a Bode plot for frequency analysis. Both Norton-based [3] and Thevenin-based [4] equivalent impedances of the inverter can be derived in the impedance stability analysis. For a three-phase inverter controlled via the dq frame, the impedance ratio is normally derived in the dq frame and is a 2 × 2 matrix. Both eigenvalues of the impedance-ratio matrix are required for the stability analysis via the generalized Nyquist criterion (GNC) [5]. The criterion is commonly used in grid-connected inverter systems to identify negative impacts on stability, such as an increased cut-off frequency of the phase-locked loop (PLL) [6] or of the current control loop [7], and increased power injection from the inverter or increased grid impedance [8]. For a grid-connected current-controlled inverter, only the q-axis is used by the PLL to synchronize the dq frame. Therefore, its impedance-ratio matrix is a coupled asymmetrical matrix, whose eigenvalues are difficult to derive. Couplings are normally ignored to simplify the eigenvalue derivation during the impedance stability analysis [8,9].
To achieve an accurate impedance stability analysis based on the GNC, the impedance-ratio matrix can be transferred into the stationary frame in order to decouple the matrix [10]. However, it was found that couplings still exist because the impedance-ratio matrix remains asymmetrical [11]. The determinant, rather than both eigenvalues of the impedance-ratio matrix, which is derived simply while including the couplings, was used for the three-phase rectifier's stability analysis [12,13] in the 1990s. Recently, impedance stability analysis based on the determinant was applied to the inverter system [14,15]. Only the determinant, rather than two eigenvalues, is plotted as one pole map or one Nyquist plot for the stability analysis, which simplifies the analysis process. Another method for including couplings is to convert the multi-input multi-output dq impedance into its sequence-domain single-input single-output equivalents [16]. Then, the Nyquist criterion, rather than the generalized Nyquist criterion, can be applied. In this paper, the coupling influence on the dq impedance stability analysis is studied. The question of whether ignoring couplings causes unacceptable analysis errors will be answered. Analysis errors are defined and quantified to assist the study of the coupling influence. The dq impedance stability analysis based on the determinant rather than the eigenvalues is used to include the couplings easily and present accurate analysis results. The mechanism by which the stability of the inverter system is determined only by the determinant of the impedance-ratio matrix will be unveiled. The dq impedance stability analysis results will be validated in time-domain simulation. The state-space stability analysis will be used as the benchmark to validate the accuracy of the determinant-based impedance stability analysis. This paper is organized as follows: In Section 2, the dq impedance stability analysis is introduced. The equivalent dq impedance of the inverter is derived in Section 3. The simulation verification and the coupling influence are shown in Section 4.

dq Impedance Stability Analysis Grid-connected inverters are normally controlled in the dq frame as a current source. Therefore, the small-signal model is built according to the Norton law [3], as shown in Figure 1. The variables shown in Figure 1 are explained below: • i_s: reference deviation of the control system • v_o: output voltage deviation • i_g: feeding current deviation from the inverter • bold variables stand for their dq vectors, such as i_g = [i_gd, i_gq]^T. The frequently used notations are summarized below: • d, q: d-axis and q-axis parameters • dd, dq, qd, qq: the position of each element in a matrix • rt: ratio matrix • u, l: upper (numerator) and lower (denominator) polynomials. From the inverter side, the relation between v_o and i_g is derived as: i_g = i_s + Y_o v_o (1). From the grid side, the relation between i_g and v_g is derived as: v_o = Z_g i_g + v_g (2). Substituting i_g in (2) with (1) yields: v_o = Z_g i_s + Z_g Y_o v_o + v_g (3). Rearranging (3) for v_o yields: v_o = (I − Z_g Y_o)^{-1} (Z_g i_s + v_g) (4), where the impedance-ratio matrix is (I − Z_g Y_o)^{-1}. It is reported [3] that the system stability is determined by the impedance-ratio matrix (I − Z_g Y_o)^{-1}, based on (4).

dq Impedance Stability Analysis via Eigenvalues Based on the generalized Nyquist criterion, both eigenvalues of the impedance-ratio matrix need to be drawn as Nyquist plots for the stability analysis [8].
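Before moving to the eigenvalue analysis, here is a minimal numeric sketch of the small-signal model (1)-(4) in Python with NumPy; the 2 × 2 matrices and deviation vectors are illustrative placeholders, not the system parameters of Table 1.

```python
import numpy as np

# Illustrative 2x2 dq matrices (not the paper's system parameters).
Yo = np.array([[0.10, 0.02],
               [-0.03, 0.12]])   # inverter equivalent admittance Y_o
Zg = np.array([[1.0, -0.31],
               [0.31, 1.0]])     # grid impedance Z_g
i_s = np.array([1.0, 0.0])       # current reference deviation
v_g = np.array([0.05, 0.0])      # grid voltage deviation

# Solve the coupled small-signal model (1)-(2):
#   i_g = i_s + Yo @ v_o ;  v_o = Zg @ i_g + v_g
# which rearranges to (I - Zg Yo) v_o = Zg i_s + v_g, i.e. eq. (4).
I2 = np.eye(2)
v_o = np.linalg.solve(I2 - Zg @ Yo, Zg @ i_s + v_g)

# Verify the solution satisfies both (1) and (2), confirming the ratio matrix form.
i_g = i_s + Yo @ v_o
assert np.allclose(v_o, Zg @ i_g + v_g)
print("v_o =", v_o)
```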
Each element of the impedance-ratio matrix is denoted as follows: (I − Z_g Y_o)^{-1} = [ Y^rt_dd(s), Y^rt_dq(s) ; Y^rt_qd(s), Y^rt_qq(s) ] (5), where the notation dd, dq, qd, and qq means the position of each element in the matrix, rt means the ratio matrix, and Y^rt_dq(s), Y^rt_qd(s) are the couplings. The eigenvalues λ_1(s) and λ_2(s) of the matrix are calculated from: (λ − Y^rt_dd(s))(λ − Y^rt_qq(s)) − Y^rt_dq(s) Y^rt_qd(s) = 0 (6). The transfer functions of the eigenvalues are found by rearranging the equation above: λ_{1,2}(s) = (Y^rt_dd(s) + Y^rt_qq(s))/2 ± sqrt( ((Y^rt_dd(s) − Y^rt_qq(s))/2)^2 + Y^rt_dq(s) Y^rt_qd(s) ) (7). It is difficult to take the square root in (7), as each element of the impedance-ratio matrix is a complicated transfer function in the s domain. If the couplings Y^rt_dq(s), Y^rt_qd(s) are ignored, the eigenvalues are simplified, based on (7), to: λ_1(s) = Y^rt_dd(s), λ_2(s) = Y^rt_qq(s) (8). Ignoring the couplings removes their influence on the system stability analysis; the stability analysis will be more accurate if the couplings are considered.

dq Impedance Stability Analysis via the Determinant It was found that the determinant of the impedance-ratio matrix is the key factor that determines the system stability. The couplings of the impedance-ratio matrix are contained in the determinant; thus, their influence on the stability is fully accounted for. The Nyquist plot or the pole map, as the stability analysis tool, can be drawn via the determinant to check for right-half-plane poles. The mechanism of the determinant as the key factor for the stability analysis is shown below. The impedance-ratio matrix can be reconstructed as two parts, an adjugate matrix and a determinant: (I − Z_g Y_o)^{-1} = adj(I − Z_g Y_o) / det(I − Z_g Y_o) (9). The adjugate matrix is calculated based on (5); for a 2 × 2 matrix M = [ M_dd, M_dq ; M_qd, M_qq ], adj(M) = [ M_qq, −M_dq ; −M_qd, M_dd ] (10). Each element of Y_o and Z_g is written as a rational function, such as Y^o_dd(s) = Y^o_ddu(s)/Y^o_ddl(s), where the numerator Y^o_ddu(s) contains all its zeros and the denominator Y^o_ddl(s) contains all its poles: Y^o_ddu(s) = Π_{i=1..n}(s − z_i), Y^o_ddl(s) = Π_{j=1..m}(s − p_j) (11)-(12), where n and m are the numbers of zeros and poles, respectively. The equivalent admittance Y_o of the inverter and the impedance Z_g of the grid can then be presented element by element in this rational form (13)-(14). One element of adj(I − Z_g Y_o), such as Y^rt_dd(s), can be calculated based on (10), (13), and (14) as a rational function Y^rt_ddu(s)/Y^rt_ddl(s) (15), and all poles of Y^rt_dd(s) can be derived from Y^rt_ddl(s) (16). The equivalent admittance Y_o of the inverter has no right-half-plane poles [3], and neither does the grid impedance Z_g. Therefore, no right-half-plane poles exist in Y^o_ddl(s), Z^g_ddl(s), Y^o_dql(s), and Z^g_qdl(s). It can be identified via (16) that Y^rt_dd(s) has no right-half-plane poles. In the same way, the other elements of adj(I − Z_g Y_o) can be shown to have no right-half-plane poles. It can finally be concluded, from the identification above and (9), that the system stability is determined only by the determinant det((I − Z_g Y_o)^{-1}) of the impedance-ratio matrix. For the stability analysis, one Nyquist plot or one pole map based on the determinant can be used to check for right-half-plane poles.

Small-Signal Impedance of a Current-Controlled Inverter Before validating the accuracy of the determinant-based stability analysis, Y_o of the grid-connected inverter will be derived in this section. The grid-connected inverter is usually controlled in the dq frame as a current source, and the frame is synchronized via a PLL, as shown in Figure 2. The abc-dq transformation in terms of the PLL is linearized first, followed by the impedance derivation. The variables shown in Figure 2 are listed and explained below to help define the equations: • T_del: time delay from the control and the pulse-width modulation (PWM) dead time • θ: synchronized phase from the PLL • v_c^s: inverter voltage after the abc-dq transformation • i_c^s: inverter current after the abc-dq transformation • v_o^s: output voltage after the abc-dq transformation • Y_f: admittance of the LC filter.
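Returning briefly to the Section 2 comparison: the sketch below samples a hypothetical coupled impedance-ratio matrix on the jω axis and computes the two eigenvalue loci of (7), their no-coupling approximation (8), and the single determinant locus. All transfer functions are illustrative first-order stand-ins, not the paper's Y^rt elements.

```python
import numpy as np

# Illustrative coupled impedance-ratio matrix sampled on the jw axis
# (simple first-order elements standing in for the Y^rt entries of (5)).
w = np.logspace(0, 4, 2000)
s = 1j * w

Ydd = 1.0 / (1 + s / 500)    # hypothetical diagonal terms
Yqq = 1.0 / (1 + s / 800)
Ydq = 0.4 / (1 + s / 300)    # hypothetical coupling terms
Yqd = -0.4 / (1 + s / 300)

# Eigenvalue-based analysis (GNC): two loci, eq. (7).
mean = 0.5 * (Ydd + Yqq)
root = np.sqrt(0.25 * (Ydd - Yqq) ** 2 + Ydq * Yqd)
lam1, lam2 = mean + root, mean - root

# Determinant-based analysis: one locus that retains the couplings.
det = Ydd * Yqq - Ydq * Yqd

# Ignoring couplings collapses the eigenvalues to the diagonal terms, eq. (8).
print("max |lam1 - Ydd| when couplings matter:", np.max(np.abs(lam1 - Ydd)))
```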
Linearization of the abc-dq Transformation It is convenient to derive the impedance of the inverter in the dq frame due to the applied dq control. The three-phase abc system is therefore presented in dq form in the derivation. These dq-presented abc parameters (v_o, i_c, v_c) are equal to their dq-frame counterparts (v_o^s, i_c^s, v_c^s) after the abc-dq transformation at steady state, but differ when a synchronized-phase error θ is present in the transformation. Their relation, taking v_o^s as an example, is summarized via small-signal modelling as: v_o^s + V_o^s = T(θ)(v_o + V_o) (17), where V_o and V_o^s are the corresponding steady-state values and T(θ) is the rotation by the phase error θ. Equation (17) can be linearized, due to the small value of θ, as: v_o^s + V_o^s ≈ v_o + V_o + θ [V_oq, −V_od]^T (18). V_o and V_o^s are equal at steady state, and (18) is therefore simplified as: v_o^s = v_o + θ [V_oq, −V_od]^T (19). In the same way, the relationship between i_c^s and i_c is derived: i_c^s = i_c + θ [I_cq, −I_cd]^T (20). Considering the dq-abc transformation for the dq-presented inverter voltage v_c yields: v_c = v_c^s − θ [V_cq, −V_cd]^T (21).

Small-Signal Model of the Phase-Locked Loop The synchronized-phase error, as the output of the phase-locked loop, is generated by its input v_oq^s. Their relation is summarized below according to the control diagram in Figure 2: θ = (1/s)(k^PLL_p + k^PLL_i / s) v_oq^s. The notation PLL stands for phase-locked loop; the notations p and i denote the proportional parameter and the integral parameter of the PI controller, respectively.

Inverter Admittance Derivation After the linearization, the current control, the LC filter, and the grid impedance are accounted for in the derivation of the final admittance of the inverter. The current control is presented in (26)-(27) according to Figure 2; combining (27) with (24) and (25) yields (28). Substituting v_c^s in (26) with (28), and taking into account the time T_del that includes the control delay and the dead time of the PWM, yields (29), where i_c can be presented via the crossing voltage over Z_f (30). Substituting v_c in (29) with (30) and rearranging yields (31), which can be rewritten as (32). Considering the influence of the capacitor of the LC filter yields (33), and substituting i_c in (32) with (33) yields (34). Therefore, Y_o has been derived based on the above equations, and the dq impedance stability analysis based on the determinant can be applied.
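A small sketch of the PLL model above, assuming the standard SRF-PLL closed loop obtained by combining the PI relation with (19): θ/v_oq = (k_p s + k_i) / (s^2 + V_od (k_p s + k_i)). The gains and V_od below are illustrative guesses, not the Table 1 parameters.

```python
import numpy as np
from scipy import signal

# Hypothetical SRF-PLL small-signal model theta(s)/v_oq(s).
# Open loop: theta = (kp + ki/s)*(1/s)*v_oq_s, with v_oq_s = v_oq - Vod*theta,
# giving H(s) = (kp*s + ki) / (s^2 + Vod*(kp*s + ki)).
Vod = 1.0            # steady-state d-axis voltage (p.u., assumed)
wpll = 301.0         # target cut-off frequency (rad/s), as in Section 4.1
kp = 2 * wpll        # a common damping-based tuning guess
ki = wpll ** 2

H = signal.TransferFunction([kp, ki], [1, Vod * kp, Vod * ki])
w, mag, phase = signal.bode(H, np.logspace(0, 4, 500))
print("magnitude at w =", w[0], "rad/s:", mag[0], "dB")
```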
Comparison between the Determinant-Based Impedance Stability Analysis and the State-Space Stability Analysis It is essential to validate the accuracy of the determinant-based impedance stability analysis. Therefore, the state-space stability analysis, as a benchmark for stability analysis [2], is used for the validation. The derivation for the state-space stability analysis is shown in Appendix A. The grid-connected current-controlled inverter system for the stability analysis is shown in Figure 2, and its parameters are shown in Table 1. The inductors and resistors of the AC transformer (L_c, R_t) and the transmission line (SCR) are denoted L_g and R_g in Figure 2. The pole map is used for the comparison between the two stability analyses because it shows the pole positions and pole loci in detail and simply. For the determinant-based impedance stability analysis, all the poles of the determinant based on (9) are drawn in the pole map. The pole loci of both stability analyses are drawn in Figure 3 by changing the cut-off frequency of the PLL (ω_PLL) from 55 rad/s to 1100 rad/s. As shown in Figure 3, the pole loci and each pole of both stability analyses match precisely. Both analyses give the same stability result based on their pole loci: increasing ω_PLL leads the poles towards the right-half plane and causes low stability or instability of the inverter system. It is concluded that both stability analyses have the same accuracy.

Coupling Influence on the dq Impedance Stability Analysis As mentioned in Section 2, the couplings of the impedance-ratio matrix are difficult to include in the eigenvalue-based impedance stability analysis. If the couplings are ignored, the dq impedance stability analysis loses accuracy. To consider the couplings, the determinant-based dq impedance stability analysis is used. Three cases are presented below to show the influence of the couplings on the stability analysis: 1. Ignoring couplings causes errors in the stability analysis, shown via time-domain simulation; 2. The influence of the couplings on the pole loci; 3. The error quantification for the coupling influence on the stability analysis. The inverter system shown in Figure 2 is used for the stability analysis, and its parameters are shown in Table 1. A time-domain simulation of the grid-connected current-controlled inverter system was built in MATLAB/Simulink. The pole map is used to show the stability analysis results.

Time-Domain Validation Based on the determinant-based dq impedance stability analysis, the analysis results with and without the couplings are shown as pole maps in Figure 4a for a 301-rad/s PLL cut-off frequency. Right-half-plane poles appear when the couplings are considered. On the contrary, if the couplings are ignored, the analysis indicates that the system is stable, because all its poles stay in the left-half plane. The results of the two impedance stability analyses are therefore mismatched. The time-domain simulation of the grid-connected current-controlled inverter system was built in MATLAB/Simulink to validate the results of the stability analysis. The d-axis output voltage v_od was used to show the system state. The average model of the two-level VSC was also added to show clearly whether the system was unstable or gradually became stable, without the disturbances of the harmonics. As shown in Figure 4b,c, when ω_PLL was increased from 290 rad/s to 301 rad/s at 1 s, v_od started to oscillate, and its magnitude increased gradually. This shows that the system was unstable at a 301-rad/s PLL cut-off frequency. At 1.2 s, the system returned to the stable condition as ω_PLL was changed back to 290 rad/s. The time-domain simulation result at 301 rad/s ω_PLL matched the analysis result with the couplings, as shown in Figure 4a. Ignoring the couplings failed to identify the instability. This proves that ignoring the couplings causes errors in the stability analysis.

Pole Locus Comparison The coupling influence on the pole loci is drawn in this section; the effect of ignoring couplings on the pole loci is studied. Four pole loci are drawn by changing the parameters ω_PLL, ω_c, i*_cd, and i*_cq, as shown in Figure 5. It is observed that the pole loci without the couplings do not precisely match those with the couplings, as shown in Figure 5a-d. The movements of both pole loci are the same, but there are errors between each pair of poles. For the movement, each pair of pole loci with and without couplings is shown in Figure 5a-d. The poles in the middle move to the right-half plane when the parameters are increased significantly.
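Computationally, the pole-map step above reduces to a root check on the determinant's denominator. A minimal sketch in Python with NumPy follows, with a hypothetical fourth-order denominator standing in for the reduced det((I − Z_g Y_o)^{-1}); the coefficients are placeholders, not the paper's system.

```python
import numpy as np

# Right-half-plane pole check for the determinant, assuming
# det((I - Zg*Yo)^-1) has been reduced to a rational function num(s)/den(s).
# Illustrative placeholder coefficients only.
den = np.poly1d([1.0, 12.0, 450.0, 9.0e3, 2.1e5])  # denominator of the determinant
poles = np.roots(den)

rhp = poles[poles.real > 0]
print("poles:", np.round(poles, 2))
print(("unstable" if rhp.size else "stable"), "-", rhp.size, "RHP pole(s)")
```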
Therefore, both impedance stability analyses can show that increasing ω_PLL (or ω_c, i*_cd, i*_cq) reduces the system stability or even leads to instability. For the pole errors, it is observed that an error between each pair of poles always exists under the different values of the parameters, as shown in Figure 5a-d. These errors cause the analysis without couplings to lose accuracy. Furthermore, ignoring couplings can fail to identify instabilities, as shown in Section 4.1.

Error Quantification for the Stability Analysis without Couplings In this section, the error of the analysis without couplings is defined and shown. The cut-off frequency of the PLL, which is the basic control for the dq frame, was selected to quantify the error, defined as: Error = |ω^{E-max}_PLL − ω^{D-max}_PLL| / ω^{D-max}_PLL × 100% (35), where ω^{E-max}_PLL is the maximum PLL cut-off frequency that keeps the system stable based on the analysis without couplings, and ω^{D-max}_PLL is the maximum PLL cut-off frequency that keeps the system stable based on the analysis with couplings. ω^{E-max}_PLL and ω^{D-max}_PLL were identified by increasing ω^E_PLL and ω^D_PLL until right-half-plane poles appeared in the respective pole maps. Various SCRs were also selected to show the errors. The stability analysis errors based on (35) were calculated and are summarized in Table 2. The error of the eigenvalue-based impedance stability analysis was 12.7% when the inverter was connected to a weak AC grid (SCR = 2). It was reduced when the grid became stronger (SCR = 15); the error was then only 0.2%. This is because the weak AC grid enhances the couplings and therefore causes the large error in the eigenvalue-based impedance stability analysis. The output voltage v_o (magnitude) changed with the various SCRs. The same operating point, with v_o at 1 p.u., should be maintained; i*_cq was adjusted to achieve this for the various SCRs. The errors under the same magnitude of v_o are shown in Table 3. It is observed that even when a strong grid was connected, the error (11.53%) was almost the same as that of a weak grid under the same output-voltage operating point. No matter which type of AC grid is connected, the error of the eigenvalue-based impedance stability analysis can be around 10%, as shown in Table 3. It was also found that ω^{D-max}_PLL increased along with the increasing SCR, as shown in Tables 2 and 3. When the inverter connects to a weak AC grid, ω_PLL should be kept within the limit of ω^{D-max}_PLL in order to maintain the stable operation of the inverter system.

Conclusions The coupling influence on the dq impedance stability analysis was studied. The results showed that ignoring the couplings of the impedance-ratio matrix brings significant errors of up to 12.7% into the stability analysis, which may fail to identify instabilities. The failure was validated in the time-domain simulation: ignoring the couplings produced wrong stability analysis results. A weak AC grid strengthens the couplings and causes the large errors; however, when the same output-voltage magnitude of the inverter was maintained, the errors were around 10% no matter whether a strong or a weak AC grid was connected. The couplings did not change the movement of the pole loci; in other words, the analysis without couplings could still identify whether the stability increased or decreased with the changing parameters.
The dq impedance stability analysis based on the determinant of the impedance-ratio matrix achieves an accurate stability analysis simply, with the same accuracy as the state-space stability analysis. For future work, an auxiliary control will be designed based on the dq impedance stability analysis in order to stabilize the grid-connected inverter. The coupling influence on the outer-loop control that regulates the power can be further studied via the impedance stability analysis. Conflicts of Interest: The authors declare no conflict of interest.

Appendix A The state matrix of the grid-connected inverter system is derived in this section for the state-space stability analysis. The state equation can be represented as: dx/dt = A x + B u (A1), where x is the state vector, u is the input vector, A is the state matrix, and B is the input matrix. According to (A5), (A6), (A8), (A10), (A11), and (A16), x is [θ, x_PLL, i_gd, i_gq, v_od, v_oq, x_cd, x_cq, i_cd, i_cq]^T, and A is summarized in (A2). The eigenvalues of A are the poles that determine the system stability [2]. The details of the derivation are as follows. The relations (A3) and (A4) are first assumed for the dq-presented variables. Substituting v^s_oq of (A3) with (19) yields (A5), and substituting i^s_od and i^s_oq of (A4) with (20) yields (A6). The relation of the phase-locked loop in Figure 2 is then summarized; substituting the first v^s_oq with (A3) and the second v^s_oq with (19) yields (A8). The relation between the voltage and the current induced by Z_g is summarized in dq form and rearranged to give (A10). In the same way, the relation over C_f is arranged as (A11), the relation over Z_f is found, and finally v_c is derived from the current control, yielding (A16).
Providing Equity of Access to Higher Education in Indonesia: A Policy Evaluation DOI: 10.23917/ijolae.v3i1.10376 Received: February 25th, 2020. Revised: April 13th, 2020. Accepted: April 17th, 2020. Available Online: April 20th, 2020. Published Regularly: January 1st, 2021

Abstract In the last decade, Indonesia has worked towards expanding access to higher education, but the enrolment of the poor remains negligible, with the majority of students in the country's leading public universities still coming from Indonesia's wealthiest echelons. Concerned with the issue of equity and access, the government has formulated a new policy calling on all higher education institutions to ensure at least 20% of their newly admitted students are of a low socioeconomic status (SES). The principal challenge the government has faced is a discrepancy between its ambitious political agenda and the policy's implementation, affected by inadequate budgeting, lacking implementation mechanisms, and limited award allocations. This challenge raises the question of whether the Equity and Access Policy can be effectively implemented and, if so, under what conditions such success can be achieved. We thus examine the country's Equity and Access Policy, its education system with its leadership structure, the broader institutional framework, and how these factors interact to obstruct higher education access for the poor in Indonesia. The inadequate policy implementation can impede Indonesia's human capital development and the country's economic growth.

Introduction In the last decade, the Indonesian government has pushed to improve access to higher education, resulting in gross enrolment rates increasing from 17.23% in 2005 to 36.31% in 2018 (World Bank, 2019). However, the enrolment of the poor has remained low, especially in the country's top public universities. In 2010, only 2.5% of those enrolled at a higher education institution were from the poorest 20% of households, as compared with 64.7% of the student body coming from the wealthiest 20% of Indonesia's households (MOEC, 2013). Concerned with the issue of equity, the government created a new policy, Law 12/section 74 (hereafter, the Equity and Access Policy), calling on all higher education institutions (HEIs) to enroll at least 20% of their students from low socioeconomic backgrounds and tasking the Directorate General for Higher Education (DGHE) with its implementation. The primary challenge to this policy's implementation is that both public and private HEIs have autonomy in budgeting and funding allocations. DGHE's jurisdiction over HEIs is limited to a supervisory role via each university's Board of Directors. This has created a tension between the government's ambitious political agenda and the lacking implementation mechanisms, including but not limited to inadequate budget tracking and award allocations to poor students. This challenge raises the question of whether the Equity and Access Policy can be effectively implemented. Thus, various structural and organizational factors are evaluated to determine whether and how they work to obstruct the policy's implementation. Conceptually, the study addresses the issues of educational equity, social justice, and economic development. We analyze one of the most pervasive problems of higher education globally: limited access to higher education by the poor.
If higher education provides merited social mobility (Sabic-El-Rayess, 2012; Turner, 1960) and improves human capital (Schultz, 1961, 1981; Prakhov, 2019), then it is in the interest of each nation to promote the mobility of the poor through education. But, as prior research has found (Sabic-El-Rayess, 2012; Sabic-El-Rayess & Mansur, 2016; Sabic-El-Rayess & Seeman, 2017; Sabic-El-Rayess & Otgonlkhagva, 2012; Sabic-El-Rayess, Mansur, Batkhuyag & Otgonlkhagva, 2019; Moratti and Sabic-El-Rayess, 2009a and 2009b), elite mechanisms and interests often prevail over those of the poor and are at play in most social, economic, educational and political hierarchies globally. Instead of broadening access, higher education typically serves to reproduce the existing social strata (Sabic-El-Rayess & Mansur, 2016) unless there is political will to interrupt the status quo and expand access to the poor. To examine whether the political will to transform and improve equity and access to higher education in Indonesia is genuinely there, the design and implementation of the Equity and Access Policy is evaluated. This evaluation is situated within a larger debate on access and equity in higher education, and our findings are contextualized within Foucault's (1997) social justice and ethics framework, which calls on researchers to question all assumptions (Sabic-El-Rayess et al., 2019; Foucault, 1997). The assumption here is that the narrative around the Equity and Access Policy aims at broadening access for the poor, but that discourse may conflict with the implementation. There are other examples of education policies where the outcomes not only diverged from broadening access to education but instead adversely impacted the poor. For instance, the school uniform policy in Mongolia aimed at easing access to education for the poor, but once implemented, it led to higher dropout rates amongst the poor (Sabic-El-Rayess & Otgonlkhagva, 2012). Thus, we question and probe the Equity and Access Policy to determine whether, and which, factors prevent it from achieving the intended outcome for the poor in Indonesia. We also recognize that providing financial aid to poor students can burden HEIs. In 2016, four years after the implementation of the Equity and Access Policy, poor students made up only 10% of Indonesia's total higher education enrolment (Directorate General for Higher Education, 2017). The federal government funds allocated differ greatly across HEIs in Indonesia, and there is no specification of how much of the federal funds individual institutions are entitled to, which elevates the risk of corruption, misallocation of funds, and subjective decision-making that can, intentionally or not, affect the poor's access to HEIs (Welch, 2012). Corruption in education has been studied extensively as a barrier to merited social mobility, inclusion, and economic development (Sabic-El-Rayess, 2009, 2011, 2016a). Systemic corruption has also been identified as a trigger of students' disengagement and even radicalization (Sabic-El-Rayess, 2016b; Sabic-El-Rayess & Mansur, 2020). Thus, a sufficient and effective allocation of funds is essential to the success of any policy. We end with a University of Gadjah Mada (UGM) case study, as UGM is one of the nation's leading public universities. The inadequate policy implementation, we ultimately argue, can impede Indonesia's human capital development and the country's economic growth.
Indonesia consists of over 17,000 islands and, with its 260 million people, ranks as the world's 4th most populous country. It is abundantly rich in natural resources but lacks human capital, with many Indonesians not advancing their knowledge and skills due to poverty or the underdeveloped educational infrastructure in the regions of Indonesia where they reside. The country's current level of public education spending stagnates at 3.6% of GDP and is lower than what is advisable for developing economies (Dilas, Mackie, Huang & Trines, 2019). Access to and availability of HEIs is still limited in underdeveloped areas of Indonesia, forcing many individuals to move to cities to obtain an education (OECD, 2013). Transportation and relocation costs are additional costs and entry barriers for the poor (Logli, 2016). Despite these obstacles, the demand for higher education has increased over the years, and total enrollment has grown by 68%, from 3.7 million in 2006 to over 6.1 million in 2016 (WES, 2019). Secondary school graduation rates have also improved, from 61.7% in 2013 to 65.9% in 2017 (OECD, 2020), stimulating demand for higher education. Here, we define higher education as all post-secondary schooling, including vocational, academic, or professional schooling. HEIs, from academies, polytechnics, colleges, and institutes to universities, provide various programming for their students. The number of these institutions has grown rapidly over the last couple of decades, with most growth occurring in the private sector (Moeliodihardjo, 2014). Of 3,200 institutions, 92 are public and therefore governmentally managed (OECD/ADB, 2015). Public institutions are differentiated based on their accreditation and level of autonomy in governance and financing. The Ministry of Education and Culture (MOEC), along with the DGHE, regulates most public institutions, but some have their by-laws approved by the MOEC and the President. The implication is that, for most public institutions, the budget and related indicators are set by the MOEC to ensure that the institutions' individual strategies are closely aligned with the national agenda (Negara & Benveniste, 2014). State-owned HEIs are typically more autonomous relative to other public universities. They have independent sources of revenue and are more accountable to the public than to the MOEC (Yulianto, 2017). Private HEIs have expanded access to higher education as well: they account for over 60% of enrolment nationally (OECD/ADB, 2015). The urban-rural gap in the provision of higher education remains significant. Only a few provinces are home to 80% of the top HEIs (Logli, 2015). For instance, the 10 best universities are located on the island of Java (QS, 2019). In 2012, the gross enrollment rate for higher education in Indonesia's capital, Jakarta, was 122%, as compared to West Papua's meager 22% (OECD, 2013). Growth in higher education needs to occur across all regions of Indonesia to meaningfully diversify access. In an effort to broaden that access, the DGHE has set out to establish community colleges with vocational programs in all districts of Indonesia (OECD/ADB, 2015). As of 2013, 35 community colleges were being developed in rural Indonesia (Logli, 2016). The government intends to build additional technical institutions outside Java, along with institutes of technology on the islands of Sumatra and Kalimantan, as well as additional polytechnics in every province (OECD, 2013).
Despite challenges, demand for higher education remains high across all social quintiles, partly due to the higher rate of graduation from secondary schooling (Figure 1). Most institutions are currently located in the Java (43.7%) and Sumatra (29.1%) island groups, while the underdeveloped provinces of the Maluku and Papua islands host only 3.4% of all HEIs (Moeliodihardjo, 2014). Consequently, the increase in higher education graduation rates has largely been limited to the Java and Sumatra island groups (Figure 2). The urban-rural gap in the number of institutions available correlates with the stark difference in university educational attainment between individuals living in urban versus rural areas, with higher education attainment rates at 10.42% and 2.97%, respectively (OECD/ADB, 2015). Higher education graduates are increasingly joining the urban labor force, while the number of employees with higher education in rural areas has more than doubled in percentage terms (MOEC, 2014). The returns to education in rural areas have declined, but this can, in part, be explained by the lower growth rate of the rural labor force overall (MOEC, 2014). Another possible explanation for the urban-rural difference in returns lies in the type of earnings: urban earnings are typically obtained through salary, whereas workers in rural areas earn agricultural income (Van Cao and Akita, 2008). To ensure that earnings in rural areas expand to other industrial sources, different skills must be learned, which necessitates higher education expansion. Access to higher education consistently benefits the elites. The majority of higher education students are from the two richest quintiles in Indonesia (MOEC, 2014). At least 80% of what Indonesia spends on higher education enriches the experience of the wealthiest 40%, while more than 60% of it benefits the wealthiest 20%: in other words, the current public funding structure in Indonesia helps those who are already doing well (Gao, 2015). This parallels the obvious lack of support for poor students, expectedly compelling them to opt out of higher education in favor of either technical or vocational training. Those less able to afford higher education are more likely to enroll in lower quality degree programs, consequently obtaining lesser returns in the labor market (Gao, 2015). Yet their broader participation in higher education is critical for producing a more qualified labor force that would advance Indonesia's economy. Human capital theory (Schultz, 1961, 1981) points to formal education's role in increasing labor productivity and producing various benefits, individual and societal. Though the higher education return for an individual is economically quantifiable via wages and lifetime earnings (Schultz, 1961; Becker, 1964), investing in people via education additionally provides measurable non-pecuniary benefits to health, fertility, consumption, savings, behavior and societal participation (Hartog & Oosterbeck, 1998; Doyle & Weale, 1994; Solmon, 1975; Becker, 1993; Tran, 2019). The social benefits and spillover effects are regularly reflected in improved public health, less crime and poverty, wider use of technology, and extended benefits to democracy, human rights, political stability, and the environment (McMahon, 2010).
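To make the notion of economically quantifiable returns concrete, the following is a standard illustration from the human capital literature (the Mincer earnings function and a present-value enrollment rule), sketched here for exposition rather than drawn from the Indonesian policy documents; all symbols are generic placeholders.

% Mincer earnings function: log wage as a function of years of
% schooling S and labor-market experience X; \beta_1 is read as the
% average private return to one additional year of schooling.
\[
  \ln w \;=\; \beta_0 + \beta_1 S + \beta_2 X + \beta_3 X^2 + \varepsilon
\]

% Present-value enrollment rule: a prospective student enrolls when the
% discounted lifetime earnings gain from higher education exceeds the
% direct cost C (fees, transport, relocation) at discount rate r over
% working horizon T.
\[
  \sum_{t=1}^{T} \frac{w_t^{\mathrm{HE}} - w_t^{\mathrm{noHE}}}{(1+r)^{t}} \;>\; C
\]

On this reading, the entry barriers noted above (tuition, transportation and relocation costs) raise C for the poor, while enrollment in lower quality programs shrinks the earnings differential in the numerator, so access is suppressed from both directions.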
Financial aid programs have been deployed successfully in established higher education systems to assist the financially disadvantaged in accessing higher education. In the US, the 1965 Higher Education Act (HEA) was enacted with the objective of strengthening college and university funding. The conventional human capital view of how student aid works is seeded in the belief that individuals behave rationally and make decisions "that maximize their expected happiness, or 'utility' over time" (Goldrick-Rab et al., 2009). The authors go on to suggest that, in lieu of short-term income, college graduates improve their longer-term consumption and leisure, given that higher education provides not only pecuniary but also numerous non-pecuniary benefits. The financial aid programs that resulted from the HEA have increased enrolment, college choice, and completion rates for aided students (Dynarski & Scott-Clayton, 2013; Dynarski, 2003; Klaauw, 2002). This expansion of higher education access has helped equalize college costs for all, ultimately benefiting economic growth in the United States.

Method

a. Key Stakeholders

Indonesia is a democracy with a multi-party system. In 2019, there were officially 27 political parties participating in the General Election (Andayani, 2017). The presidential and vice presidential candidates work to garner enough support from other political parties to win the election and create a governing coalition. Once they win, these political parties control the key positions in the government, making them influential in the policymaking process. Several ministries help govern HEIs in Indonesia, such as the Ministry of Religious Affairs (MORA), the Ministry of Research, Technology and Higher Education (MORTHE), and the Ministry of Finance (MOF). Under MORTHE, higher education affairs are managed by the DGHE. This ministry oversees HEIs except for the religious institutions, which are monitored by MORA, while other governmental institutions oversee the 82 tertiary education service institutes in charge of training personnel in governmental ministries (OECD/ADB, 2015). The Ministry of Finance approves the budget for higher education mostly through the DGHE, primarily supporting public higher education, with only 8% to 10% of the overall funding being used for private institutions (OECD/ADB, 2015). MORTHE, through the DGHE, influences institutional leaders in higher education. Government Regulation 66/2010 specifies the governing structure of public HEIs, with four entities playing a key role: the Rector, who manages the individual institution and chairs the Senate; the Senate, which is responsible for academic affairs; the Oversight Unit, which oversees all non-academic and financial matters; and the Advisory Board, which assists with non-academic issues (World Bank, 2014). The Senate decides on the pool of candidates for the Rector position, though the Minister of Education and Culture holds more than one third of all voting rights on who ultimately gets that job (DIKTI, 2017). The Rector, along with the Minister of MORTHE, has the most influence and control over the management and operations of public HEIs. The Senate is the most powerful entity, but it is limited in its independent decision-making given that it is overseen by the Rector, who works closely with the Minister (Gao, 2015). The Oversight Unit and the Advisory Board lack sufficient power to secure HEIs' accountability to stakeholders and the public (Gao, 2015).
Therefore, while public institutions have different levels of autonomy in governance, operations and finance, their actions remain influenced by the ministries. Without more autonomous and independent managers and leaders in Indonesia's higher education structure, it may be challenging to formulate and successfully implement effective policies.

b. Agenda Setting

Indonesia lacks qualified STEM professionals, creating a dependency on partnerships with foreign companies that manage Indonesia's natural resources and technological growth. The World Bank (2010) has underscored Indonesia's need for trained labor, with the country currently in demand of at least 50,000 engineers annually, a number expected to double by 2025 (ICEF Monitor, 2014). When everyone, regardless of their background or economic circumstances, is able to access higher education, they can contribute productively to the economy, which is why a comprehensive system of financial aid in HEIs is integral to providing equity and access. This realization is one of the main forces behind the Equity and Access Policy. The current movement to improve the higher education system in Indonesia originated as part of the National Medium-Term Plan (NMTP) for 2010-2014. The plan details the Indonesian President's agenda stemming from the National Long-Term Development Plan for 2005-2025 (Republic of Indonesia, 2010). The long-term plan is comprehensive and contains insights on the national development strategy and various policies and programs (Republic of Indonesia, 2010). In line with the long-term vision for the country, the NMTP aimed at broadly reforming Indonesia's economy by improving the quality of human resources, particularly in science and technology (Republic of Indonesia, 2010). Fulfilling this ambitious goal requires the development of Indonesia's higher education sector. In 2015, Indonesia's enrolment rate in HEIs (36.31%) was higher than that of Vietnam (28.54%), the Lao People's Democratic Republic (14.97%), and the Philippines (35.48%), but behind that of Malaysia (45.13%) and Thailand (49.29%) (OECD/ADB, 2015). Indonesians aged 25 to 64 have low higher education attainment relative to Thailand, Singapore, and South Korea, with only 4.6% of Indonesia's 2010 workers holding a university degree (DIKTI, 2016). Furthermore, the country is behind Thailand, the Philippines and Malaysia in labor productivity (OECD/ADB, 2015). The main development target in higher education under this agenda was to increase the gross enrolment rate at universities among individuals aged 19 to 23 from 21.26% in 2008 to 30% in 2014 (DIKTI, 2016). To reach this target, Indonesia hoped to provide affordable, quality higher education throughout the nation. But meeting this target requires not only cooperation from MORTHE but also collaboration with other ministries that have their own training programs at tertiary education service institutes. The coalitions of political parties that control the ministries also play a critical role in implementing this agenda. Previous reforms aimed at ensuring quality of, and access to, higher education across the country. The first law regarding higher education was enacted in 1961 and is still in effect, requiring at a minimum one public university in each province to broaden access to higher education across Indonesia (Brewis, 2016).
In 1996, as part of the Long-Term Education Plan, the MOEC worked towards both more autonomous and accountable institutions as well as improved accreditation and evaluation processes (World Bank, 1998). The Government Regulation on the Implementation of State Universities as Corporate Bodies (1999) led to four top state universities gaining financial and academic autonomy via independent revenue generation, research structures, independent programming, and overall more autonomous management (Brewis, 2016). The author further suggests that the Government Regulation on Management and Governance of Education 2010 spelled out both management and policymaking frameworks at the national level. The Higher Education Law 12/2012 (Republic of Indonesia, 2012) defined an ambitious agenda for higher education to serve as a foundation for the nation's intellectual, scientific and technological advancement and global competitiveness and, in doing so, to advance equity in access to higher education (OECD/ADB, 2015). More importantly, this law is the first to include an equity- and inclusion-centered policy specifically for the admissions process, in an effort to ensure access for individuals from the lowest SES group. As part of the initiative to ensure broadly accessible higher education in all parts of the country, the government included Section 74 in the Higher Education Law 12, which states that higher education institutions must allocate 20% of total enrolments across study programs for students who have high academic potential but come from low economic backgrounds and "frontier, outer and disadvantaged areas" (Republic of Indonesia, 2012).

c. Policy Implementation: Top-Down vs Bottom-Up Approach

Every policy is implemented in stages, and various actors often exert their influence before policy outputs are decided upon. Sabatier & Mazmanian (1989) argue that implementation often involves initiating action via court, executive or statutory decision. Further, they suggest the process may involve passing the basic statute, deciding on policy outputs, and aligning the implementation steps with the policy goals to minimize discrepancies between the actual impact and the intended effect, which can at times lead to additional revisions to the implementation process. As per Hjern (1993, p. 250): "[t]he implementation process is structured through a policy network, which is composed of interconnected clusters of firms, governments and associations, termed the implementation structure." The existing implementation models are still debated, with two polarizing schools of thought, the top-down versus the bottom-up model: Sabatier (1986) categorized these approaches by differentiating how their implementation begins and how their effectiveness is measured. Top-downers begin their analyses at the top of the implementation process and structure, focusing on the policymakers' (top government officials') perspectives, interests, and goals, then move down to analyze implementation at the operational level, particularly the behavior of the implementing officials. Implementation is understood as making lower-level officials take actions that materialize the intent of a formally adopted policy (Smith & Larimer, 2008). It is deemed effective and successful if the policy objectives set by those at the top are attained at a reasonable cost.
Bottom-uppers begin their analyses from the bottom of the implementation process and structure, focusing on the perspectives, interests, demands, conflicts, and strategies of stakeholders at the operational level ('street-level officials') (Lipsky, 1980). This format allows lower-level officials to negotiate changes by introducing new regulations or changes stemming from experiments within the target population (Smith & Larimer, 2008). Under both approaches, success is equated with a policy problem being solved.

d. Blended Top-Down Approach

The Equity and Access Policy was designed using a blended top-down approach, synthesizing both perspectives. This strategy, suggested by Sabatier & Mazmanian (1980), uses the strengths of both approaches and addresses their weaknesses. The authors propose the following six prerequisites for effective policy implementation:

1. well-defined policy objectives;
2. solid theoretical backing of the policy in question;
3. a legal structure to ensure compliance by all stakeholders;
4. skilled implementation personnel;
5. broad policy support amongst powerful players; and
6. a favorable socioeconomic context that maintains feasibility and political support for the policy.

In building this approach, they rightly accounted for the concerns of the bottom-uppers (Lipsky, 1980; Elmore, 1979) regarding politicians' dominion over policy implementation as well as the necessity of having well-trained personnel at all levels of policy implementation. Both legal and political mechanisms must be in place to direct and restrain the behaviors of the involved officials and other parties (Sabatier & Mazmanian, 1980). It is for this reason that the blended top-down model cannot be defined as a purely top-down approach, since it also recognizes the relevance of the complex political and legal power structures at local levels. We employ this approach to assess the Equity and Access Policy by considering the government and political system, relevant legislation, and the skills of government officials. The approach also recognizes that policy implementation does not occur in a political or power vacuum at any level of the power structure. It is essential to understand the power dynamics amongst various stakeholders at all levels to gauge a policy's potential for successful implementation. As Sabic-El-Rayess and Mansur (2019, p. 9) note, policy goals "can be appropriated by less than altruistic actors…for their gain." In the absence of proper implementation mechanisms, deviations from the planned path can occur and risk a failed implementation. In democratic countries, policymaking and implementation activities often consider the perspectives of policy stakeholders and assess the conditions for implementation at all levels. Bottom-up approaches may be more relevant in analyzing the implementation process in democratic regimes, where a diverse pool of actors engage in the implementation process (Brata, 2014). Top-down theoretical models elucidate the implementation process in authoritarian regimes, where power and control are centralized (Brata, 2014). Still, many policies in democratic contexts are implemented using the top-down approach while some aspects inherent to the bottom-up approaches are considered.
In the case of Indonesia, evaluating the policy implementation process requires understanding the perspectives, interests and goals of the policymakers at the top level as well as those of the implementers at the bottom of the implementation structure. Sabic-El-Rayess and Mansur (2019, p. 13) agree with the Foucauldian perspective that "power is not always structural, but rather present in all around us," further suggesting that poorly implemented policies only further "embed the class hierarchies more deeply into the mindsets of the poor." Indonesia has transitioned over the last 15 years, after decades of authoritarian regime, into a more decentralized state, where the country's political reforms localized power along with financial resources that are now more broadly distributed to the regencies, municipalities and provinces (Nasution, 2016). The author further suggests that local communities are increasingly responsible for actions in the arenas of health, education, public works, environment, transport, and the economy. During such a transition, the state's growing bureaucratic capacity has been essential to achieving greater economic development and securing broader autonomy of the state (Addison, 2009). Weak bureaucracy is a barrier to policy reform. In addition, governance theories elaborate on the complex networks of administrative and governmental actors who shape the policy implementation process (Robichau & Lynn, 2009). For this reason, the interplay between multiple actors across different levels of government should be well understood to determine their influence and control over policy outcomes (Robichau & Lynn, 2009). Thus, the engagement of lower-level bureaucrats should not be deemed insignificant in any policy implementation in Indonesia. A strong legal and political mechanism must be in place, established by top-level policymakers. Well-thought-through plans, goals and theoretical frameworks undergirding the policy should be clearly explained and shared with the relevant stakeholders. A policy tool often used in top-down approaches is the mandate, and this tool is appropriate here. Mandates are intended to guarantee compliance of all parties involved, but their significant downside is that their enforcement and supervision may be expensive (McDonnell & Elmore, 1987). In most implementation cases, a task force or committee involving actors and stakeholders from both the top and operational levels is formed to implement policies (Brata, 2014). For the Equity and Access Policy, however, no new task force or committee was formed to oversee the implementation of the policy; instead, the responsibility of carrying out this policy was given to the DGHE in the MORTHE (Rahmawati, 2016). The use of mandates and having an enforcing agency responsible for carrying out the policy signal the extent of the policy's importance to the top-level policymakers.

Results and Discussion

The Equity and Access Policy requires that one fifth of all students in Indonesia's HEIs come from the two bottom income quintiles, but that goal has not yet been achieved. Several initiatives have been pursued to ensure that this occurs. One of them includes adjusting fee levels for undergraduate programs at public institutions outside of the top-tier autonomous public universities. Four types of scholarship programs financially assist students with both fees and living costs.
However, in 2014, only 10% of students received scholarships or financial assistance, well below the 20% target set by the government for that year. This section discusses the political and institutional factors that might have impeded implementation success throughout Indonesia. The implementation process will be evaluated against the blended top-down framework discussed earlier. A case study of one of the top-tier autonomous public institutions will be introduced to contextualize and exemplify the implementation of the policy. The stakeholders, agencies and target groups of the Equity and Access Policy affect the policymaking and implementation process at different stages. Higher Education Law 12 of 2012 originated by order of the country's President, in line with the National Medium-Term Plan (RPJMN) of 2010-2014. This step was followed by a legislative order whereby the proposed law was signed by at least 10 members of the House of Representatives (DPR) (Republic of Indonesia, 2016). Once executed, it was then brought to the House of Representatives for discussion with special committees, legislation bodies and budgeting teams. These teams were responsible for finalizing all parts of the law and the implementation process, and for determining which departments and agencies would be responsible for carrying out the law. Special interest groups, business conglomerates and various private institutions often (indirectly) influence sections of a law at this stage (Retnoningsih & Marom, 2014), usually by lobbying the representatives involved in the process. Once the law is formally finalized, it is brought to the floor of the House of Representatives and put to a vote. If passed, it is signed by the President. The President reserves the right to make any final changes or to veto any of the sections of the law; if changes occur, the law is returned to the House of Representatives for another vote. Law 12 of 2012 was successfully passed and signed by then President Susilo Bambang Yudhoyono on August 10th, 2012. Both MORTHE and MORA are responsible for carrying out the law, with the DGHE overseeing and implementing the law within HEIs across Indonesia. The law itself was passed with relative ease, as it was strongly supported by the President and the coalition of political parties in the House of Representatives (Rahmawati, 2016). The law underscores that "a well-planned, guided and sustainable approach to HE governance" is a foundation for the "realization of social justice in access to HE that is of high quality," but also in the interest of "development, independence and prosperity" (Republic of Indonesia, 2012, p. 31). The top-tier autonomous public institutions also supported the policy, as it gave them more autonomy in the governance and operations of their universities (Brewis, 2016). Since the policy did not apply to private institutions, they did not voice concerns about its potential implications. Evaluating the implementation process using the blended top-down approach, it is evident that the policy has a solid objective with an adequate theory of change underlying it, formulated by the relevant bureaucrats and officials. The legal and procedural structure of the implementation process further ensured that all parties followed through, because there is a clear policy mandate along with the DGHE overseeing compliance.
The socioeconomic context and any related changes did not weaken the political rationale and support for the policy, particularly given that it was designed to help and meet the demands of those with lower SES status.

a. Implementation Breakdown

The implementation of the Equity and Access Policy seems to break down when institutions attempt to set and enforce the regulations mandated to them. There were obstacles and tensions regarding coordination between the DGHE and the universities, both technically and management-wise, because there was no direct regulation that pressured universities to implement this policy. When this occurs, any new policy finds itself in competition with other initiatives for the universities' limited resources. Individual players' interests and goals may further complicate the existing policy dynamics, particularly if the policy is not aligned with the universities' strategic vision and agenda. Any policy is dependent on the political willingness to implement it as well as the capacity to do so, but the process also requires that the parties involved be both pressured and supported during implementation (McLaughlin, 2009). Further, and in line with the Foucauldian perspective (Sabic-El-Rayess & Mansur, 2019; Foucault, 1997) on power being present in every context and action: "vague mandates and weak guidelines provide the opportunity for dominant coalitions or competing issues to shape program choices" (McLaughlin, 2009, p. 173). Most public institutions in Indonesia have limited resources; therefore, if the DGHE's support was technically or financially limited, local stakeholders would focus on other programs or policies in their institutions. More importantly, there were no additional incentives or sanctions attached to the implementation of the policy. Tuition at public institutions is determined based on income, with poor students being subsidized by wealthy students' tuition (Retnoningsih & Marom, 2014). The government, via the MOF, does not allocate additional subsidies to support more students from low SES backgrounds; it only continues to provide the already existing merit-based financial aid programs. Additionally, no sanctions were imposed on institutions that did not achieve the policy's objectives, since institutions were only told to 'seek & screen' potential students. In contrast to the affirmative action policies in higher education in India, for instance, there is no legal structure that secures representation of the poor in HEIs (Boston & Nair-Reichert, 2003). In other words, for a successful policy implementation, the implementing body should both pressure and offer support, using a combination of incentives and sanctions. As of 2016, the enrolment rate for those in the two lowest SES quintiles was still well below 20%, reported to be around 10% (DGHE, 2017). The policy has been in place for a few years, and serious concerns should be raised regarding the efforts made by the institutions, as well as by DIKTI, to achieve the 20% enrolment goal. The current implementation framework clearly lacks political and institutional commitment and skilled personnel, coupled with limited 'push' from the power holders (Retnoningsih & Marom, 2014). While Indonesia is officially a democratic country and has adopted a decentralized approach to governing, it is still very much centralized in practice.
Local governments and their bureaucrats still lack the capacity to independently govern and plan economic initiatives that would promote local economic growth, as evidenced by the local governments' dependence on support from the central government (Nasution, 2016). Students did not voice concern or discontent about this policy, likely because they were not fully informed (Retnoningsih & Marom, 2014). Also, when students observe systematic failures in higher education institutions, their "voice mechanism [is] severely diminished in its power," so "students often remain passive, and if they voice their dissatisfaction, they do so less aggressively" (Sabic-El-Rayess, 2014, p. 74). The absence of interest groups is also noticeable, which means that they could not assist in lobbying for, and ensuring accountability of, the policy implementation at the institutional level. The lack of 'push' from the 'street-level' stakeholders, along with the low capacity of those bureaucrats, contributed to the Equity and Access Policy's implementation breakdown.

b. Financial Aid Programs

In this section, the study analyzes the different types of financial aid programs implemented by public institutions in an attempt to support those who need it, and pinpoints where efforts can be improved to guarantee that the goals and objectives of the policy are achieved. Over the past decade, the government has introduced several financial aid programs to address the inequity in access to higher education. A full scholarship program called BidikMisi started in 2010. This program has supported academically strong but poor high school graduates throughout an entire 4-year bachelor's degree or a 3-year diploma program. The demand for the BidikMisi program rose constantly from 2010 to 2015; however, the quota of students receiving these scholarships has not risen accordingly. Figure 3 shows the number of applicants (light green) compared to the number of scholarships available in the program (orange). The number of applicants increased by 600% from 2010 to 2014, while the number of scholarships rose by only 300%. Another financial aid program provided by DIKTI comprises BBM and PPA, which work to lessen the college dropout rate. These programs support poor students with stellar academic and non-academic backgrounds, measured by a high GPA or success in sports or arts, who attend either public or private universities (DGHE, 2017). Despite these efforts to provide financial aid programs that target the poor, institutions are still unable to meet the requirement of the policy. All of these financial aid programs are also designed to be merit-based, and continued access to a scholarship requires students to maintain above-average academic grades. Students who receive BidikMisi scholarships have demonstrated outstanding academic performance: around 75% have maintained a GPA of 3.0 and above (Figure 4). However, this requirement does not fully address the issue of equity in access, especially for students from low SES backgrounds, since most of them do not have access to quality K-12 education and are therefore not academically ready to perform or meet the requirements set by the institutions. Given this context, there may not be a sufficient number of qualified low SES students to meet the requirement of 20% enrolment in the institutions.
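To gauge what the diverging trends in Figure 3 imply for an individual applicant, a back-of-the-envelope calculation may help; the baseline values A (applicants) and Q (scholarship quota) below are generic placeholders, with only the percentage changes taken from the text.

% A 600% increase takes applicants from A to 7A; a 300% increase takes
% the quota from Q to 4Q. The award rate per applicant therefore falls:
\[
  \frac{4Q}{7A} \;=\; \frac{4}{7}\cdot\frac{Q}{A} \;\approx\; 0.57\,\frac{Q}{A}
\]

That is, even with the quota quadrupled, an applicant's chance of receiving a BidikMisi scholarship in 2014 was, in relative terms, roughly 43% lower than in 2010.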
It is possible that institutions themselves are downsizing the applicant pool of students from poorer backgrounds because these students would require more subsidies. This paper thus includes a case study of the University of Gadjah Mada and examines its financial reports in order to understand its financing structure and its policy implementation efforts.

c. Case Study: University of Gadjah Mada

Over the years, the University of Gadjah Mada (UGM) has worked to admit students based on merit while accounting for their diverse backgrounds. The institution first opened its doors in 1949, and since then has used its limited resources to lead the "affirmative action" effort and accept students nationally (Logli, 2015). About 50% of the student pool is accepted based on standardized testing and the other half is invited (University of Gadjah Mada, 2010). High schools suggest between 5 and 50 percent of their top graduates for admission to UGM (University of Gadjah Mada, 2010). In spite of these efforts to diversify its student population, 83 percent of UGM's students still come from Java (Logli, 2015). Figure 5 presents the sources and uses of income for UGM from 2013 to 2016. Considering that the policy was implemented in 2012, it is evident that the funds used for financial aid programs have increased since then, indicating institutional efforts to recruit more students who would receive scholarships. The size of scholarship funds (Sumbangan Pembinaan Pendidikan) steadily increased from 2013 to 2016, after the policy was implemented, and was well above that of the years prior to the policy. However, the enrolment rate for students from low SES backgrounds has not risen accordingly: the number of low SES students has remained at around 10% even after the policy's implementation (University of Gadjah Mada, 2016). Therefore, it is possible that it does not matter how much the University intends to allocate to financial aid programs, as there are not enough students from low SES backgrounds to choose from who meet the admissions standards set by the universities. This may be occurring due to the limited number of students who qualify for scholarship funding in its current form, further pointing to the importance of quality K-12 education. The existing discrepancy between the funds available and the number of scholarship students is rather discouraging, since the intent of the policy is to secure equitable access for those who need it most. In India, students from lower castes fill all seats reserved for those castes and are provided remedial courses to support their success at the higher education institutions they attend (Boston & Nair-Reichert, 2003). Currently, Indonesia lacks the systematic change needed to ensure that students from low SES backgrounds are academically ready and able to meet the requirements of the institutions.

Conclusion

Indonesia's efforts to address the inequity of access to higher education, specifically for those from low SES backgrounds, have been admirable. Law 12, Section 74, or the Equity and Access Policy, was implemented to ensure that 20% of enrolment in all higher education institutions across Indonesia is made up of students from the two lowest SES quintiles. After careful examination of the implementation process, there appear to be two major faults in how the policy is designed and adopted at the institutional level.
The first failure occurs at the institutional level, where there are no direct regulations and mechanisms recommended by the DGHE that either support or exercise pressure on the institutions to adopt the policy. The second failure lies in the design of the policy itself, as the issue of inequitable access to higher education correlates with a similar problem of lacking access to quality K-12 schooling. The financial aid programs available at institutions to address the needs of low SES students are imperfect, since the aid is often contingent on students' academic achievement both when applying to the institutions and while enrolled. Inadequate preparation and inequitable access to quality K-12 education greatly diminish the pool of low SES applicants from which institutions can choose. Understanding student backgrounds fully and holistically is critical to envisioning adequate assistance for Indonesia's poor. To ensure equity and improve access for the poor, systematic change is needed to assist those who are disadvantaged financially and academically in pursuing post-secondary education. Affirmative action policies are crucial in bringing equity to groups that have been continuously at a disadvantage. Specific provisions and policies must be implemented to assist institutions in seeking out and supporting those students so that they are provided with the necessary support to enroll and study in the institutions. Additionally, institutions such as community colleges should be continually expanded throughout the provinces of Indonesia. Community colleges can be the social mobility vehicle that provides equitable access to post-secondary education for those from low SES backgrounds, as they provide a 2-year post-secondary education at a lower cost and are not as academically rigorous in their requirements. Securing broader access and growth in higher education, and in the education system overall, is critical to developing Indonesia's human capital, but that will not occur without strong political will to decisively enforce policies on equity, inclusion, and access to higher education.

References

Addison, H. J. (2009). Is administrative capacity a useful concept? Review of the