MicroRNAs and cell fate in cortical and retinal development MicroRNAs (miRNAs) are involved in crucial steps of neurogenesis, neural differentiation, and neuronal plasticity. Here we review experimental evidence suggesting that miRNAs may regulate the histogenesis of the cerebral cortex and neural retina. Both cortical and retinal early progenitor cells are multipotent, that is, they can generate different types of cortical or retinal cells, respectively, in one lineage. In both cortical and retinal development, the precise timing of activation of cell fate transcription factors results in a stereotyped schedule of generation of the different types of neurons. Emerging evidence indicates that miRNAs may play an important role in regulating such temporal programming of neuronal differentiation. Neuronal subtypes of the cortex and retina exhibit distinct miRNA signatures, implying that miRNA codes may be used to specify different types of neurons. Interfering with global miRNA activity changes the ratio of the different types of neurons produced. In fact, there are examples of cell fate genes that are regulated at the translational level, both in retinogenesis and in corticogenesis. A model depicting how miRNAs might orchestrate both the type and the birth of different neurons is presented and discussed. Glossary • Lineage: the temporally ordered cell progeny of an individual progenitor cell. 
• Specification: the (reversible) process by which a cell becomes capable of, and biased toward, a particular fate. • Commitment: the process by which cell fate is fully determined and can no longer be affected by external cues. • Competence: a cell condition linked to temporal identity. It can be defined as the ability of a progenitor cell to respond to a signal and become a particular type of neuron. • Potency: the entire complement of cells that a progenitor can ultimately produce. • Multipotency: the ability to give rise to more than one cell type. • Progenitor: a dividing cell that, in contrast to a stem cell, cannot proliferate indefinitely. • Antago-miR: modified antisense oligonucleotide that blocks the activity of a miRNA. • Heterochronic neuron: type of neurons that is generated at inappropriate times of development. Neuron birth date: the time of the last mitosis of a neuronal cell. • Keywords: cortex, retina, cell-fate, heterochronic, timing, cell birth date, development GENERAL IMPLICATIONS OF miRNAs IN NEURAL DEVELOPMENT MicroRNAs (miRNAs) are a large family of non-coding RNAs of approximately 21 nucleotides in length, which inhibit gene expression at the translational level and are involved in the control of many developmental and cellular processes in eukaryotic organisms, including vertebrate neural development (Krol et al., 2010). miRNAs have been found to regulate many aspects of neural development, including the early steps in neurogenesis, the specification and differentiation of neural progenitor cells, brain patterning, and the plasticity of mature neurons (Coolen and Bally-Cuif, 2009;Fineberg et al., 2009;Bian and Sun, 2011). Examples of miRNAs involved in the specification of distinct types of mature neurons have also been described. miR-7a is expressed in a gradient opposing Pax6 along the ventricular walls and restricts its translation in the dorsal aspect. In vivo inhibition of miR-7a in Pax6-negative regions of the lateral wall induced Pax6 protein expression and increased dopaminergic neurons in the olfactory bulb (De Chevigny et al., 2012). miR-132 plays a key role in the differentiation of dopamine neurons by directly regulating the expression of Nurr1, which is one of the most important transcription factors in determining dopamine neuron development and differentiation (Yang et al., 2012). The overexpression of miR-181a and miR-125b increases the expression of dopaminergic markers and the ratio of tyrosine hydroxylase (TH) positive neurons generated by neural stem cells derived from human embryonic stem cells, whereas the inhibition of these miR-NAs impairs the generation of the dopaminergic subtype (Stappert et al., 2013). miR-9, which is reiteratively used in patterning, neurogenesis, and differentiation (Coolen and Bally-Cuif, 2009), also has a role in establishing distinct types of motor neurons. miR-9 is transiently expressed in a motor neuron subtype together with its target gene FoxP1, which determines distinct motor neuron subtypes. Consequently, miR-9 overexpression or knockdown switches columnar identities in developing chick spinal cords (Otaegi et al., 2011). Recent observations suggest that combinatorial miRNA expression may contribute to specifying neuron identity. The expression of a large fraction of known miRNAs with distinct expression profiles in glutamatergic and subtypes of GABAergic neurons has recently been demonstrated (He et al., 2012). 
In the mouse retina, a comprehensive survey of miRNA expression was achieved by in situ hybridization, revealing the expression of specific sets of miRNAs in distinct neuronal subtypes (Karali et al., 2010). Here we discuss the role that miRNAs may play in the generation of distinct types of neurons at different times in the development of layered structures. We will focus on the histogenesis of the neural retina and the cerebral cortex, where the role of miRNAs has been most widely investigated. CORTICOGENESIS AND RETINOGENESIS SHOW SIMILAR MECHANISMS FOR ESTABLISHING DISTINCT CELL FATES One main characteristic of the both retina and the cortex is that the identity of a certain type of mature neuron correlates with the time of its last division (cell birth date). Cortical projection neurons are derived from progenitor cells of the dorsal forebrain. After an initial phase of expansion, which is realized by symmetric divisions, progenitor cells of the ventricular zone (radial glia) start asymmetric divisions that generate new radial glia and either post-mitotic neurons (direct neurogenesis) or secondary (intermediate) progenitors. The net result is that the pool of progenitors does not deplete over the time and a single progenitor can generate a lineage made of different types of neurons with different birth dates. In the cortex, neurons with early birth dates are produced by primary (early) progenitor cells of the ventricular zone (radial glia) and populate the deep layers VI-V. Neurons with late birth dates, which fill the superficial layers II-III, are primarily generated by Tbr2-positive secondary progenitor cells of the subventricular zone (Leone et al., 2008;Sessa et al., 2008Sessa et al., , 2010 Figure 1A). By the time a young neuron has progressed through its final mitotic division, the cell has acquired the information needed to migrate to the layer typical of its birth date, independent of the environment. Cellular studies by transplantation experiments suggest a progressive restriction in the developmental potential of cortical cells. Early progenitors, which normally produce deep-layer neurons, are multipotent: these cells can directly produce upperlayer neurons when transplanted into an older brain environment (McConnell and Kaznowski, 1991). Conversely, the progenitors of layer IV-II neurons have lost the ability to form layer VI neurons if transplanted into younger brains (Frantz and McConnell, 1996;Desai and McConnell, 2000). In the retina, landmark studies of lineage-tracing have shown that early progenitor cells are multipotent and, likewise, early cortical progenitors can generate lineages containing different types of neurons (Turner and Cepko, 1987;Holt et al., 1988;Wetts and Fraser, 1988). The six types of neurons and the Müller glia making up the vertebrate retina are generated in a stereotyped sequence, with a correlation between cell birth date and cell fate, though with some overlap in the production of retinal cell types at any given time. Retinal ganglion cells (RGCs) are generated first, followed by the production of cone photoreceptors, horizontal cells, and amacrine neurons. Rod photoreceptors, bipolar neurons, and Müller glia are generated last ( Figure 1B). Retinal progenitors generate these different cell types by proceeding through intrinsically defined competence states, with a certain degree of influence of environmental cues. A growing list of transcription factors has emerged as key intrinsic regulators of cortical and retinal cell fate. 
Cortical progenitors sequentially activate a number of transcription factor genes that have the potential to determine the fates of their daughter cells. Early progenitor cells produce deep-layer neurons that express Fezf2 and Ctip2, which specify subcortically projecting neurons. Late progenitors generate upper-layer neurons expressing Satb2, which is required for the formation of axonal projections that connect the two cerebral hemispheres. Fezf2/Ctip2 and Satb2 pathways appear to be mutually repressive, thus ensuring that individual neurons adopt either a subcortical or callosal projection neuron identity (Leone et al., 2008). The molecular nature of this cross-repression is under scrutiny (Srinivasan et al., 2012). Interestingly, the Satb2 protein, in contrast to its mRNA, was not detected in late progenitors, but was detected in post-mitotic cells of the cortical plate, suggesting that a Satb2 translation block might occur in the progenitor cell (Britanova et al., 2005). Retinal cell fate specification is mainly regulated by combinations of bHLH and homeobox genes. In mice, Atoh7 (bHLH) and Pou4f2 (homeobox) cooperate to regulate RGC genesis. The expression of Prox1 (homeobox) is essential for horizontal cell generation, while a number of factors, including Neurod1 and Neurod4 (bHLH), Pax6 and Six3 (homeobox), regulate the production of amacrine cells. Crx (homeobox) is crucial for specifying photoreceptors, and Vsx2 (also named Chx10, homeobox) is required for bipolar cell genesis (Ohsawa and Kageyama, 2008). Notably, the Xenopus homologs of Crx and Vsx2 (Xotx5b and Xvsx, respectively) coordinate the production of photoreceptors and bipolar cells via a translational control mechanism. The sequential expression of the two Sry-related HMG box proteins Sox11 and Sox4 during retinogenesis leads to the fine adjustment of retinal differentiation. Overexpression of Sox11 and Sox4 in retinal progenitors increases the number of cone cells and dramatically decreases the number of rod cells and Müller glia, by acting through epigenetic mechanisms (Usui et al., 2013). Although key transcription factors of cell fate are known, how they are activated in distinct cells at specific developmental times is not clear. Consequently, the mechanisms responsible for shifts in competence over time in the lineage of a progenitor cell remain largely elusive. One important feature shared by the cortex and retina is that the potency of progenitor cells diminishes and their competence changes as they "age" during embryonic development. We do not know the precise sort of "clock" that measures a progenitor's age, though one possible way would be through the length of its cell cycle. In fact, during neural development the proliferation rate decreases over time as the progenitor cell cycle length increases (Caviness et al., 1995;Alexiades and Cepko, 1996;Decembrini et al., 2006). The proliferation rate of neural progenitor cells is regulated by the activation of a number of growth factor pathways. The activation of Wnt and fibroblast growth factor (FGF) pathways during cortical development supports the expression of cyclin D1 and shortens the cell cycle of progenitors, thus promoting proliferation, expansion of apical progenitors, and reduced generation of basal progenitors (Salomoni and Calegari, 2010). 
Wnts and FGFs, together with bone morphogenic proteins (BMPs), play a crucial role also in cortical patterning (Rubenstein, 2011) but they have not been shown to directly affect the establishment of distinct neuronal fates. The Shh pathway supports cell cycle progression, both in the retina (Wang et al., 2005;Locker et al., 2006) and in the mouse cerebral cortex (Komada et al., 2008). Interestingly, blocking the Shh pathway affects the histogenesis of both the Xenopus retina (Decembrini et al., 2009) and mouse cortex (Komada et al., 2008). In the Xenopus retina, this is caused by release from translational inhibition of Xotx5b and Xvsx, which are necessary for specifying the bipolar fate. Notably, shortening the cell cycle by E2F overexpression exerts opposite effects, thus supporting the idea that Shh acts on cell fate through the cell cycle machinery . Whether (and how) cell cycle progression relates to the clock controlling the competence of differentiation, and how this clock in turn regulates activation of the transcription factors that specify the distinct neuron types remain open issues. miRNAs AND CORTICAL HISTOGENESIS Most of our knowledge on the role of miRNAs in cortical and retinal histogenesis comes from analyzing the phenotypes observed after global loss of miRNA regulation, which is induced by disrupting the pre-miRNA processing enzyme Dicer. Conditional knock-out (CKO) of Dicer in the cortex was achieved after breeding Dicer:lox/lox mice with distinct forebrain Cre-driver mouse strains, including Nestin:Cre, Emx1:Cre, or FoxG1:Cre (De Pietri Tonelli et al., 2008;Kawase-Koga et al., 2009;Nowakowski et al., 2011; Table 1). A general effect common to different mouse strains driving early inactivation of Dicer in the cortex is the induction of cell death, because miRNAs target several players of the DNA-damage response signal-transduction network (Bailey et al., 2010). However, Dicer CKO also has profound effects on cortical layering. FoxG1:Cre;Dicer:lox/lox embryos deactivate Dicer from E8, and the effects on the expression of mature miRNAs are detectable by E11.5 in most forebrain cells. In these mice, neuroepithelial stem cell identity is not affected, but expression of the markers of radial glia Nestin, Sox9, and ErbB2 is abnormally low. Early telencephalic progenitors generate correct proportions of neurons after Dicer deletion, but many of those neurons migrate abnormally, possibly due to a defect in radial glia-guided migration. Moreover, the population of secondary (basal) progenitors, which are generated by the radial glia, is disorganized and expanded (Nowakowski et al., 2011). The depletion of miR-92b may play a crucial role in generating this phenotype. In fact, this miRNA is predicted to target the 3 untranslated region (UTR) of the transcription factor Tbr2, which regulates the generation of intermediate progenitors. Acute miR-92b gain of function causes rapid reductions in the ratio of Tbr2-expressing cells, whereas acute miR-92b loss of function has opposite effects (Nowakowski et al., 2013). Dicer CKO in dorsal forebrain cells has been achieved with Cre expression from around E10 to E10.5 in Emx1:Cre;Dicer:lox/lox and Nestin:Cre;Dicer:lox/lox mice. The Nestin:Cre strain drove a milder and later inactivation of Dicer as compared to the Emx1:Cre strain. 
Emx1:Cre;Dicer:lox/lox showed overproduction of early-born neurons and a reduced number of Brn1-expressing upper-layer neurons as compared with controls, and the remaining ones were intermingled with Tbr1-expressing deep-layer neurons. Nestin:Cre;Dicer:lox/lox mice had no defects in the production of early-born neurons, but exhibited affected generation and migration of late-born neurons (De Pietri Tonelli et al., 2008;Kawase-Koga et al., 2009). Dicer CKO in post-mitotic neurons of CamKII:Cre;Dicer:loxP/loxP mice caused reduced dendritic branch elaboration, but generated normal cortical layering (Davis et al., 2008), indicating that a late inactivation of Dicer cannot affect layer identity. Altogether, these results show that mature miRNAs are required at different times in corticogenesis to fine-tune cell fate and, depending on the time of Dicer inactivation, different cell types and layers are affected. Unfortunately, these studies did not address the question of whether the translation of key transcription factors of cortical cell fate was affected. miRNAs AND RETINAL HISTOGENESIS Four different Dicer-CKO mouse models have recently made it possible to investigate the effects of global miRNA down-regulation in mouse retinal development (Table 1). Cre-mediated Dicer excision in retinal progenitors resulted in phenotypes of variable severity, likely dependent on the time and the extent of Dicer deletion. Accordingly, when Dicer excision began earlier in retinal development, or when Cre was more uniformly expressed throughout the developing retina, more severe phenotypes were consistently observed. As with Dicer CKO in the developing cortex, a general effect of cell death was observed, to different extents and at different times, in all the retinal CKOs (Damiani et al., 2008;Pinter and Hindges, 2010;Iida et al., 2011;Nowakowski et al., 2013). Chx10-Cre expression exhibits a mosaic pattern and begins before embryonic day 14.5 in progenitors of all retinal layers. Dicer CKO driven by the Chx10-Cre transgene led to decreased electroretinogram (ERG) responses, morphological anomalies, and formation of photoreceptor rosettes at post-natal day 16. This phenotype progressed to more general cellular disorganization and widespread degeneration of retinal cell types as the animals aged (Damiani et al., 2008). αPax6-Cre is expressed in peripheral regions of the developing retina, beginning on embryonic day 10.5. Dicer CKO driven by αPax6-Cre, which inactivated Dicer in a less mature population of retinal progenitors than Chx10-Cre, generated a more severe phenotype, consisting of the abnormal differentiation of retinal cell types. The production of early generated cell types (RGC and horizontal cells) was increased. Interestingly, ganglion cells (GCs) were generated beyond their normal competence window and, probably as a consequence, the Dicer-deleted areas of the retina showed a decrease in later generated cell types (amacrine cells and rod photoreceptors). These results indicate that miRNAs are required for shifts in the competence of retinal progenitors over time (Georgi and Reh, 2010). Dkk3-Cre is ubiquitously expressed in all retinal progenitors beginning on embryonic day 10.5. Dicer CKO by this transgene produced massive death of retinal progenitor cells (RPCs), resulting in microphthalmia and the absence of layers. 
In vitro reaggregation culture of Dicer-CKO retinal cells revealed that cell death and the suppression of proliferation by Dicer inactivation occurred in a cell-autonomous manner (Iida et al., 2011). Such results are consistent with the phenotype observed after early inactivation of Dicer by morpholino microinjection in Xenopus (Decembrini et al., 2008). Rx-Cre is ubiquitously expressed in the developing neuroretina. Dicer CKO by Rx-driven Cre activation caused cell death and a reduction in overall eye size. However, an RGC layer formed and no defects were observed in the formation of the optic disc, which is the exit point for RGC axons from the retina. Interestingly, mutants showed a marked increase in ipsilateral projections, with RGC axons extending outside the optic chiasm or showing aberrant projections, indicating a miRNA role in ensuring correct axon guidance decisions. Notably, these phenotypes were not the result of a mis-patterning of the eye (or the chiasm), suggesting that miRNAs have direct functions in the intracellular processes needed for axon growth and pathfinding (Pinter and Hindges, 2010). Recent observations suggest that distinct miRNAs might be responsible for the cell death observed after Dicer CKO. In Xenopus, the inhibition of miR-24a, which is predicted to target the pro-apoptotic factors caspase-9 and protease-activating factor 1 (apaf1), resulted in increased apoptosis of retinal progenitors and microphthalmia (Walker and Harland, 2009). In mice, the knock-out of miR-124 caused apoptosis of newly differentiated cone photoreceptors (Sanuki et al., 2011). Individual miRNAs controlling retinal cell identity are emerging. miR-204 has an active role in establishing dorsoventral (D/V) polarity of the optic cup of medaka fish. When miR-204 activity was blocked by antago-miR, the expression domain of ventral markers was reduced or absent, whereas the expression domains of the dorsal markers were expanded ventrally. A reciprocal molecular phenotype was observed after miR-204 overexpression. These phenotypes were associated with concomitant up- or down-regulation of olMeis2, which is a target of miR-204 and mediates its effects on D/V eye polarity (Conte et al., 2010). DISTINCT miRNAs AND mRNAs REGULATE THE TIMING OF RETINOGENESIS A defined temporal sequence of gene expression that could explain the chronological order of cell birth in different neuronal lineages was first described in Drosophila (Isshiki et al., 2001). Further studies have confirmed the generality of this strategy, with different sequences of transcription factors being used in different structures of the Drosophila nervous system to generate neuronal diversity, according to a well-defined time schedule (Bayraktar and Doe, 2013;Li et al., 2013;Suzuki et al., 2013). Homologs of key transcription factors defining the temporal identity of Drosophila neuroblasts have now been detected in the developing mammalian retina. One of them, IKAROS family zinc finger 1 (Ikzf1/Ikaros), is a mouse ortholog of hunchback (hb), which is necessary and sufficient to specify early-born neurons in Drosophila. Ikaros is both necessary and sufficient to confer early temporal competence to mouse RPCs. In fact, mis-expression of Ikaros is sufficient to generate early-born neurons at inappropriate times: after viral Ikaros transduction in late RPCs, heterochronic amacrine and horizontal cells were generated in vivo and GCs in cell culture. 
In addition, Ikaros mis-expression caused a reduction in lateborn neurons (bipolar cells) and prevented Müller glia formation (Figure 2A). Consistent with this, Ikaros-deficient retinas exhibited a permanent reduction in most early-born cell types. Cones were not affected by the gain or loss of Ikaros, suggesting that different regulatory mechanisms control the timing of their production (Elliott et al., 2008). These findings indicate that Ikaros is required for progression to a late temporal state. Surprisingly, the timing of Ikaros activation is due to regulated translational repression, because Ikaros mRNA is expressed throughout retinal development, whereas the protein is present only in early RPCs (Figure 2A). Although not currently proven, key mediators of this repression might be miRNAs, as suggested by the similarity of the phenotypes observed after Ikaros mis-expression and Dicer CKO by αPax6-Cre transgene (see above). A central role of Ikaros in determining the temporal fate of neurons in mouse was recently indicated also by a study of cortical development. Ikaros is expressed in progenitor cells of the mouse cerebral cortex at high level during the early stages of neurogenesis and thereafter its expression decreases over time. Sustained Ikaros expression prolonged the period of the generation of deeplayer neurons and delayed the production of late-born neurons. However, there is no direct evidence that Ikaros expression during corticogenesis is regulated at the post-transcriptional level as in the developing retina. In fact, Ikaros mRNA level is high at early stages and decreases by over 80% from embryonic E10.5 to E15.5. A possible role of miRNA in mediating the decrease of Ikaros mRNA level during cortical development was discussed (Alsiö and Tarchini, 2013). Distinct miRNAs that can rescue Pax6-Cre driven Dicer CKO have recently been found. These miRNAs, let7, miR-9, and miR-125, are expressed in early retinal progenitors and serve as key regulators of the early to late developmental transition in retinal progenitors. When down-regulated, they cause an increase in GCs, whereas their up-regulation accelerates retinogenesis, increasing the ratio of late photoreceptor cells (rods) at the expense of early neurons (ganglion and horizontal cells). Let7, miR-9, and miR-125 target Protogenin (Prtg) and Lin-28b, two proteins that are crucial for maintaining an early competence state of RPCs. In fact, overexpression of Prtg and Lin-28b from E16 caused an extra number of heterochronic GCs that were generated at late times in retinogenesis (Figure 2A). Ikaros and Lin-28/Prtg seem to constitute two parallel pathways for the control of developmental timing, because let7, miR-9, and miR-125 do not appear to directly regulate the expression of Ikaros. However, there are conserved binding sites for miR-125 in the 3 UTR of two members of the Ikaros family, Ikzf3 and Ikzf5. These two genes show small increases of expression in the Dicer-CKO retina and the possibility that they play a role in retinal development has to be considered (La Torre et al., 2013). Finally, key transcription factors of late retinal cell identity that are regulated at the translational level have been described in Xenopus. Xotx5b is the Xenopus homolog of the mammalian homeobox gene Crx and specifies photoreceptor identity. 
Xotx2 and Xvsx1 are the Xenopus counterparts of the mammalian Otx2 and Vsx2 homeobox genes, respectively, and support the differentiation of bipolar cells in Xenopus (Viczian et al., 2003;D'Autilia et al., 2006;Decembrini et al., 2006). Xotx5b, Xvsx1, and Xotx2 are transcribed from the early stages of retinogenesis in multipotent progenitor cells, but their translation is inhibited until later stages, when the generation of photoreceptor and bipolar cells begins. This translational inhibition is due to signals in the 3′ UTR and is controlled by progression of the cell cycle. We have identified a set of four miRNAs that inhibit the translation of Xvsx1 and Xotx2 by binding to their 3′ UTR. The four miRNAs (miR-129, miR-155, miR-214, and miR-222) are down-regulated as retinal development proceeds. Interestingly, their expression is decreased in early progenitors by the inhibition of the Shh pathway, which has the effect of lengthening the cell cycle, and is increased in progenitors forced into the S-phase. These treatments, respectively, accelerate and block the translation of Xvsx1 and Xotx2. We have proposed that cell cycle length, which is known to increase as retinogenesis progresses (Alexiades and Cepko, 1996), provides an intrinsic timer that regulates cell birth through miRNA activity (Decembrini et al., 2009;Pitto and Cremisi, 2010; Figure 2B). Shh is a possible mediator of this process, as it regulates the cell cycle length in the retina (Locker et al., 2006).
FIGURE 2 | The temporal identity of retinal progenitor cells (RPCs) is defined through the translational regulation of key proteins. (A) In mice, Ikaros, Prtg, and Lin-28b are transcribed throughout retinogenesis, but are translated only in early RPCs. While the molecular nature of the inhibitor of Ikaros translation ("?" label) is unknown, Prtg and Lin-28b are targeted by let-7, miR-9, and miR-125. When the protein expression of Ikaros, or Prtg and Lin-28b, is forced throughout retinogenesis, heterochronic neurons of the early-born type (HC, horizontal cells; AC, amacrine cells; GC, ganglion cells) are generated at late times in retinogenesis (Elliott et al., 2008;La Torre et al., 2013). CP, cone photoreceptor; RP, rod photoreceptor; BC, bipolar cell; MG, Müller glia. (B) In Xenopus, bipolar fate is driven by the homeobox Xvsx1 and Xotx2 genes, which are transcribed in RPCs from early developmental stages (15 and 25, respectively), but are translated only from late stages 37 and 38-39, respectively. A set of four cell cycle-regulated miRNAs (miR-129, miR-155, miR-214, and miR-222, in red) bind the 3′ UTR of Xvsx1 and Xotx2, inhibiting their translation in early RPCs. In normal Xenopus retinogenesis, the duration of the cell cycle (indicated by dashed circles) inversely correlates with the expression of the four miRNAs. Lengthening the cell cycle by treatment with the Shh signaling inhibitor cyclopamine (Shh inhibition) down-regulates this set of miRNAs, leads to earlier translation of Xvsx1 and Xotx2 and causes the generation of heterochronic bipolar cells. Antago-miR lipofection in early RPCs inhibits the activity of the four miRNAs. Compared to cyclopamine treatment, the lipofection exerts similar effects on the translation of Xvsx1 and Xotx2, and on the generation of bipolar cells, but does not affect progression of the cell cycle (Decembrini et al., 2009). This favors the hypothesis that cell cycle progression may affect neuronal fate through the set of four miRNAs. In these experiments, the effect of miRNAs on Müller glia was not examined. CP, cone photoreceptor; RP, rod photoreceptor; HC, horizontal cell; BC, bipolar cell; AC, amacrine cell; GC, ganglion cell.
CONCLUSION The generation of distinct types of neurons in the cerebral cortex and neural retina relies on the ordered activation of cell fate genes over time. Studies in Xenopus and mouse retinal development described key proteins of neuronal identity whose expression is regulated at the translational level. Distinct miRNAs target these proteins and are crucial for early or late competence of progenitor cells in retinogenesis. Although no specific miRNA has been found to control the translation of key factors of cell fate in the cortex, the involvement of miRNAs in the control of the competence of cortical progenitor cells (CPCs) is strongly suggested by the results of Dicer down-regulation in CKO mice. In both the retina and cortex, expression of miRNAs is necessary for the transition from early to late development. However, in Xenopus retinogenesis there is evidence that distinct miRNAs must also be down-regulated to generate the latest neuron types. An intriguing hypothesis is that the multipotency of early progenitor cells results from the transcription of mRNAs that serve to specify different neuronal identities, but are repressed by miRNAs. The release from the translational inhibition of distinct types of such mRNAs might determine what type of neuron is generated, and when. In Xenopus, release from the translational inhibition of Xvsx1 and Xotx2 is due to cell cycle lengthening, which causes the down-regulation of the four miRNAs targeting Xvsx1 and Xotx2. A similar mechanism, which makes use of cell-cycle-dependent miRNAs, might provide an intrinsic timer to regulate the cell birth of different types of neurons (Figure 2B). Shh, which regulates the cell cycle length in both the cortex and retina, might play a key role in this regard, and its function in temporally regulated aspects of retinogenesis and corticogenesis warrants further study.
Stratigraphy and correlation of Lower-Middle Jurassic sediments in SE West-Siberian petroleum-bearing province Based on the investigation of Lower-Middle Jurassic sediments in SW Siberia, isochron reference clay and coal horizons confined to the roof of chronostratigraphic units were identified; bed series from pre-Jurassic formation bottom to U10 coal layer were divided; structural features and distribution of regional cyclites J10-J17 in the sequences and throughout the area were determined. Introduction The studied area is located within the south-east of Tomsk Oblast, whereas, in terms of tectonic relationship -within the SE West-Siberian platform, to the north of Kalgach meso-block, including Boltnoe, Kazan, Ponomarev, Rogalev, West-Somov and Novosomol uplifting (figure 1). Hydrocarbon potential of this territory is correlated with Upper Jurassic sediments, where Kazan and Boltnoe oil fields are being developed. In recent years special consideration is being given to Early Middle Jurassic rocks which are potentially hydrocarbon bearing, in view of limited oil flow recovery and flow properties observed throughout thickness sequence. The major target of the following research includes dismembering and correlation of the Lower -Middle Jurassic sediments in the sequences, as well as profile planning. Initial data Research data included the following: structural map of reflecting horizon F 2 (Jurassic bottom, according to V Vilkin, 2010); spontaneous potential (SP) diagrams, apparent resistivity, (AR), induction resistivity (IR), natural radioactivity (NR), and neutron loggings and core samples. Pre-Jurassic formation surface embraces dismembered relief and numerous tectonic deformations (figure 1). Currently, 19 drilled wells have entered pre-Jurassic formation within the studied area. However, these wells are wide-spaced which hampers the possible correlation of geological sections and further investigation of rock distribution in profile plan. Lower-Middle Jurassic sediments are marked by vertical and horizontal inhomogeneity. Due to the lack of paleontological data, layering is based on the results of geologic-geophysical surveys, spore-pollen analysis, and analogy with contemporaneous rocks containing marine organism remains. The following researchers gave a detailed stratigraphic description of Lower and Middle Jurassic formations in SE West Siberia: F.G. Gurari [1,2,3,4,6,8,9,10,11,12]. Lower Jurassic sediments were tapped in SE West Siberian petroleum province within Ust-Tym, Nurolsk, Barchar and other mega-depressions. However, Middle Jurassic sediments (J 15 and upwards) have been investigated to a lesser extent. Dismembering and correlation of Lower-Middle Jurassic sediments (from pre-Jurassic formation bottom to coal layer U 10 ) were based on the principle of sedimentation cycle [7,5], including the following: 1) unified sedimentation sequence, not involving perturbation and concealing of large stratigraphic unit sections, has been observed; 2) horizon markers showing isochrohism features within a relatively limited area and welldefined field-geophysical characteristics have been identified in the sequences. Results and discussion Horizon markers in studied sequences are defined as coal layers (U 10 , U 11 , U 12 , U 13 , U 14 , U 15 ), argillaceous formations (Togur suite and middle subsuite of Urman suite) and Jurassic sediments with crust weathering rocks -Paleozoic sediments contact. 
Chronostratigraphic units in correlating the sequences are cyclites wherein the top are either underlying coal marker layers of continental sedimentation and / or argillaceous formations formed in marine fresh lake basins. Basically, regional cyclites are composed of small priority cyclites (zonal and local) which exhibit changing composition orientation from layer to layer and complex texture, i.e. these cyclites are composed of rock layers or beds with gradual and/or sharp boundaries. Accordingly, it should be noted that there is a difference in the two terms: "cyclite" and "layer". The latter is homogeneous three-dimension geological body, bounded above and below by subparallel surfaces. Cycliteassociation of at least two layers (beds) recurring in time and space and being the composition reflection of a specific sedimentation cycle. Relating cyclites as chronostratigraphic units of different priorities significantly improves reliable correlation of sedimentary formations, especially for changing lateral continental formations, whereas, reality, sand layers formed at different periods are correlated. Indexing regional cyclites (as described in previous research [5] ) is based on the number of coal layer underlying corresponding formation roof. Genetic relation of sand and coal layers within cyclites can be explained by the facies-phase and facies-cycle sedimentation models: rock unit alteration in sequences, consistent with two phases-tectonic activities in the bottom and tectonic quiescence in the roof. It is well-known that regional continuous coal layers formed during a period of maximum tectonic quiescence, relief leveling and minimum water environment dynamic are relevant to the completion of large sedimentation cycles, which, in its turn, indicates the interstratified rock system of priority regional cyclites. Integrating all stratigraphic research data of previous years, a conceptually updated stratigraphic chart of Lower-Middle Jurassic sediments (figure 2) was compiled which included the alignment and correlation of the sequences (figures 3 and 4). The flattening line is the roof of U 10 coal layer which is considered to be the regional horizon marker in SE West Siberia. According to palynology data U 10 layer roof is related to Aalenian sediments [3] and is the interface between the lower and middle subsuite of Tumen suite, as well as Vimian stratigraphic horizon roof [6,2]. Logging response show that U 10 layer exhibits low values, gamma ray and neutron logging curves, high resistance, low electroconductivity and frequent deep reversed-polarised anomaly. Layer thickness changes from 3 to 14 m, moreover, this layer delaminates and is separated by clay bands. The regional cyclite J 10 is, in some cases, distinctly or conventionally subdivided into zonal J 10 в (upper) and J 10 н (lower) cyclites, whereas, further they are divided into local cyclites J 10 1 , J 10 2 , J 10 3 and J 10 4 . More or less, practically in all sequences of the local cyclite bottoms, especially J 10 4 sandstones, frequently, with carbonate cement, are deposited (pertaining to core sample from Ponomarev well 1P and increased neutron logging values). Coal layer U 11 thickness is 1m., increasing up to 5m. in Kazan 16R and Somov 145R wells, while in Ponomarev wells 1P and 2P from 3 to 4m. 
Regional cyclite J 11 , as described above, is divided into 4 local cyclites, in the bottom of which sandstones or aleurolites are deposited, while in the top, thin coal interlayers. Cyclite J 11 thickness ranges from 11 to 20 m, increasing up to 22-31 m within the Ponomarev and Somov areas, while in Boltnoe well 3 it is only 6 m. Coal layer U 12 , with a thickness of 2-4 m, is rather distinct in the sequences, with minimum values of gamma ray and neutron logging, high resistance and low electroconductivity. Regional cyclite J 12 has inconsistent thickness, from 14 to 33 m, and is divided into local cyclites, where sandstones, and even gravelstones, are deposited in the bottom (according to core-log data correlation from Zapadno-Somov 9P and Ponomarev 1P wells) and distinct coal interlayers in the top (according to core-log data correlation). Regional cyclite J 13 , separated from overlying coal layer U 13 sediments and having a thickness of 1-3 m, embraces a sequence of sand (2-4 m), clay (1-3 m) and coal (1-1.5 m) interlayers. Due to the existing differential lithological composition, the local cyclites J 13 1 , J 13 2 , J 13 3 and J 13 4 are distinctly identified within the sequences. The deep-seated anomaly is confined to cyclite J 13 4 . The core samples from Zapadno-Somov 11R and Ponomarev 2P wells exhibited coarse-grained sandstones, while the core samples from the Kazan 8R well showed gravelstones and sandstones with pebbles. There are sandstones, often with carbonate cement, in the bottom of the overlying local cyclites, and clay and coal in the top. The total thickness of regional cyclite J 13 ranges from 20 to 37 m, whereas in the cross-sections of Somov 145R and Boltnoe 2P wells the cyclite J 13 rocks overlie crust weathering formations, having reduced thicknesses of 15 m and 10 m, respectively. The bottom of cyclite J 13 is the lower interface of the upper substage of the Aalenian sequence, lower subsuite of Tumen suite and Vimian stratigraphic horizon (figure 2). The coal layer U 14 is often split into two layers (Kazan 9R, 2R, 16R wells; Zapadno-Somov 10P, 9P and 11P wells; Ponomarev 1P well) having a thickness of 1-4 m. Regional cyclite J 14 is distinctly different from the above-described cyclite J 13 in its argillaceous composition, and, like the overlying cyclites, regional cyclite J 14 is divided into local ones. However, in this case, sand sediments are mainly confined to the bottom (J 13 4 ), whereas core samples from Zapadno-Somov 9R and Kazan 8R, 9R wells showed the presence of conglomerates, gravelites and sandstone with gravel, while core samples from Novo-Somov 1P, Ponomarev 2P and Zapadno-Somov 11P wells showed medium- to fine-grained sandstones. Fine-grained sandstones, and often aleurolites, are deposited in the bottom of local cyclites J 14 3 , J 14 2 , J 14 1 . The roofs of these cyclites are apparent on interpreted logs and were verified by coal interlayers detected in core samples (Zapadno-Somov 9P, 10P, 11P, Kazan 16R, 18R and Ponomarev 2P wells). However, the regional cyclite J 14 has a rather distinct sand sequence as found in some wells, i.e. Kazan wells 18R, 8R, 3R, 9R, 2R and 15R. The layers in these wells have a rather pronounced SP amplitude and are confined to cyclites J 14 1 , J 14 2 and J 14 3 , whereas laminating coal interlayers are not well-pronounced. Total thickness of cyclite J 14 is 26-38 m, while in Kazan 3R, 8R and 15R wells it decreases to 10-21 m, as eroded rocks lie on argillaceous-siliceous crust weathering sediments. 
Stratigraphically, regional cyclite J 14 is correlated to the lower substage of the upper subsuite of the Salatian suite and the Ladinian horizon (figure 2). Regional cyclite J 15 is associated with the upper substage of the Toarcian stage, lower subsuite of the Salatian suite and Nadojakhian horizon, while its roof lies along the Lower and Middle Jurassic series boundary (figure 2). Interlaying rocks refer to the Urman suite. The upper Urman subsuite has been identified as J 16 layers [7,8,13,23]. The authors propose to consider this sequence as regional cyclite J 16 , subdivided into local cyclites J 16 4 , J 16 3 , J 16 2 and J 16 1 ; the roof of the latter cyclite (according to cyclite identification classification [7]) is marine argillite sediments of the Togur suite. This structure is well-defined on interpreted logs and investigated core samples from cross-sections of Kazan 16R, Zapadno-Somov 10P and Ponomarev 2P wells, and probably the Ponomarev 1P well. The sand section of regional cyclite J 16 (according to the stratigraphic chart) is identified as the upper substage of the Pliensbachian stage, upper subsuite of the Urman suite and Sharapavian horizon (figure 2). The thickness of regional cyclite J 16 in the studied territory is 25-46 m. The underlying sequence is identified as regional cyclite J 17 based on the description of investigated core samples from Ponomarev 2P and Kazan 16R well cross-sections. According to the core sample descriptions, these sediments are subdivided into local cyclites, including aleurolites in the bottom and argillites in the top of each cyclite. The upper argillite sequence is identified as the lower substage of the Pliensbachian stage and is related to the middle subsuite of the Urman suite and Levinian horizon. The core sample from Kazan 16R well at a depth of 2930 m showed sandstone, and along its bottom the lower boundary of local cyclite J 17 1 was defined. Underlying Hettangian-Sinemurian rocks are probably related to the lower subsuite of the Urman suite and Zimnian horizon. The total thickness of the regional cyclite J 17 in Ponomarev 2P is 46 m and in Kazan 16R is 19 m.
Unusual continuous intra-abdominal spread of primary testicular lymphoma along the spermatic cord and gonadal vessels: Report of 2 cases Primary testicular lymphoma (PTL) is an uncommon neoplasm (<5% of all testicular tumors). Testicular lymphoma presents as a homogeneous mass, hyperintense on T1-weighted images, and iso-to-hypointense on T2-weighted images with strong diffusion restriction and homogeneous contrast enhancement. Seminoma testis, a close differential due to T2 hypointensity and homogeneity, can be differentiated by its lower diffusion restriction and younger age group. Involvement of the spermatic cord and epididymis is rare with seminoma. Intra-abdominal extension along the gonadal vein has not been reported. PTL disseminates to extranodal sites. However, extension of PTL along the spermatic cord and gonadal vein up to the inferior vena cava is a rare phenomenon. We report 2 cases of PTL with involvement of the epididymis and spermatic cord and further continuous extension along the gonadal vein up to the inferior vena cava. These findings are very rare and when present may help to differentiate testicular lymphoma from other testicular tumors. Introduction Primary testicular lymphoma (PTL) is a very uncommon neoplasm constituting <5% of all testicular tumors and 1%-2% of non-Hodgkin lymphomas [1,2]. PTL has a tendency to disseminate to other extranodal sites such as contralateral testis, central nervous system, lung, pleura, Waldeyer ring, skin, and soft tissues [3,4]. However, continuous extension of the PTL along the spermatic cord and gonadal veins in the absence of other deposits or manifestations is a very rare phenomenon. We report 2 such cases of PTL with extension along epididymis and spermatic cord into the inguinal canal with further continuous extension along the gonadal vessels up to inferior vena cava. These imaging findings, when present, may help to differentiate testicular lymphoma from other testicular tumors. Case report Case 1 An 85-year-old man presented with a history of gradually increasing painless swelling in the right hemiscrotum for the past 4 months. He also complained of vague abdominal discomfort and intermittent pain. On examination, the scrotum was visibly enlarged with normal overlying skin. On palpation, there was a firm to hard mass in the right side of the scrotal sac measuring approximately 12 × 10 cm. It was extending superiorly into the right inguinal canal along the spermatic cord. The right testis was not palpable separately from the mass. The left testis appeared normal in size. Magnetic resonance imaging examination of abdomen and pelvis was done. On magnetic resonance imaging examination, a large heterogeneous signal intensity mass lesion was seen in the scrotum arising from the right testis and completely replacing it (Figure 1 A to I). The lesion measured approximately 7 × 7.5 × 5 cm in size. It was T1 hypointense, T2 iso-to-hypointense, short tau inversion recovery (STIR) hyperintense, and showed strong diffusion restriction. Mild homogeneous postcontrast enhancement was seen within the mass. The mass was extending along the right epididymis and right spermatic cord into the right inguinal canal and further extending into the abdomen along the right gonadal vein up to its (venous) drainage into the inferior vena cava. No significant iliac, para-aortic, or mesenteric lymph nodes were detected. Abdominal visceral organs were normal. The overall picture was suggestive of a malignant neoplasm of the right testis with extension along the spermatic cord and right gonadal vein; diagnostic possibilities of seminoma testis and PTL were considered. 
The patient underwent right inguinal orchiectomy. The histopathology examination revealed diffuse large B cell lymphoma of testis. Further staging workup revealed no other deposits or lymphadenopathy. Contrast computed tomography (CT) abdomen study (part of staging workup) also confirmed homogeneous mildly enhancing soft tissue attenuation mass extending along right gonadal vein with no significant retroperitoneal or mesenteric lymphadenopathy (Figure 1 J to N). Final diagnosis was diffuse large B cell non-Hodgkin lymphoma stage IIAE. The patient was offered chemotherapy and radiotherapy. Case 2 A 68-year-old man presented with a history of pain and redness over the right hemiscrotum since 2 months which was gradually increasing in size. Ultrasonographic examination in a local hospital revealed left inguinal hydrocele and right testicular hypoechoic lesion extending into the right spermatic cord. Possibilities of chronic orchitis and testicular neoplasm were considered. The patient was operated, and right orchiectomy with eversion of left tunica vaginalis sac was done. The histopathology examination revealed diffuse large B cell lymphoma of testis. The patient was referred to our hospital for further workup and management. Ultrasonographic scan of neck and contrast CT scan of thorax, abdomen, and pelvis were done as a part of staging workup, which revealed homogeneous well-defined cordlike mass lesion along right gonadal vein extending from the right inguinal canal below to the junction of right gonadal vein and inferior vena cava superiorly. No solid visceral organ or bowel involvement was detected. No retroperitoneal or mesenteric lymph nodes or ascites were detected. No obvious lymphoma deposits were detected above diaphragm ( Figure 2). Final diagnosis was diffuse large B cell non-Hodgkin lymphoma stage II. Chemotherapy with CHOP regimen (Cyclophosphamide, doxorubicin, vincristine, Prednisolone) was planned for the patient. Discussion PTL is a very uncommon neoplasm constituting <5% of all testicular tumors. For lymphoma per se also, it is an uncommon site of involvement, representing 1%-2% of all non-Hodgkin lymphomas. PTL occurs mainly in patients older than 50 years, and almost 85% of all PTL patients are older than 60 years [1,2]. Most patients present with early localized disease (stage I or II). Radiologically, the closest differential diagnosis of PTL is seminoma, as it also presents as homogeneous testicular mass usually replacing the testis. Typical features of testicular lymphoma are homogeneous mass lesion replacing the testis, hyperintense on T1-weighted images, and isointense to hypointense on T2-weighted images. It shows moderate-to-strong diffusion restriction and homogeneous contrast enhancement. Calcifications and hemorrhages are rare. Intra-abdominal lymphadenopathy is rare with PTL. However, in occult or manifest generalized lymphoma, testicular involvement is usually associated with intraabdominal solid organ or lymph node involvement in disease course. Although seminoma testis is a close differential, owing to T2 hypointensity and homogeneous texture, it can be differentiated by its lower diffusion restriction and presentation in a younger patient subset. Involvement of the spermatic cord and epididymis are also rare with seminoma, whereas intraabdominal extension along the gonadal vein has not been reported so far. 
CT scan of abdomen and pelvis is usually indicated in testicular tumors for staging workup as it is more sensitive in detecting retroperitoneal, para-aortic, and mesenteric lymph nodes and also solid organ metastasis. In known lymphoma cases, it can detect other visceral organ infiltration or lymphoma deposits in abdomen. PTL has a tendency to disseminate to other extra-nodal sites such as contralateral testes, central nervous system, lung, pleura, Waldeyer ring, skin, and soft tissues over the course of disease [3,4]. However, continuous extension of the PTL along the spermatic cord and gonadal vein right up to the inferior vena cava in the absence of other deposits or manifestations is a very rare phenomenon. We have described 2 such cases in this article. Fig. 1 e An 85-year-old man: (A) Sagittal T2-weighted images of the scrotum -a large heterogeneous T2 isointense-tohypointense mass lesion seen in the scrotum arising from the right testis and completely replacing it (curved black arrow); (B) the mass was extending along the right epididymis (white arrow) and right spermatic cord (black arrow); and (C) continuous extension of the mass along up to inguinal ring (black arrow). (D) Coronal STIR image of the scrotum of an 85year-old man: a large heterogeneous STIR hyperintense mass lesion seen in the scrotum arising from the right testis and completely replacing it (black arrow). Magnetic resonance images of the scrotum of the 85-year-old man: (E) coronal STIR image of abdomen: large STIR hyperintense mass lesion is seen extending obliquely from the right inguinal region to the right para-aortic region (black arrow); (F) sagittal T2-weighted image of the abdomen: large T2 isointense-to-hypointense mass lesion is seen extending obliquely from the right inguinal region to the right para-aortic region (black arrow); (G) sagittal T2-weighted pasted image shows continuous extension of the mass lesion from the right testis and epididymis, along the right spermatic cord to the abdomen (black arrow); (H) axial diffusion-weighted image shows increased signals from the mass suggestive of strong diffusion restriction (black arrow); (I) corresponding axial ADC map shows hypointensity within the mass suggestive of strong diffusion restriction (black arrow). Contrast-enhanced CT of the abdomen of the 85year-old man: (J) axial postcontrast CT image, at the level of external iliac vessels, shows large well-defined homogeneous mass lesion along the right external iliac vessels (black arrow); (K) axial postcontrast CT image, at the level of internal iliac vessels, shows large well-defined homogeneous mass lesion encasing the right gonadal vessels and partly encasing the right external iliac artery (black arrow); (L) axial postcontrast CT image, at the lower para-aortic level, shows large welldefined homogeneous mass lesion along the right gonadal vessels (black arrow); (M) coronal postcontrast CT image shows continuous extension of the homogeneous mass lesion from the right external iliac vessels to the para-aortic region (black arrow); and (N) oblique sagittal postcontrast CT image shows continuous extension of the homogeneous mass lesion from the right inguinal region to the para-aortic region (black arrow). A similar single case has been reported in the literature by Scalcione et al. [5]. They have described similar positron emission tomography CT abdomen findings in a case of carcinoma prostate, which on orchiectomy was proven to be a second primary B cell lymphoma. 
Chemotherapy and radiotherapy have a well-established, widely accepted role in the treatment of lymphoma and are associated with a good prognosis. However, orchiectomy is usually preferred in the initial management of suspected testicular lymphoma, as it provides a definite histopathological tissue diagnosis. In addition, testicular lymphomas are considered less responsive to chemotherapy because the blood-testis barrier prevents peak concentrations of chemotherapeutic agents within the testis.

Conclusion
We report 2 cases of PTL with extension along the epididymis and spermatic cord into the inguinal canal, with further continuous extension along the gonadal vein up to the inferior vena cava. These specific imaging findings are very rare and, when present, may help to differentiate testicular lymphoma from other testicular tumors on imaging.
Enhancement of Thermostability of Aspergillus flavus Urate Oxidase by Immobilization on the Ni-Based Magnetic Metal–Organic Framework

The improvement in the enzyme stability of Aspergillus flavus urate oxidase (Uox) was attained by immobilizing it on the surface of a Ni-based magnetic metal–organic framework (NimMOF) nanomaterial; the physicochemical properties of NimMOF and its application as an enzyme-stabilizing support were evaluated, which revealed a significant improvement in Uox stability upon immobilization on NimMOF (Uox@NimMOF). It was affirmed that while the free Uox enzyme lost almost all of its activity at ~40–45 °C, the immobilized Uox@NimMOF retained around 60% of its original activity, even retaining significant activity at 70 °C. The activation energy (Ea) of the enzyme was calculated to be ~58.81 kJ mol−1 after stabilization, which is approximately half that of the naked Uox enzyme. Furthermore, extrinsic fluorescence spectroscopy showed that the hydrophobic areas of the Uox enzyme can be covered by the MOF nanomaterial, and the immobilized enzyme was active over a broad range of pH and temperatures, which bodes well for the thermal and long-term stability of the immobilized Uox on NimMOF.

Introduction
Urate oxidase (Uox; EC 1.7.3.3), also known as uricase, catalyzes the oxidation of uric acid to allantoin and is a significant enzyme with several therapeutic and diagnostic applications [1]. In biological fluids such as blood, the solubility of uric acid is low, and imbalances between the production and elimination of uric acid can result in the buildup of excessive concentrations of uric acid in the blood, which could result in gout and kidney disorders [2]. In this regard, Uox from external sources is commonly used to correct imbalances in uric acid levels in the blood [3,4]. In addition, the purified Uox enzyme is used as a diagnostic agent for determining the levels of uric acid in biological samples [5,6]. The A. flavus Uox, a homotetrameric enzyme, is the predominant form of this enzyme and is one of the most active Uox enzymes. Each subunit comprises a single polypeptide chain with a molecular weight of 34 kDa. The Uox enzyme encompasses four active sites, which are located between these subunits [7,8]. Moreover, it has been shown that the Uox enzyme does not require any cofactors or metal ions for its function [9].

Preparation of the Recombinant Uox
The A. flavus Uox coding sequence was synthesized and cloned into the pGEM-B1 plasmid by the Bioneer Company (Daejeon, South Korea). After synthesis of the A. flavus Uox gene, it was subcloned into the pET-28a(+) expression vector, under the control of the T7 promoter, flanked by NcoI and XhoI sites. The resulting construct (called pET-28a-Uox) was then transformed into E. coli BL21 (DE3). The coding sequence of the Uox was confirmed by DNA sequencing. For the production of recombinant Uox, the transformed E. coli BL21 (DE3) cells were induced with IPTG at a final concentration of 1 mM at 37 °C for 5 h. The induced cells containing the Uox enzyme were collected, resuspended in 50 mM Tris-HCl (pH 8.5) and lysed by sonication on ice; sonication cycles of 30 s (with 30-s intervals) were repeated until most E. coli cells were lysed. The cell lysate supernatant was collected by centrifugation at 13,000 rpm at 4 °C for 20 min. Subsequently, the His-tagged Uox enzyme was purified on a Ni-NTA Sepharose metal affinity column according to the manufacturer's directions at 4 °C.
Briefly, the cell lysate was supplemented with 500 mM NaCl and loaded on the Ni-NTA column that had been pre-equilibrated with the protein binding buffer (50 mM Tris-HCl, 500 mM NaCl, pH 8.0). The column was washed several times using the same buffer until the absorbance reached the baseline. Then, the Uox enzyme was eluted in protein binding buffer supplemented with 500 mM imidazole. The purity of the prepared Uox was assessed using 12% SDS-PAGE, and the purified Uox enzyme was finally concentrated using an Amicon centrifugal ultrafiltration column (10 kDa cut-off) (Sigma-Aldrich Company, Darmstadt, Germany); protein concentrations were determined using a BCA protein determination kit.

Synthesis of Ni-Based Magnetic MOF (NimMOF) Nanomaterial
First, superparamagnetic Fe3O4 NPs were produced using a co-precipitation approach for the fabrication of the magnetic MOF (NimMOF) core-shell nanomaterial [51]. Briefly, under a N2 stream, 100 mL of FeCl2 aqueous solution (0.5 M) was added to the same volume of 1.0 M FeCl3. After stirring for around 40 min, 50 mL of sodium hydroxide (6.0 M) was added dropwise to the prepared solution, and the mixture was rapidly agitated for about 2 h at 80 °C until a dark/black precipitate appeared. The Fe3O4 NPs were then gathered using an external magnetic field and rinsed repeatedly with H2O. Finally, the magnetic Fe3O4 particles were collected and dried at 65 °C in an oven. In the next stage, a layer-by-layer synthesis technique was employed to create the magnetic MOF core-shell nanoparticles [52,53]. In a typical procedure, 300 mg of Fe3O4 NPs were mixed vigorously for five hours at room temperature with 15 mL of an ethanolic thioglycolic acid solution (0.6 mM). After being surface modified with thioglycolic acid, the Fe3O4 NPs were captured using an external magnetic field and washed repeatedly with distilled water and ethanol, respectively. Next, 200 mg of functionalized Fe3O4 NPs were poured into 20 mL of an ethanolic NiCl2 solution (10 mM) and stirred under ultrasound for 20 min until a homogeneous suspension was achieved. The precipitates were then collected using an external magnetic field and washed three times with ethanol. Thereafter, the precipitates were mixed in 25 mL of an ethanolic solution of 1,2,4,5-benzenetetracarboxylic acid (10 mM) at 70 °C for 30 min. Using a magnetic field, the magnetic NPs were retrieved and washed with ethanol. After 20 cycles of adding the NiCl2 and 1,2,4,5-benzenetetracarboxylic acid solutions, the final NimMOF core-shell nanomaterials were produced. Finally, the NimMOFs were washed with ethanol, collected using a strong magnet and dried for 8 h under vacuum at 70 °C.

Characterization of the Synthesized NimMOF
FTIR spectra of the synthesized NimMOF and Fe3O4 NPs were recorded on a Fourier transform spectrometer (Bruker TENSOR 27, Billerica, MA, USA). Moreover, the crystalline structure of NimMOF and Fe3O4 NPs was assessed by the XRD technique (Bruker Advance-D8 instrument, CuKα radiation). SEM (S-4800 instrument, Hitachi, Japan), operated at 10 kV and 100 mA, was used for morphological observation of the synthesized NPs.

Measurement of the Uox Activity
Uox activity was measured spectrophotometrically based on established procedures [54]. Uox converts uric acid to allantoin, which is monitored by observing a decrease in absorbance at 293 nm; the measurement of the Uox activity is based on this reduction in absorbance.
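The conversion from such an absorbance trace to activity units can be sketched as follows; the A293 readings, the extinction coefficient, the path length and the volumes in this example are assumptions for illustration only, not values taken from the protocol above.

```python
import numpy as np

# Hypothetical A293 readings over a 5-min assay window (illustrative values only).
time_min = np.array([0, 1, 2, 3, 4, 5], dtype=float)
a293 = np.array([0.620, 0.571, 0.523, 0.476, 0.430, 0.385])

# Linear fit of the trace gives the rate of absorbance decrease per minute.
slope_per_min, _ = np.polyfit(time_min, a293, 1)
delta_a_per_min = -slope_per_min

# Beer-Lambert conversion to micromoles of uric acid oxidized per minute.
EPSILON_MM = 12.3      # assumed extinction coefficient of uric acid at 293 nm (mM^-1 cm^-1)
PATH_CM = 1.0          # assumed cuvette path length (cm)
TOTAL_VOL_ML = 0.757   # assumed total reaction volume (mL)
ENZYME_VOL_ML = 0.040  # assumed enzyme volume added (mL)

rate_mM_per_min = delta_a_per_min / (EPSILON_MM * PATH_CM)
umol_per_min = rate_mM_per_min * TOTAL_VOL_ML      # 1 mM in 1 mL corresponds to 1 umol
activity_U_per_mL = umol_per_min / ENZYME_VOL_ML   # 1 U = 1 umol uric acid oxidized per min

print(f"dA293/min = {delta_a_per_min:.4f}")
print(f"Estimated activity ~ {activity_U_per_mL:.3f} U per mL of enzyme solution")
```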
The assay reactions were carried out at room temperature (25 °C), and each reaction mixture (717 µL) consisted of boric acid buffer (20 mM, pH 8.5) and uric acid (48 µM). After adding 40 µL of the Uox enzyme to the reaction, the change in the absorbance of uric acid (293 nm) was recorded for 5 min.

Immobilization of Uox on NimMOF
For the preparation of Uox@NimMOF (i.e., Uox immobilized on the NimMOF material), the NimMOFs synthesized in the previous steps were rinsed three times with Tris-HCl buffer (50 mM, pH 8.0). Then, 200 µL of the enzyme solution (150 µg/mL) was added to 0.5 mg of NimMOF nanoparticles, and this mixture was kept at different temperatures (4 and 25 °C). Afterward, nanoparticle samples were collected at 0, 15, 30, 45, and 60 min (15-min intervals) post-incubation using a magnet and washed three times with 50 mM Tris-HCl buffer. For each sample, the activity of the remaining Uox in the supernatant and in the MOF pellet was measured.

Analysis of the Optimum Temperature for Uox Activity
The optimum temperatures of both the naked and the immobilized Uox (Uox@NimMOF) were determined from 10 to 70 °C (with 5 °C intervals) based on the Uox activity assay described in the previous sections. For these measurements, the substrate solution containing 20 mM boric acid (pH 8.5) and 48 µM uric acid was pre-heated for approximately 5 min at the various temperatures. Then, the enzyme solution was added to this pre-equilibrated reaction solution to perform the activity assays.

Thermal Inactivation and Thermal Stability Studies
Thermal inactivation studies were performed to evaluate the stability of the enzymatic function of both Uox and Uox@NimMOF at elevated temperatures. Therefore, both the control enzyme (138 µg/mL Uox) and Uox@NimMOF (containing 138 µg/mL of protein) solutions were incubated at 35 to 55 °C (with 5 °C intervals) for 5 min. Afterward, the samples were placed on ice for 2 min for a quick cool-down and were then used in the Uox activity assay as described before. The Uox activity before the heat treatment was taken as 100%. Furthermore, in the thermal inactivation studies, for the calculation of the inactivation rate constant (kinact) and half-life (T1/2) of both free Uox and Uox@NimMOF, plots of the natural (Napierian) logarithm of the remaining enzyme activity against incubation time were first drawn; kinact was obtained from the slope of these plots, and the half-life was calculated as T1/2 = ln 2 (0.693)/kinact [55,56]. Thermal stability studies were conducted at 45 °C for both the naked and immobilized enzymes. The enzyme solutions (at the same concentration) were incubated at 45 °C, and the activity of the treated enzyme was assayed every 5 min after holding the sample on ice for 2 min. Finally, the percentage of relative activity was obtained. In addition, to investigate the long-term stability of the MOF-immobilized enzyme, both Uox and Uox@NimMOF samples were stored at 4 °C for approximately 2 months. Small samples were taken at different time points post-incubation, and their enzymatic performance was assessed according to the method described for measuring Uox activity.

Analysis of Kinetics and Thermodynamic Parameters
For the measurement of the kinetic parameters of the Uox and Uox@NimMOF samples, their enzymatic activities were measured at 25 °C using various concentrations of uric acid (ranging from 8 to 600 µM) in boric acid buffer (20 mM, pH 8.5).
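To make the inactivation analysis concrete, the sketch below fits a first-order decay to hypothetical residual-activity data and derives kinact from the slope of ln(activity) versus time, with T1/2 = ln 2/kinact; the data points are invented for illustration and do not reproduce the measurements reported here.

```python
import numpy as np

# Hypothetical residual-activity time courses at 45 C (fraction of initial activity).
time_min = np.array([0, 5, 10, 15, 20, 25], dtype=float)
residual_free = np.array([1.00, 0.55, 0.30, 0.17, 0.09, 0.05])
residual_immob = np.array([1.00, 0.88, 0.78, 0.69, 0.61, 0.54])

def first_order_inactivation(t, residual):
    """Fit ln(residual activity) vs time; the negative slope is k_inact (min^-1)."""
    slope, _ = np.polyfit(t, np.log(residual), 1)
    k_inact = -slope
    half_life = np.log(2) / k_inact   # T1/2 = ln2 / k_inact
    return k_inact, half_life

for label, data in [("free Uox", residual_free), ("Uox@NimMOF", residual_immob)]:
    k, t_half = first_order_inactivation(time_min, data)
    print(f"{label:11s}: k_inact = {k:.3f} min^-1, T1/2 = {t_half:.1f} min")
```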
Km values for uric acid were estimated based on the Lineweaver-Burk plot after two repetitions of the tests. The catalytic rate constant (kcat) depends on the Vmax value and was obtained by dividing Vmax by the total enzyme concentration. The activation energy (Ea) is related to the kinact value and was determined by drawing the Arrhenius plot and using Equation (1):

ln kinact = ln C - Ea/(RT)    (1)

In this equation, kinact represents the enzyme inactivation rate constant (min−1), R is the universal gas constant (8.31 J·mol−1·K−1), T is the absolute temperature (K), and C is the Arrhenius constant. Moreover, the enthalpy (ΔH) and entropy (ΔS) of the transition state were obtained from the inactivation experiments using Equations (2) and (3).

The Optimum pH and pH Stability of Uox Preparations
The optimum pH for Uox@NimMOF and Uox was examined using a mixed buffer (100 mM KH2PO4, 50 mM sodium acetate, 50 mM Tris-HCl, and 50 mM glycine) over the pH range 4 to 12. Furthermore, the pH stability of both forms of the enzyme (immobilized and naked) was measured by adding 40 µL of the enzyme solutions to 50 µL of the mixed buffer (pH 6 to 12). The samples were kept at room temperature for 10 min before the activity of the enzymes was evaluated.

Intrinsic and Extrinsic Fluorescence Measurements
Intrinsic fluorescence spectra of Uox@NimMOF and Uox were scanned and compared using a Cary Eclipse fluorescence spectrophotometer. A solution containing 200 µg/mL of urate oxidase was prepared as the control and excited at a wavelength of 295 nm. Then, 500 µg of NimMOF containing 260 µg/mL urate oxidase was used to measure the intrinsic fluorescence spectra of the immobilized enzyme. To assess the structural conformation of the enzyme, fluorescence spectra of its immobilized and free forms were collected by excitation at 295 nm while the samples were first heated from 25 to 85 °C and then cooled to room temperature. The extrinsic fluorescence study was performed in the presence of ANS at a final concentration of 30 µM at 25 °C. Then, 1 µM protein solutions (equivalent to 600 µg Uox@NimMOF) were added to the ANS solution, and the fluorescence spectra were collected at various time intervals (0, 15, 30, and 60 min) using an excitation wavelength of 350 nm.

Preparation of Recombinant Uox
Enzymes from various sources and with varying grades have been used for immobilization purposes. For example, when immobilized enzymes are destined to be used for the removal of pollutants in large environmental samples, crude enzymes are preferred for economic reasons [57-59]. Immobilized enzymes used for analytical and medical purposes generally need to be of higher purity [6,58,60]. In the current work, a purified recombinant Uox was used for the immobilization experiments. Accordingly, the A. flavus Uox gene was synthesized and placed into the expression vector pET-28a. E. coli, the most common expression system, was used for the recombinant production of the Uox enzyme, and the A. flavus Uox could be produced in an active and soluble form in the cytoplasmic space of E. coli [61-63]. The recombinant Uox was expressed successfully in E. coli (Figure 1). Then, the soluble proteins were purified using Ni-NTA chromatography. The purity of the prepared Uox was ~95% as judged by SDS-PAGE analysis (Figure 1). The purified Uox shows a molecular weight of around 34 kDa, which is consistent with the expected mass of the subunits of A. flavus Uox [7,64].
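A minimal example of the Lineweaver-Burk estimation of Km and Vmax is sketched below; the rate data and the enzyme concentration used for kcat are hypothetical, and the double-reciprocal fit is only one of several ways the kinetic constants could be extracted.

```python
import numpy as np

# Hypothetical initial rates v (uM min^-1) at uric acid concentrations S (uM),
# spanning the 8-600 uM range used in the assays.
S = np.array([8, 16, 31, 62, 125, 250, 500, 600], dtype=float)
v = np.array([1.2, 2.1, 3.4, 4.9, 6.2, 7.1, 7.6, 7.7])

# Lineweaver-Burk: 1/v = (Km/Vmax)*(1/S) + 1/Vmax
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax = 1.0 / intercept            # uM min^-1
Km = slope * Vmax                 # uM

# Turnover number: kcat = Vmax / [E]total (assumed active-site concentration).
E_total_uM = 0.08                 # hypothetical enzyme concentration in the assay
kcat_per_min = Vmax / E_total_uM

print(f"Km   ~ {Km:.1f} uM")
print(f"Vmax ~ {Vmax:.2f} uM/min")
print(f"kcat ~ {kcat_per_min:.1f} min^-1 (kcat/Km = {kcat_per_min / Km:.2f} uM^-1 min^-1)")
```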
Characterization of the Magnetic MOF Nanoparticles (Ni-MOF)
Among the different types of MOFs, the magnetic MOF (mMOF) nanomaterials offer some unique advantages to the immobilized enzymes [65-67]. These include the magnetic capture and distribution of the enzymatic preparations as well as improved stabilization [68]. The Ni-based core-shell mMOF nanomaterials used in this study were synthesized in a layer-by-layer approach [31]. The structural properties of these nanomaterials were investigated, and the results of these tests are provided in the following sections.

FTIR Spectra of the Ni-MOF Particles
The molecular structures of the Fe3O4 nanoparticles as well as the NimMOF nanomaterials were first investigated by FTIR spectroscopy [31]. Figure 2a shows the FTIR spectrum obtained from the Fe3O4 nanoparticles. A sharp peak could be seen at the 693 cm−1 region in this spectrum; this can be ascribed to the M-O tetrahedral site in the spinel structure, which is more exposed in this sample.
This could potentially be one of the explanations for the fundamental character of the Fe3O4 particles. The broad absorption band centered at 3450 cm−1 is attributable to O-H stretching vibrations, and the weak band near 1638 cm−1 is assigned to the H-O-H bending vibration mode due to the adsorption of water from the air, as the FTIR sample disks were prepared in the open air. Figure 2b represents a typical FTIR spectrum of the MOF nanocomposites used for the immobilization of Uox in this study. When porous MOF structures were grown on the surface of the Fe3O4 nanoparticles, two main peaks at 1558 and 1371 cm−1 could be observed. These peaks correspond to the asymmetrical and symmetrical COO vibrations, respectively, confirming that the Ni ions sufficiently coordinate the abundant carboxylate groups. Furthermore, a peak is evident at 1625 cm−1, which can be attributed to the C=O stretching vibration of the carboxylic groups of 1,2,4,5-benzenetetracarboxylic acid. The small peak observed at 813 cm−1 could be ascribed to the vibrations of Ni-O.

XRD Patterns
The XRD patterns of the magnetic Fe3O4 NPs and NimMOF nanomaterials show spinel architectures and display seven distinct peaks in the 31.1°-75.8° region, which correspond to different planes of the crystal lattice (Figure 3). Furthermore, the XRD results of NimMOF show that the characteristic peaks do not shift after coating of the Fe3O4 NPs with the MOF (X'pert HighScore, Ref. code 00-003-0875). The Debye-Scherrer equation was used to compute the crystallite sizes of Fe3O4 and NimMOF, which were determined to be about 16.6 and 56.3 nm, respectively. Furthermore, there were no interfering peaks in the NimMOF pattern. When porous structures were coated on the surface of the magnetic Fe3O4 NPs, the XRD pattern demonstrates that the crystallite size remains in the nanometric range and that the magnetic core composition is not affected.
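For readers who wish to reproduce the crystallite-size estimate, the snippet below applies the Debye-Scherrer relation D = K*lambda/(beta*cos(theta)) with the Cu Kα wavelength; the peak position and FWHM values are assumed purely for illustration and merely give sizes of the same order as the reported 16.6 and 56.3 nm.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Debye-Scherrer crystallite size D = K*lambda / (beta * cos(theta)),
    with beta (FWHM) in radians and theta = two_theta/2."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return k * wavelength_nm / (beta * np.cos(theta))

# Hypothetical peak parameters (2-theta position and FWHM) for the strongest
# reflection of each material; chosen only to illustrate the calculation.
print(f"Fe3O4  : D ~ {scherrer_size(35.5, 0.52):.1f} nm")
print(f"NimMOF : D ~ {scherrer_size(35.5, 0.15):.1f} nm")
```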
VSM and SEM Analysis of NimMOF
MOF materials are considered to have high dispersion and low density, and, therefore, the separation and handling of enzymes that are immobilized on MOF (i.e., enzyme@MOF) materials can be challenging. An innovative solution to this problem has been to use MOF materials that have inherent magnetic properties (i.e., magnetic MOFs; NimMOFs). In particular, the engineered magnetic MOFs offer some valuable features, such as a large surface area, easy loading, and rapid collection [65]. This approach renders the NimMOFs particularly attractive for enzyme immobilization. These were the main factors involved in our preference for a magnetic MOF over a plain MOF for the immobilization of Uox. After fabricating the Ni-based mMOF nanomaterials, their magnetic characteristics were investigated at 25 °C using a vibrating sample magnetometer (VSM) (Figure 4) [69]. The saturation magnetization (Ms) of the NimMOF was about 50.02 emu/g. At 25 °C, the magnetic hysteresis loops revealed that the NimMOF has no coercivity (Hc) or remanence (Br), thus revealing its superparamagnetic features; this superparamagnetism originates from the primary nanoparticles.

Analysis of the Performances of Uox and Uox@NimMOF
As stated earlier, the structural and physical properties of MOFs, which include an adjustable pore size, a large surface area, and high thermal stability, have made MOFs an attractive option for the immobilization of enzymes [28,36], and they have been used for the immobilization of a diverse set of enzymes. Enzyme immobilization on MOF materials is accomplished through different processes. In the surface adsorption process, which was used in this study for immobilizing Uox, weak interactions such as van der Waals forces and hydrogen bonds are involved. This method is the most common and effective way of immobilizing enzymes because it usually does not alter the structure of the enzyme and its active site [30]. The immobilization of Uox on the synthesized Ni-based mMOF nanomaterial is expected to occur by two mechanisms. First, the immobilization is likely to occur through the physical adsorption of Uox molecules in the NimMOF's pore structures (i.e., on the surface of the pore structures); such entrapment of enzymes is considered to be the main factor involved in the stabilization of the enzymatic performance [70]. Second, the Ni atoms present (as a part of the mMOF) could coordinate the poly-histidine tag at the N-terminus of the Uox molecules. The synthesized NPs have a porous structure with Ni atoms on the surface and inside the pores, and good coordination between Ni atoms and enzymes has been reported [43]. Taken together, both of these proposed processes are capable of capturing the Uox enzyme and therefore improving its physical stability. Nonetheless, after immobilization of the Uox on the Ni-based mMOF nanomaterials, the behavior of the enzyme had to be examined thoroughly to ensure a positive outcome.
Therefore, the performances of both enzyme preparations (i.e., free Uox and Uox@NimMOF) under different conditions were tested and compared.

The Optimum Temperature for Uox and Uox@NimMOF Activity
The activity of both the Uox and Uox@NimMOF was measured at various temperatures; Figure 6 shows that the maximum activity for both Uox and Uox@NimMOF is at approximately 30 °C. This result concurs with previous studies, which have shown that the optimum temperature for the A. flavus Uox enzyme is in the range of 28-32 °C [61,71,72].

Thermal Inactivation and Thermal Stability Studies
Native A. flavus Uox is a tetrameric enzyme with relatively poor thermal stability, which has been attributed to the weak interaction between its subunits [10]. It has been suggested that at elevated temperatures (i.e., around and above 40 °C), the tetrameric Uox enzyme tends to dissociate into inactive monomeric subunits [12]. For example, it was shown that at 35 °C, the Uox could lose its activity to below 20% of its original level, and that with any further increase in temperature, the Uox activity may decline further [73]. In addition, this inactivation has also been shown to happen in a concentration-dependent manner. For instance, Conley et al. showed that at 45 °C, the enzyme solution with the lower concentration was inactivated more readily than the higher-concentration ones [11]. While investigating the optimal temperatures for the function of both the free Uox and Uox@NimMOF preparations, as expected, a sharp and constant decrease in the activity of free Uox was observed as the temperature rose above ~30 °C, which was strictly consistent with previously documented observations [10,72]. However, it was noticed that the immobilization of the Uox enzyme on the NimMOF nanomaterials (Uox@NimMOF) dramatically improved the thermal stability of the Uox (Figure 7). As can be seen, while the free Uox enzyme lost almost all of its activity at ~40-45 °C, the immobilized Uox (Uox@NimMOF) retained around 60% of its original activity at 45 °C. Even at 70 °C, the highest temperature tested in our experiments, Uox@NimMOF still retained significant activity. The entrapment of Uox molecules in the NimMOF pore structures (through surface adsorption and/or by coordination of the terminal His-tag of Uox by the NimMOF's Ni atoms) is considered the main factor involved in this improved physical stability. In addition to assessing the details of Uox and Uox@NimMOF thermal inactivation, further tests were performed. Here, both samples (free Uox and Uox@NimMOF) were incubated at different temperatures (from 35 to 55 °C with 5 °C intervals) for 5 min. Then, the remaining activity of these heat-treated samples was measured (Figure 7).
As can be seen, the residual activity of the Uox@NimMOF sample was around 60, 45, and 30% when incubated at 45, 50, and 55 °C, respectively. However, for the free Uox sample, the remaining activity was around 15, 10, and 4% at 45, 50, and 55 °C, respectively. This inactivation could be due to dissociation of the subunits of the Uox enzyme or to denaturation of the enzyme and, ultimately, the loss of the active sites. To better understand the effect of NimMOF immobilization on the thermal stability of the Uox, the inactivation rate constant (kinact) for both the Uox and Uox@NimMOF was calculated. As presented in Table 1, the kinact of the immobilized enzyme decreased, which is indicative of the effect of NimMOF in reducing the inactivation of Uox at elevated temperatures. Furthermore, half-life (T1/2) calculations show that the immobilization with NimMOF increased the enzyme's half-life significantly (Table 1). Taken together, it is apparent that the NimMOF has a protective effect on the function of the Uox enzyme at higher temperatures. For investigating the thermal stability of the enzyme, the activity of the samples treated at 45 °C was recorded every 5 min. The results indicate that the connections between the NimMOF nanoparticles and the enzyme probably play an important role in the thermal stability of the enzyme; accordingly, after 70 min of temperature treatment, Uox@NimMOF still showed 20% activity (Figure 8).

Analysis of Enzyme Kinetic and Thermodynamic Parameters
To further compare the free Uox and Uox@NimMOF, the kinetic parameters for these samples were calculated. These parameters were obtained over a range of substrate concentrations (8-600 µM uric acid) and are summarized in Table 2.
The value of Km measured for the Uox enzyme is 50 µM, which is comparable to the data obtained in previously reported studies [13,74,75]. The Km value for Uox@NimMOF did not change considerably in comparison to that of free Uox, indicating that the immobilization did not significantly change the affinity of the enzyme for its substrate. Furthermore, comparisons revealed a slight decrease in the turnover number (kcat) of the immobilized enzyme (Table 2). The kcat of Uox@NimMOF is approximately 85% of that of the free Uox, which could be due to the immobilization of the enzyme on the NimMOF nanomaterials. The activation energy (Ea), which is associated with kinact, was calculated for the Uox enzyme and Uox@NimMOF (Table 3) to be 99.80 and 58.81 kJ·mol−1, respectively. The value of Ea decreased to about one-half of that of the free Uox enzyme, which indicates that, upon immobilization on NimMOF, a lower amount of energy is involved in the thermal inactivation process at high temperature. The ΔH value is a measure of the non-covalent interactions (van der Waals forces, hydrogen bonds, and electrostatic interactions) that are broken in the formation of the transition-state complex during enzyme inactivation. The value of ΔH for the naked Uox is 97 kJ·mol−1, but after immobilization it fell to 56 kJ·mol−1. In addition, ΔG is also defined in terms of enthalpy and entropy. The ΔG value for the Uox@NimMOF enzyme decreased by about 5 kJ·mol−1 relative to the naked Uox enzyme, according to the results presented in Table 3. The ΔS parameter of the enzyme also decreased after immobilization. Therefore, it can be concluded from the results that the nanoparticle helps maintain the enzyme's structure.

Table 3. Thermodynamic parameters of Uox and Uox@NimMOF.

Optimum pH and pH Stability for Uox Preparations
The optimum pH for the activity of both free Uox and Uox@NimMOF was measured and compared (Figure 9a). According to the results, optimum pH ranges of 8.5-9.5 and 9-10 were obtained for free Uox and Uox@NimMOF, respectively, which lie well within the optimal pH ranges measured previously for the A. flavus Uox [61,71,72]. It is also worth mentioning that the activity of Uox@NimMOF in the pH range from 10 to 12 is approximately 20% higher than that of the free Uox. This suggests that Uox@NimMOF performs better than free Uox over a wider range of pH. In addition, Figure 9b shows the results obtained for the pH stability of the Uox samples.
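The sketch below illustrates how Ea and the transition-state quantities could be computed from a set of inactivation rate constants, using the Arrhenius fit of Equation (1) together with the relations commonly applied to thermal-inactivation data, ΔH = Ea - RT, ΔG = -RT·ln(kinact·h/(kB·T)) and ΔS = (ΔH - ΔG)/T; the rate constants are hypothetical, and these standard relations are assumed here since the exact Equations (2) and (3) of the protocol are not reproduced in this text.

```python
import numpy as np

R = 8.314            # gas constant, J mol^-1 K^-1
KB = 1.380649e-23    # Boltzmann constant, J K^-1
H_PLANCK = 6.62607015e-34  # Planck constant, J s

# Hypothetical inactivation rate constants (min^-1) at the tested temperatures (35-55 C).
T_K = np.array([308.15, 313.15, 318.15, 323.15, 328.15])
k_inact = np.array([0.010, 0.021, 0.045, 0.090, 0.180])   # illustrative values only

# Arrhenius: ln k = ln C - Ea/(R*T); Ea comes from the slope of ln k vs 1/T.
slope, _ = np.polyfit(1.0 / T_K, np.log(k_inact), 1)
Ea = -slope * R                                            # J mol^-1

# Transition-state quantities at a reference temperature (45 C).
T_ref = 318.15
k_ref_s = 0.045 / 60.0                                     # min^-1 converted to s^-1
dH = Ea - R * T_ref                                        # enthalpy of activation
dG = -R * T_ref * np.log(k_ref_s * H_PLANCK / (KB * T_ref))  # free energy of activation
dS = (dH - dG) / T_ref                                     # entropy of activation

print(f"Ea ~ {Ea / 1000:.1f} kJ/mol")
print(f"dH ~ {dH / 1000:.1f} kJ/mol, dG ~ {dG / 1000:.1f} kJ/mol, dS ~ {dS:.1f} J/(mol K)")
```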
It can be seen that the immobilization of Uox on the NimMOF material did not dramatically change the overall stability of this enzyme compared to the free enzyme over a wide range of pH; both Uox preparations were most stable at pH values ranging from 6 to 10. However, it should be mentioned that the performance of the immobilized Uox@NimMOF seems slightly superior to that of the free Uox at pH 6 to 8.

Storage Stability Studies
It has been shown that MOF immobilization can lead to enhanced long-term stability of the encapsulated enzymes [70]. Thus, to assess the possible effects of immobilization on the long-term stability of Uox, both enzyme solutions (i.e., free Uox and Uox@NimMOF) were stored at 4 °C for an extended period. Then, the activity of both samples was compared at different time points post-incubation at 4 °C, and the results are shown in Figure 10. It can be seen that immobilization of Uox on the NimMOF caused a significant increase in the stability of this enzyme when stored in solution at 4 °C for a prolonged period. After 50 days, while the activity of free Uox reached less than 20% of that of the original Uox preparation, the Uox@NimMOF sample still showed ~75% of its original activity. This improved long-term stability of the Uox enzyme could be attributed to its encapsulation in the Ni-based mMOF material.

Fluorescence Measurements
Intrinsic fluorescence spectra of Uox and Uox@NimMOF were collected and compared at different temperatures. Considering that some of the protein is released from the surface of the particles as the temperature increases, the released protein was collected by centrifugation, and the structural changes of the immobilized urate oxidase were analyzed. As shown in Figures 11 and 12, the intensity of the emitted light was taken as 100% for both states (Uox and Uox@NimMOF) at 25 °C. As the temperature rises, the fluorescence intensity decreases with a different slope for each preparation.
Increased temperature causes structural changes and denaturation of proteins, but the stabilized enzyme seems to maintain its structure better, so that its fluorescence intensity decreases with a shallower slope. The renaturation of the protein structure was assessed after cooling the solution from 85 to 25 °C (Figure 13). On reducing the temperature, the percentage of fluorescence intensity increased, indicating that the protein structure was refolded again. Urate oxidase has hydrophobic patches on its surface [76,77], and it appears that its immobilization on the NimMOF nanoparticle can cover these surface regions.
For the confirmation of this idea, extrinsic fluorescence spectra of urate oxidase were investigated in both states (Uox and Uox@NimMOF) using 8-anilino-1-naphthalenesulfonic acid (ANS). As shown in Figure 14a, the fluorescence intensity increased after binding of ANS to hydrophobic regions, indicating that the NimMOF could cover the hydrophobic areas that exist on the surface of urate oxidase. Figure 14b depicts changes in the fluorescence intensity of the enzyme in the presence of different NimMOF concentrations. It is also noteworthy that the length of the incubation time does not affect the output spectrum.

Conclusions
A significant amount of work has been conducted on improving the stability of this enzyme, urate oxidase (Uox), including the use of additives and the immobilization of the enzyme in alginate microcapsules that can degrade the increased uric acid in patients with renal insufficiency. Moreover, the binding of urate oxidase to albumin has conferred thermal stability and resistance to proteolysis and increased the half-life of the enzyme to six times that of the wild type. Enzyme immobilization on the outer membrane of erythrocytes as a carrier can decompose uric acid with high efficiency. Conjugating urate oxidase with polyethylene glycol compounds or dextrin, as well as the use of nanoliposomal coating to prevent damage to the enzyme's structure, has improved the stability and activity of the enzyme. Furthermore, nanogels, methanol, dimethyl sulfoxide (DMSO; for strengthening the hydrogen bonds), glycerol, and NaCl have been deployed for the stabilization of this enzyme. In this study, we synthesized and used magnetic MOF (NimMOF) nanoparticles for the immobilization of the A. flavus Uox. Subsequent detailed investigations and comparisons of the properties of both the free Uox and the immobilized Uox (Uox@NimMOF) showed that a significant improvement in the thermal stability and long-term storage of Uox could be realized upon immobilization on the NimMOF nanocomposite. Moreover, immobilization did not adversely affect the enzyme kinetic properties of Uox, as the immobilized enzyme remained active over a wide range of pH and temperatures.
Furthermore, the ability to maintain the Uox enzymatic function is an important feature that facilitates its application for commercial uses. This promising approach for immobilization of the Uox could potentially be used for the development of novel reagents for both therapeutic and laboratory purposes, and especially for the construction of a uric acid assay kit with a longer shelf-life.
Assessment of Aglycone Isoflavone Profiling of Staple Indian Grain Flours and Soybean Sprout-Flour

In the present study, fourteen traditional and most commonly used Indian staple grain flour types (viz. wheat, white rice, processed little millet, maize, all-purpose/refined wheat flour, chickpea flour, toasted gram flour, little millet, quinoa, soybean, white millet, pearl millet, semolina/cream of wheat and finger millet) were investigated for the assessment of the 3 major bioactive aglycone forms of isoflavone (IF): daidzein (DI), glycitein (GY) and genistein (GN), with special interest in the effect of sprouting on total and individual IF components. The obtained results showed that the content and composition of total IF were negligible among all the investigated flours except for soybean, wherein detectable total (227 mg kg−1) and individual IF (45, 129 and 53 mg kg−1 for DI, GY and GN, respectively) components were observed. From soybean mature seeds to sprouts, with a ~80% germination rate at pilot-scale, a 31% increase in total IF (298 mg kg−1) was observed, characterised by individual increments of 30% (58 mg kg−1), 25% (161 mg kg−1) and 48% (78 mg kg−1) in the corresponding DI, GY and GN components. The current results demonstrate that, for the Indian scenario, the contribution of the aforementioned grains other than soybean to the daily dietary intake of IF is negligible, and that sprouting represents an effective way to enhance the endogenous IF content.

Introduction
Isoflavones (IF), non-steroidal estrogen mimics termed 'phytoestrogens', constituting a sub-class of natural bioflavonoids, are a large group of plant secondary metabolites (Cos et al., 2003; Hwang et al., 2006). Over the past few decades, IF have attracted considerable worldwide attention mainly due to their enormous health benefits (Crozier et al., 2009; Garcia-Lafuente et al., 2009; Terashima et al., 2012). IF are known to possess antioxidant, anti-microbial, anti-allergic, anti-inflammatory, anti-viral, anti-carcinogenic, anti-neoplastic, anthelmintic, anti-thrombotic and anti-hormonal properties (Mira et al., 2002). A diverse array of epidemiological studies and associated meta-analyses strongly supports the notion that persistent consumption of IF-enriched products offers protection against cancers (prostate and breast), diabetes, obesity, osteoporosis, hypercholesterolemia, and cardiovascular and neuro-degenerative disorders, as well as relief from menopausal symptoms (Kalaiselvan et al., 2010).
In plants, the major and parental IF are daidzein (DI), glycitein (GY) and genistein (GN), present both in free (aglycone) and conjugated forms [glycosides: glucosides (β-glycosides), acetyl-glucoside esters (6'-O-acetylglycosides) and malonyl-glucoside esters (6'-O-malonylglycosides)], with the latter being the predominant forms (Friedman and Brandon, 2001; Ho et al., 2002). After consumption, the conjugated glycosides are hydrolysed in the human gut to their respective parental aglycones, which are further metabolised and excreted (Kulling et al., 2002). Thereby, the aforementioned health benefits of IF relate exclusively to their bioactive aglycone components, DI, GY and GN. However, in most clinical trials, little attention has been paid to exact aglycone profiling, and the declared food content is commonly expressed in terms of total IF without any specification of form (free/conjugated) or proportion. Hence, it becomes mandatory to determine the bioactive aglycones either by acid hydrolysis (Müllner and Sontag, 2000) or by enzymatic hydrolysis (Franke et al., 1994), in order to know and estimate the precise levels of bioactive IF exposure to consumers.

There are published data on IF levels of vegetables, fruits, nuts, cereals, oilseeds, legumes, berries and beverages such as tea, coffee and wine (Mazur, 1998; Mazur et al., 1998; Liggins et al., 2000; Liggins et al., 2002). Recently, the USDA also released a database of IF content for 560 food items (Bhagwat et al., 2008). Paradoxically, most of these studies have analysed the IF composition of foods and food-products available in the USA and Western countries. However, there has been no systematic report of this kind in the Indian context, wherein a diverse array of dietary grain food-products is consumed on a daily basis. Thereby, the following research was conducted to investigate the composition of the bioactive aglycones DI, GY and GN in staple Indian grain flours, with special interest in identifying the effect of sprouting on total and individual IF aglycone components. A comparative insight into the aglycone IF forms of 14 commonly used Indian grain flours, along with a positive effect of sprouting on IF contents, is reported herein.

Seed materials and flour preparation
A diverse array of staple Indian grain flours was used in the study (Table 1). For this, commercially available grain seeds/flours [viz. wheat (Triticum aestivum), white rice (Oryza sativa), processed little millet, maize (Zea mays), all-purpose/refined wheat flour, chickpea flour (Cicer arietinum), toasted gram flour, little millet (Panicum sumatrense), quinoa (Chenopodium quinoa), soybean (Glycine max), white millet (Sorghum vulgare), pearl millet (Pennisetum glaucum), semolina/cream of wheat and finger millet (Eleusine coracana)] were purchased from local retail stores of Bengaluru, India. A comprehensive overview of the aforementioned flours was described earlier (Duyff, 2012). For making seed-flour, the dried seeds (moisture content < 7%) were cleaned and sorted thoroughly to make them free from dust, dirt, stubble and foreign matter. Damaged, immature/broken seeds with cracked hulls and small-sized seeds were discarded mechanically so as to obtain clean seeds of uniform size. The cleaned seeds were milled to a fine powder using an analytical grinder mill (CT 193 Cyclotec, FOSS India Pvt. Ltd., Mumbai) and passed through a 0.6 mm sieve to obtain flour of 500 µm particle size. The obtained flours were stored as fine powders in tightly closed containers at 4 °C till further use.

Soybean sprouting conditions and sprout-flour preparation
Soybean seeds of var. 'JS9560' (a widely used commercial variety in Central India), procured from the ICAR-Indian Institute of Soybean Research (IISR), Indore, Madhya Pradesh, were used in the study. Cleaned and mechanically sorted seeds were surface sterilized with 0.5% (w/v) sodium hypochlorite (NaClO) for 5 min (to avoid fungal invasion) and rinsed thoroughly with running distilled water to remove any traces of NaClO. About 5 kg of cleaned and surface-sterilized seeds per batch were soaked in 25 l of potable water for 4 h with constant shaking at 10 rpm in a customised motor seed dressing drum (GMW, Ambala, India), followed by draining and rinsing with distilled water. The soaked seeds were subsequently distributed evenly on filter paper in a single layer in sterile germination trays. For each sample, 5 germination trays harbouring 1 kg seeds/tray were evaluated in three biological replicates. Each germination tray was wrapped with muslin cloth (to allow entry of oxygen for the germinating seeds while minimizing contamination during the test period) and placed in a customised seed germinator [ACM-78093-S, Acmas Technologies Pvt. Ltd., India] at 30 °C with 100% relative humidity for 3 days in the absence of light (Agrahar-Murugkar and Jha, 2009). Germination trays were watered daily as required with distilled water during the course of germination. Physiological germination, in terms of visible radicle protrusion of at least 2 mm (ISTA, 1999), was assessed each day over a test period of 3 days, after which no further radicle emergence was noted. Sprouts obtained after the 3rd day of germination were dried in a hot air oven incubator (Inlab Equipment; 230 V, 5.4 A) at 50-55 °C to a final moisture content of 6-8%, a level recommended for the production of soy-flour (Gandhi, 2008). Dried sprouts were milled to a fine powder using an analytical grinder mill and passed through a 0.6 mm sieve to obtain flour of 500 µm particle size. The obtained sprout-flour was stored as a fine powder in tightly closed containers at 4 °C till further use.

IF extraction and estimation
An HPLC-based method for the extraction and quantitative estimation of aglycone IF was adopted (Kumar et al., 2009), with certain modifications. This method relies mainly on acid hydrolysis of the 12 endogenous IF isomers to their respective aglycone forms, DI, GY and GN. Sample preparation: approximately 250 mg of sample was extracted with 5 cm3 of 80% ethanol, followed by acid hydrolysis with 1 cm3 of conc. HCl for 2 h in a boiling water bath. HPLC conditions and instrumentation: an HPLC system equipped with an auto-sampler, a gradient programmer, a solvent pump and a diode array detector (Agilent 1200) was used. The supernatant obtained after centrifugation (10,000 rpm for 10 min) was passed through a syringe filter (0.45 µm, PVDF, 33 mm). Aliquots (20 mm3) of the syringe-filtered samples were injected into the HPLC system housing a C-18 silica column (Inertsil ODS 3V; 5 µm, 250 × 4.6 mm). The column oven was maintained at 40 °C. The separation and elution of IF were accomplished by employing a binary gradient mode with solvent A (10% ACN) and solvent B (38% ACN) at a flow rate of 0.8 cm3 min−1 for 25 min. The solvent system was run as follows (% solvent A/B): 0 min (0/100), 5 min (10/90), 20 min (0/100) and 25 min (0/100). The resolution of DI, and of GY and GN, was detected at 250 and 260 nm, respectively. Authentic, commercially available aglycone IF standards, DI (in methanol), GY (in dimethylformamide) and GN (in methanol), were dissolved at 1 mg cm−3 (1,000 mg dm−3) and subjected to HPLC in a concentration range of 0-10 mg dm−3. The resulting peak area was plotted against standard concentration to obtain the linear calibration curve. The retention time of each individual standard was used to identify the corresponding peaks in the HPLC chromatograms of each sample. The relative concentration of individual IF was calculated after superimposing the sample chromatogram on the corresponding standard curve. Concentrations of DI, GY and GN were summed to compute total IF. Individual and total IF concentrations were expressed as mg kg−1 on a dry weight (dw) basis.

Data analysis
The results were expressed as means ± SDs. One-way analysis of variance (ANOVA) was used to analyze the level of statistical significance (P ≤ 0.05) between groups.

Evaluation of aglycones IF
Chromatographic methods allow the quantitation of the individual IF aglycone forms DI, GY and GN (Fig. 1) in a complex mixture. Thereby, the selected grain flours used in this study were subjected to acid hydrolysis, followed by HPLC for the separation and quantification of total and individual IF aglycone components. Representative HPLC chromatograms, retention times (RT, in min) and linear equations obtained from the calibration curve of each commercially available authentic IF standard are shown in Fig. 2A. The present HPLC procedure clearly resolves three standard peaks (between 8 and 15 min as minimum and maximum RT) corresponding to DI, GY and GN as individual IF components. In each case, a linear relationship between IF concentration (0-10 mg dm−3) and the observed peak area was obtained, with a determination coefficient >99% (Fig. 2B). The IF profiles depicting the individual aglycone components for each flour are shown in Fig. 3.
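As an illustration of the external-standard quantification described above, the sketch below builds a linear calibration from hypothetical peak areas and converts a sample peak area into mg kg−1 of flour on a dry-weight basis; the peak areas, the assumed extract volume and the absence of any dilution step are simplifying assumptions made only for this example.

```python
import numpy as np

# Calibration: peak areas (arbitrary units) of one aglycone standard at 0-10 mg/dm^3
# (hypothetical areas; a separate curve would be prepared for each standard).
conc_mg_dm3 = np.array([0, 1, 2, 4, 6, 8, 10], dtype=float)
peak_area = np.array([0, 152, 305, 598, 912, 1208, 1515], dtype=float)

slope, intercept = np.polyfit(conc_mg_dm3, peak_area, 1)
r2 = np.corrcoef(conc_mg_dm3, peak_area)[0, 1] ** 2   # should exceed 0.99

def quantify(sample_area, sample_mass_g=0.250, extract_volume_cm3=6.0):
    """Convert a sample peak area to mg of aglycone per kg of flour (dry weight),
    assuming a single extract of the given volume and no further dilution."""
    conc = (sample_area - intercept) / slope            # mg/dm^3 in the injected extract
    mg_in_extract = conc * extract_volume_cm3 / 1000.0  # 1 dm^3 = 1000 cm^3
    return mg_in_extract / (sample_mass_g / 1000.0)     # mg per kg of flour

print(f"calibration: area = {slope:.1f}*c + {intercept:.1f}, R^2 = {r2:.4f}")
print(f"example peak area 1200 -> {quantify(1200):.0f} mg/kg dw")
```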
Each IF component was identified by comparing its relative RT and DAD spectra with those of the corresponding aglycone standard. The sample peak areas corresponding to each identified aglycone IF were quantified using an external standard procedure. The reliability of the method was also monitored by a recovery test, wherein an appropriate concentration of the aglycone recovery standard was added to the sample prior to acid hydrolysis (data not shown). Table 2 summarizes the amounts of total and individual aglycone IF components for each flour sample. Notably, the total IF content was determined as the sum of the individual aglycone IF obtained from each flour. The results demonstrated that, except for soybean, no detectable IF was observed in any of the grain flours examined. In soybean seed-flour, the total IF observed was 227 mg kg⁻¹, with the aglycones DI, GY and GN accounting for 45, 129 and 53 mg kg⁻¹, respectively. In this regard, the daily Indian intake of IF from the tested grains other than soybean can be considered negligible when evaluating phytoestrogen intake from these flours. Notably, the observed soybean IF aglycone values and pattern are in good agreement with previous studies, wherein soybean has been reported to be a rich source of IF with a relatively high percentage of DI and GN (Vacek et al., 2008). The prevalence of soybean and soy-products in the Asian diet, with a mean IF consumption of 11-47 mg day⁻¹ (Arai et al., 2000; Ho et al., 2000; Yamamoto et al., 2001), in contrast to 1-2 mg day⁻¹ in Western countries (Strom et al., 1999; de Kleijn et al., 2001), may explain the low incidence of menopausal symptoms (hot flushes) (Nagata et al., 2001) and clinical prostate cancer (Duncan et al., 2003) in Asian countries.

Effect of sprouting on IF levels
Being the richest source of IF, soybean represents an ideal model system for the improvement of endogenous IF in other unexplored grain flours. Germination represents a cost-effective way of enhancing the endogenous IF levels of legumes (Chiarello et al., 2006; Gao et al., 2015). Nevertheless, aside from circumstantial evidence reported only under laboratory conditions, practical exploitation of the germination process is very scarce. From a commercial perspective, the effect of soybean sprouting on IF content was therefore evaluated here at a pilot scale. Soybean seeds of the commercial variety 'JS9560' were sprouted under controlled environmental conditions at a pilot scale, with a germination rate of ~80% (Fig. 4A). High-quality dried flours were made from the sprouts after 3 days of germination (Fig. 4B inset), as per the recommendation for the production of soy-flour (Gandhi, 2008). HPLC chromatograms depicting the separation of individual IF components from soybean mature seed- and sprout-flours are shown in Fig. 4B. The IF content of the soybean sprout-flours increased significantly compared with the corresponding seed-flour counterpart during the assay period. Table 3 depicts the variation of total and individual IF components in soybean mature seed- and sprout-flours.
The present findings showed that soybean sprouting at pilot scale resulted in a 31% increase in total IF (from 227 to 298 mg kg⁻¹). Within the IF fraction, respective increases of 30% (from 45 to 58 mg kg⁻¹), 25% (from 129 to 161 mg kg⁻¹) and 48% (from 53 to 78 mg kg⁻¹) were observed in DI, GY and GN content. The variable trends of DI, GY and GN observed during sprouting could be due to their specific roles during the different growth stages of soybean. Notably, among the three aglycones, the best described are DI and GN, which show anti-disease properties (Choi and Kim, 2013; He et al., 2015; Kaur and Badhan, 2015). The observed increase in DI and GN in soybean sprouts could be related to the activation of endogenous β-glucosidase activity (EC 3.2.1.21) either during soaking or germination (Chiarello et al., 2006), resulting in potent hydrolysis of their substrates, the β-glycoside IF, to their aglycone forms. Germination is a complex metabolic process leading to radical changes in primary and secondary metabolism, resulting in the production of various biologically active compounds (such as lecithin, phytosterols, saponins, and estrogenic, phenolic and antioxidant compounds), which can be used directly or indirectly for plant growth and survival (Bau et al., 2000; Kuo et al., 2004). In this regard, a recent comparative study between high- and low-IF soybean cultivars also revealed differential expression of the IF-synthesising phenylpropanoid pathway genes phenylalanine ammonia lyase (PAL), chalcone synthase (CHS), chalcone isomerase (CHI), chalcone reductase (CHR) and isoflavone synthase (IFS), which accounted for the observed differences in their endogenous IF concentrations (Chen et al., 2011). In this context, it is pertinent to mention that, apart from enhancing endogenous IF levels, soybean sprouting has also been reported to significantly improve its nutritional, physicochemical and biological properties (Bau et al., 1997; Bau et al., 2000; Dikshit and Ghadle, 2003; Agrahar-Murugkar and Jha, 2009). Thus, germination at an industrial scale could offer an attractive prospect of meeting food-market expectations, delivering considerably higher IF levels along with complementary nutritional value. Furthermore, the selection of suitable food grains coupled with a germination process could provide a good source of germinated product(s) harbouring enriched bioactive IF for nutritional and health benefits. The present study suggests that improving endogenous IF levels by sprouting at an industrial scale would aid the development of value-added, nutritious and safe feed and food products, which will help ensure better nutritional security. The results of the present investigation could also be extended to other unexplored grains, so that a complete inventory of aglycone IF can be generated, which will further enhance our understanding of IF biosynthesis. A promising method for the industrial-scale production of IF-enriched flours and/or foods with enhanced nutritional benefits, as well as for the purification of IF for use as a phytoestrogen in 'functional foods', can be designed thereof. Our future studies are directed toward these goals.
Fig. 3. A typical chromatographic separation of IF extracts from the investigated Indian grain flours. Representative HPLC chromatograms showing the separation of ethanol-soluble and acid-hydrolysable IF extracts from various Indian grain flours. The individual aglycones in each sample were identified by matching peak retention times (min) with those of the corresponding aglycone standards.

# Total isoflavones (IF) are represented as the sum of daidzein (DI), glycitein (GY) and genistein (GN). Values (mg kg⁻¹) are means ± SDs from three independent experiments (n = 3). * Asterisks indicate significant differences (P ≤ 0.05) compared with the control.

Fig. 4. Effect of sprouting on total and individual aglycone IF components of soybean. A. Cumulative germination (%) measured over a test period of 3 d following 4 h of priming in water. Means ± SDs, n = 3, with 50 seeds per measurement. The inset depicts representative images of sprout formation over time during the assay period. B. Representative HPLC chromatograms showing the separation of ethanol-soluble and acid-hydrolysable IF extracts from mature seed- (left panel) and sprout-flours (right panel) at 3 days of germination. The individual aglycones in each sample were identified by matching peak retention times (min) with those of the corresponding aglycone standards. The inset shows the soybean flours prepared from the seeds and sprouts at 3 days of germination.

Table 2. Detection and quantification of total and individual aglycone IF contents from staple Indian grain flours. n.d., not detected.
2019-04-03T13:09:05.571Z
2018-12-21T00:00:00.000
{ "year": 2018, "sha1": "a3a1a5f3c83b16abdef49f0dc463ef88759010ff", "oa_license": "CCBY", "oa_url": "https://www.notulaebiologicae.ro/index.php/nsb/article/download/10331/8990", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a3a1a5f3c83b16abdef49f0dc463ef88759010ff", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
4978733
pes2o/s2orc
v3-fos-license
Patient Selection for Transarterial Chemoembolization in Hepatocellular Carcinoma: Importance of Benefit/Risk Assessment Background: Liver cancer is the second most common cause of cancer-related death, with hepatocellular carcinoma (HCC) accounting for most primary liver cancers and most commonly arising from a history of advanced chronic liver disease. Among the available therapies, transarterial chemoembolization (TACE) is the most widely utilized and is considered the first-line treatment recommended for patients staged as intermediate HCC (Barcelona Clinic Liver Cancer stage B). If applied correctly, TACE can produce survival benefits without adversely affecting hepatic functional reserve. Summary: The aim of this nonsystematic review is to evaluate the evidence supporting TACE, with a special interest in intermediate HCC, for which this treatment is recommended in first line. However, intermediate HCC represents a broad and heterogeneous group of patients, not all of whom will benefit from TACE. This review highlights the importance of appropriate patient selection for initial TACE and for retreatment. It also evaluates evidence for the treatment of patients who become refractory to TACE. Some patients may, in fact, benefit from early switch (i.e., after 1 or 2 TACE treatments) to systemic therapies rather than continuing retreatments with TACE in order to preserve liver function, thus allowing sequential first- and second-line drug therapies. Key Messages: Careful assessment of an individual patient's benefit/risk ratio is recommended before any TACE session is considered to ensure optimal long-term outcomes in intermediate HCC. Introduction Primary liver cancer is currently the second most common cause of cancer-related death worldwide [1], and hepatocellular carcinoma (HCC) accounts for more than 90% of primary liver cancers [2], making it a key therapeutic target. The Barcelona Clinic Liver Cancer (BCLC) staging system [3] is widely applied for tumor characterization and to evaluate key factors influencing long-term prognosis. The BCLC system can, therefore, help facilitate appropriate patient selection for specific therapeutic interventions [2,4,5]. Nevertheless, the management of patients with HCC remains challenging. It is often complicated by the heterogeneity of the disease, the presence of underlying advanced liver disorders, and the need to coordinate a multidisciplinary health-care team comprising hepatologists, diagnostic and interventional radiologists, transplant surgeons, pathologists, and medical and surgical oncologists [5]. Transarterial chemoembolization (TACE), a locoregional therapy (LRT), is widely recommended as first-line treatment for intermediate-stage HCC (BCLC stage B) [2,4]. Surgical resection, percutaneous ablation, and liver transplantation are also occasionally applied in highly selected BCLC stage B patients. The oral multikinase inhibitor sorafenib is the current standard systemic therapy for advanced HCC (BCLC stage C) or for those tumors progressing on LRT and is therefore an additional option for intermediate-HCC stage patients as first-line systemic treatment [2,4]. Recently, another multikinase inhibitor, regorafenib, was approved as second-line treatment for patients with HCC who had radiological progression under sorafenib, providing improved overall survival compared with placebo (hazard ratio [HR] 0.63; 95% confidence interval [CI] 0.50-0.79; p < 0.0001) with a median 2.8-month survival benefit [6,7]. 
Correct patient selection for treatment within BCLC stage B is therefore crucial to maximize response and survival, but this is not a trivial process, as choices in real-world settings may not match evidence-based recommendations [8]. Objective The aim of this article is to review the efficacy, safety, and limitations of TACE in HCC, with a specific focus on the importance of appropriate patient selection, including the identification of patients whose disease becomes refractory to repeated treatments with TACE, and the potentially detrimental consequences of inappropriate TACE application on liver function and long-term clinical outcomes. In addition, we discuss the recommended administration and timing of systemic therapies as an alternative to or sequential to TACE. Benefits of TACE in Intermediate HCC TACE is the most widely used treatment for unresectable HCC that, if applied correctly, can produce survival benefits and favorable response without adversely affecting hepatic functional reserve [7]. In brief, conventional TACE is performed through the injection of chemotherapy mixed with Lipiodol (ethiodized oil), followed by the obstruction of a preselected hepatic artery branch that feeds the tumor. As HCC derives up to the totality of its blood supply from the hepatic artery, differently from the non-tumor liver parenchyma, occlusion primarily results in ischemic necrosis and slows tumor progression [9]. Further advances in TACE techniques include the introduction of microcatheterization of peripheral arterial branches with the aim of improving therapeutic selectivity, balloonoccluded TACE to increase therapeutic effect, and the use of drug-eluting beads (DEB-TACE) to improve drug delivery. Other avenues being explored include combination therapy with TACE and other treatments, and immune therapy. Several reviews have described these advances in detail [9,10]. Based on the increased heterogeneity of the TACE approach, a clearer algorithm for patient selection would be of great importance. Several trials have compared TACE with conservative management or suboptimal therapies (such as chemotherapy with 5-fluorouracil or oral tamoxifen) in HCC (see online suppl. Table 1; for all online suppl. material, see www.karger.com/doi/10.1159/000485471) [11][12][13][14][15][16][17][18][19]. Although the findings from these trials were rather heterogeneous, meta-analyses have confirmed an overall survival benefit of TACE [20,21]. One meta-analysis of randomized controlled trials assessed the survival benefit of arterial embolization/chemoembolization in 6 trials reporting 2-year survival and 1 trial reporting 1-year survival in a total of 545 patients [20]; 2-year survival was 41% (range 19-63%) in the treatment group and 27% (range 11-50%) in the control group. The odds ratio for 2-year survival favored chemoembolization (odds ratio = 0.53; 95% CI 0.32-0.89; p = 0.017). The treatment-induced objective response (complete or partial response lasting 1-6 months) was 35% (range 16-61%). Based on the inclusion criteria of these trials and the outcomes, the authors concluded that patients with well-preserved liver function and multinodular HCC without vascular invasion were the best target population for TACE. However, the treatment effect was modest, the ranges for survival and objective response (a predictor of survival) were large, and not all patients responded to therapy. Furthermore, the trials were not designed for patient selection. 
A more recent systematic review with data from more than 10,000 patients with HCC undergoing TACE found that the objective response was 52.5%, while overall survival was 70.3% at 1 year, 51.8% at 2 years, 40.4% at 3 years, and 32.4% at 5 years [21]. These findings are in line with those reported previously. However, a Cochrane analysis of 6 trials found no survival benefit of TACE over control, emphasizing the need for more adequately powered trials [22]. This meta-analysis was criticized primarily for not focusing on trials that enrolled the correct profile for TACE (i.e., BCLC stage B patients with compensated liver disease) [23]. This highlights the controversy surrounding patient recruitment for TACE, but also that the use of TACE in intermediate HCC should not be automatic [24].

Guideline Recommendations
Based on currently available evidence, international guidelines and consensus working groups have published general guidance on the use of TACE; recommendations for the use of TACE in intermediate HCC are listed in Table 1 [2, 4, 25-29]. In general, TACE is regarded as the standard of care for patients with intermediate HCC (BCLC stage B) who have well-preserved liver function and large or multinodular HCC without portal vein tumor thrombosis or extrahepatic metastasis [2, 4, 25]. The guidelines estimate that ∼20% of all HCC patients are the target population for TACE, and the median overall survival in patients who receive TACE is 20 months (range 14-45 months) [2]. TACE achieves partial responses in 15-55% of patients and delays tumor progression and macrovascular invasion [2]. Some expert centers apply stricter patient selection for TACE [30]. When only very fit intermediate-stage HCC patients with perfectly preserved liver function undergo TACE, expected survival may be up to 48 months [30], but stricter selection also implies that fewer patients receive LRTs, particularly borderline-compensated patients. Some or most of these patients may be candidates for systemic treatment, as they are unfit for resective or locoregional therapies that affect liver function.

Considerations between Western and Asian Populations
There are several epidemiological differences in HCC across geographical regions, as well as differences in genetic mutations, especially between Western and Asian populations [31, 32]. The incidence of HCC is considerably higher in Asian countries such as China [34]. In Japan, where HCC primarily arises from HCV infection, there is a nationwide surveillance program, so patients are diagnosed at an earlier stage of disease [31]. Moreover, Japanese patients tend to be older when diagnosed. The mean age at diagnosis of Japanese patients has increased from approximately 60 to 70 years over the past 30 years, in part because of the increased life expectancy of the Japanese population [35]. These differences in characteristics between Asian and Western patients, and between Asian patients from different countries (e.g., China vs. Japan), may affect the selection of treatment strategies. There are few randomized controlled trials evaluating TACE in Asian populations. In one randomized controlled trial [15] in Asian patients with unresectable HCC, actuarial survival was significantly improved with TACE compared with symptomatic treatment (57% vs. 32% at year 1, 31% vs. 11% at year 2, and 26% vs. 3% at year 3; p = 0.002).
In 3 nonrandomized trials (in China and South Korea), TACE significantly improved overall survival compared with conservative management at year 1 (see online suppl. Table 1) [17][18][19]. The majority of patients in these trials had hepatitis B, and patients with portal vein thrombosis were included in 2 trials. In a noncontrolled trial in 8,510 patients with HCC from Japan, overall survival with initial TACE was 26% at year 5. However, this trial included patients with both early-and intermediate-stage HCC [36]. A number of Asian-based societies have produced treatment guidelines similar to those published by societies from the United States and Europe (Table 1). However, the Japan Society of Hepatology recommends that TACE be considered also in patients with Child-Pugh A and vascular invasion [27], and the Asian Pacific Association for the Study of the Liver (APASL) guidelines recommend TACE as a treatment option regardless of whether patients have macrovascular invasion [26], which is a major difference from US/European guidelines. Consequently, patients with HCC and macrovascular invasion often receive TACE in Asian countries. There are also differences between Asian and US/European approaches to HCC staging, with some staging systems deemed more appropriate for Asian versus non-Asian populations [31]. For example, the HKLC staging system appears to be more suitable for Asian patients than the BCLC system. Based on an analysis of data from 3,856 patients with HCC, the ability to predict overall survival was significantly greater with the HKLC than the BCLC classification [34]. Notably, the HKLC system identified subsets of BCLC intermediate-and advancedstage patients who would benefit from more aggressive treatment, and the 5-year survival benefit of radical treatments was substantial compared with TACE in BCLC stage B (HKLC-II) patients (52.1% vs. 18.7%; p < 0.0001). A similar benefit of the HKLC system over the BCLC system was observed in a trial of 668 patients with HCC from China [37]. Both trials acknowledged that differences in etiology (e.g., hepatitis B onset) between Asian and European patients may account for this. Validation of the HKLC system in non-Asian cohorts was not successful in a single study [38]; further studies are warranted. Complications and Contraindications of TACE In the systematic literature review described previously [21], 21,461 adverse events (including complications and toxicities) were reported in 15,351 patients undergoing 27,497 TACE treatments. The most common adverse events were related to the postembolization syndrome, and included liver enzyme abnormalities (18.1%), fever (17.2%), abdominal pain (11.0%), vomiting (6.0%), and nausea (1.7%). Recently, the incidence of postembolization syndrome was shown to be reduced by a short course of steroids [39], but external validation and effects on oncological outcomes in larger populations with longer follow-up are required before such prophylaxis can be fully endorsed [40]. Hematological/bone marrow toxicity occurred in 13.5% of patients. A total of 214 deaths were reported in 34,137 patients, for an overall mortality rate of 0.6%. The most common cause of death was related to acute liver insufficiency. Hence, current treatment-related death is estimated to be less than 1% in patients with HCC. The major contraindications for TACE in the recent guidelines are listed in Table 1. 
These include decompensated cirrhosis (Child-Pugh B ≥8, including jaundice, clinical encephalopathy, refractory ascites, and hepatorenal syndrome), and severely reduced portal vein blood flow (acute and chronic portal vein thrombosis or hepatofugal blood flow) [25,41]. Other absolute contraindications include extensive tumor involving the entirety of both liver lobes; technical contraindications, such as untreatable arteriovenous fistula; renal insufficiency, including creatinine ≥2 mg/dL or creatinine clearance <30 mL/min; and bilioenteric anastomosis or biliary stents [25]. Relative contraindications include factors related to liver cirrhosis (untreated varices at high risk of bleeding), tumor size (≥10 cm), severe comorbidities (acute cardiovascular or lung disease), and bile duct occlusion [41]. Of note, European guidelines list macroscopic vascular invasion as a contraindication; however, the APASL guidelines note that Asian patients with macroscopic invasion (but no extrahepatic metastasis) are often treated with TACE, despite limited scientific evidence supporting this practice [26]. Similarly, the HKLC treatment algorithm states that patients with extrahepatic vascular invasion are unsuitable for TACE [34]. The Japan Society of Hepatology guidelines state that TACE may be considered for patients with Child-Pugh A and vascular invasion, but chemotherapy is recommended [27]. Addressing Limitations of TACE: Patient Selection and Prognosis Guidelines acknowledge that the limitations of TACE are primarily due to the heterogeneity within the population and the difficulty in extracting evidence from the literature [2,25]. Available data are also confounded by a wide range of treatment approaches, including differences in emulsifying agents and heterogeneity with respect to outcome prediction and degree of selectivity in treatment delivery [2,5]. Thus, because of these factors, not all intermediate HCC patients will derive similar benefit from or are suitable candidates for TACE. Intermediate-stage HCC patients present with varying tumor burdens, liver function, and disease etiology, and some patients may benefit from alternative treatment options [42]. An overview of the key factors for patient selection in TACE is shown in Figure 1 [3,27,34,[43][44][45][46][47]. Staging Systems Staging systems such as the Okuda [48], BCLC, Child-Pugh, HKLC, Japan Tumor-Node-Metastasis (TNM), and the Cancer of the Liver Italian Program (CLIP) [49,50] have been widely applied and validated in numerous trials for prognosis prediction. There have also been attempts to combine systems; for example, the Japan Integrated Staging (JIS) score combines Child-Pugh and TNM. This approach was found to be more prognostic compared with CLIP; 10-year survival rates were 23% (CLIP score 0 group) and 65% (JIS score 0 group) (p < 0.01) [51]. Very few staging systems connect stages with both prognosis and treatment allocation. As there are variations between different systems, staging tailored to specific etiologies may produce more accurate treatment strategies and survival predictions [52]. However, some systems are not consistent across all stages; for example, CLIP was more discriminating in early (score 0-3) than later (score 4-6) disease stages [51]. Survival was also found to be better when patients received recommended treatment in stages HKLC I, IIa/b, IIIa, or Va than in stages IIIb, IVa/b, or Vb, and when patients received recommended treatment in BCLC stages 0 or A, but not in stages B, C, or D. 
In addition, neither system (HKLC or BCLC) could direct therapy for a large group of patients [53]. The heterogeneity of patients with intermediate HCC (BCLC stage B) means that not all patients may benefit from TACE [7]. This has led to the division of BCLC stage B into 4 subclassifications (B1, B2, B3, and B4), based on tumor burden and liver function, and representing increasing severity of disease. In this scenario, TACE is proposed as first-line therapy for patients who are categorized as B1 or B2, and as a potential option in the B3 subgroup, but not for those in the B4 subgroup [54]. A modification of this subclassification (Kinki criteria) into 3 groups (B1, B2, and B3) has been proposed by Japanese investigators [55].

Other Selection Criteria
TACE is the theoretical first-line therapy for intermediate HCC patients, but in practice it may not be the most appropriate therapy in all patients. Other selection criteria for treatment in intermediate HCC include simplifications of earlier systems, such as the Chiba HCC in intermediate-stage prognostic (CHIP) score, which is specific to patients with intermediate HCC undergoing TACE. This score was derived from a dataset from Chiba University Hospital and focuses on the most prognostic factors, namely the number of lesions (scored 0, 2, or 3), liver function categorized according to the Child-Pugh system (scored 0-3), and HCV-RNA positivity (scored 0 or 1); it was validated using an independent cohort from another hospital [45]. The generated CHIP scores were then differentiated into 5 groups (0-2, 3, 4, 5, and 6-7 points) by median survival times. This approach could be used on a larger scale to stratify responders in more detail in randomized controlled trials and to simplify the scoring approach. Another validated prognostic scoring system is the hepatoma arterial-embolization prognostic (HAP) score, which is based on 4 factors that were found to be significant predictors of overall survival: albumin level, bilirubin level, α-fetoprotein, and tumor size. A HAP score is the sum of the points allocated to each factor, and patients are then classified into risk groups. The HAP score may be used to predict outcomes in patients being considered for TACE and guide treatment selection [44] (a schematic calculation is sketched below). A recent prospective trial reported that lower serum albumin and increased tumor burden (larger tumor size/more nodules and higher α-fetoprotein) at baseline may help predict hepatic decompensation in HCC patients following their first TACE treatment [56]. Similarly, recent evidence suggests that a first TACE is more effective in HCC patients with nodules <5 cm, whereas those with nodules >5 cm had poorer response rates and poorer outcomes [57]. Interrelated with the assessment of individual patient characteristics, center-specific technical factors should also be considered when deciding if TACE is an appropriate therapy. These factors include the skill and experience of the radiologist, as well as the technique and materials (e.g., catheter size) available. Ongoing genomic and proteomic trials may also be useful for patient selection according to their molecular profile [32]; however, markers from such trials are not yet available. Other arterially directed therapies to consider include transarterial bland embolization, DEB-TACE, and transarterial radioembolization (TARE) with yttrium-90 microspheres [5].
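As a concrete illustration of how such point-based selection scores work, the sketch below computes a HAP-style score. The specific cut-offs (albumin <36 g/L, bilirubin >17 µmol/L, α-fetoprotein >400 ng/mL, dominant tumor size >7 cm) and the risk-group mapping are assumptions to verify against the original publication [44]; they are not values established in this review, and the example patient is hypothetical.

```python
def hap_score(albumin_g_per_l, bilirubin_umol_per_l, afp_ng_per_ml, max_tumor_cm):
    """HAP-style score: one point per adverse factor (cut-offs are assumptions)."""
    points = 0
    points += albumin_g_per_l < 36       # low albumin
    points += bilirubin_umol_per_l > 17  # raised bilirubin
    points += afp_ng_per_ml > 400        # high alpha-fetoprotein
    points += max_tumor_cm > 7           # large dominant tumor
    return points

def hap_risk_group(points):
    """Map the summed points to risk groups A-D (assumed mapping)."""
    return {0: "A (low risk)", 1: "B (intermediate risk)", 2: "C (high risk)"}.get(
        points, "D (very high risk)")

# Hypothetical patient being considered for a first TACE session.
score = hap_score(albumin_g_per_l=34, bilirubin_umol_per_l=20,
                  afp_ng_per_ml=120, max_tumor_cm=5.5)
print(score, hap_risk_group(score))   # -> 2 C (high risk)
```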
Retreatment For patients whose disease is unresponsive or refractory to TACE, the main considerations for potential retreatment with TACE include a reassessment of the expected post-TACE survival outcomes versus risks. This decision may be guided by retreatment algorithms, such as the Assessment for Retreatment with TACE (ART) [46,47], and the α-fetoprotein, BCLC, Child-Pugh and Response (ABCR) [58] scores (Fig. 1). The ART score was developed based on a retrospective analysis in 222 patients with HCC (BCLC stage A or B and Child-Pugh A or B) treated at 2 Austrian centers who had received at least 2 sessions of TACE within 90 days. Using multivariate analyses, aspartate aminotransferase level increase of >25% (absent vs. present), Child-Pugh score increase (absent vs. +1 point vs. + ≥2 points), and radiological tumor response (absent vs. present) were found to be prognostic factors for overall survival. An ART score was determined based on these factors, and patients with a score of ≥2.5 points before their second TACE session were found to have a shorter overall survival and a higher incidence of adverse events, so were considered to be unlikely to gain further benefit from further TACE [46]. The ABCR score uses 4 predictors of overall survival: α-fetoprotein <200 vs. ≥200 ng/mL) at baseline, BCLC (A vs. B vs. C) at baseline, Child-Pugh score increase (absent vs. +1 point vs. + ≥2 points), and tumor response (absent vs. present) to determine a score ranging from -3 to +6. The scoring system was developed using a multivariate analysis of a population of 133 patients with alcohol-or viral-induced HCC, and validated in 2 other cohorts of 78 and 100 patients. The ABCR score was calculated immediately before the second TACE session. A higher ABCR score was predictive of a poorer prognosis, and it was suggested that patients with a score ≥4 would not benefit from further TACE treatment [58]. However, the predictive value of both of these scoring systems has been questioned. Several studies have reported that the ART score was not a useful tool for guiding decisions on TACE retreatment. In an Italian retrospective analysis in 51 patients with HCC (BCLC stage A or B and Child-Pugh A or B), the ART score was not a significant predictor of survival [59]. A key difference between this study and the original ART score study was that patients may have had longer than 90 days between their first and second TACE, based on the clinical decision of the center. Furthermore, in a retrospective analysis of 627 Japanese patients with HCC (most with Child-Pugh A and BCLC stage B) who had received 2 or more TACE sessions, the ART score was found to be unsuitable for most patients, as only 12% had received their second TACE within 90 days. For these patients, the ART score did not predict overall survival [47]. This finding was also reported in a smaller Japanese retrospective study, where less than 10% of patients had their second TACE session within 90 days of the first, and the ART score did not predict outcomes in these patients [60]. These findings underscore regional and national differences in the approach to the use of TACE, as on-demand TACE is more common in Japan compared with Europe and the United States [47], and differences in TACE procedures, therapies administered after TACE, and timing between TACE sessions may affect the results when evaluating outcomes [60]. 
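To make the mechanics of such retreatment scores concrete, the sketch below computes an ART-style score before a hypothetical second TACE session. The point weights (4 for an AST rise >25%, 1.5 or 3 for a Child-Pugh increase of 1 or ≥2 points, and 1 for absent radiological response) are recalled from the original publication [46] and should be treated as assumptions to verify there; the ≥2.5-point cut-off is the one quoted above. As the preceding paragraphs make clear, the score's predictive value is disputed, so this is a computational illustration, not a treatment recommendation.

```python
def art_score(ast_increase_over_25pct, child_pugh_increase, radiologic_response):
    """ART-style score before a 2nd TACE session (point weights are assumptions)."""
    score = 0.0
    if ast_increase_over_25pct:          # AST rise >25% after the first TACE
        score += 4.0
    if child_pugh_increase >= 2:         # Child-Pugh worsened by >= 2 points
        score += 3.0
    elif child_pugh_increase == 1:       # Child-Pugh worsened by 1 point
        score += 1.5
    if not radiologic_response:          # no radiological tumor response
        score += 1.0
    return score

def retreat_with_tace(score, threshold=2.5):
    """Patients at or above the threshold are unlikely to benefit from further TACE."""
    return score < threshold

# Hypothetical patient: stable AST, Child-Pugh up by 1 point, no radiological response.
s = art_score(ast_increase_over_25pct=False, child_pugh_increase=1, radiologic_response=False)
print(s, "-> consider retreatment" if retreat_with_tace(s) else "-> retreatment unlikely to help")
```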
An external validation of both ART and ABCR in 176 patients with HCC (BCLC stage A or B and Child-Pugh A or B) who had received at least 2 TACE sessions found that while patients with higher scores had poorer prognoses, neither score had sufficient predictive ability to aid in clinical decision-making regarding subsequent TACE sessions [61]. Interestingly, in an international study in 83 patients from the UK and Italy and 660 patients from Korea and Japan with HCC (BCLC stage A or B), both scoring systems were found to be independently predictive of survival, and sequential use of the HAP score to screen patients for initial TACE and the ART score to determine the value of TACE retreatment was proposed [62]. It is important to note that ART and ABCR scores are not included in current treatment guidelines, so should be considered exploratory. Moreover, they do not measure responsiveness to TACE; rather, they are dynamic prognostic scores to measure treatment-related survival benefit based on radiological response, tumor markers, and hepatic function, and have not been universally validated. Systemic Therapy Repeated TACE is associated with increased adverse events (e.g., liver dysfunction) and diminishing efficacy [41,43], which suggests that options other than retreatment should be considered. A few patients who achieved partial response could benefit from the addition of more aggressive therapies (such as radiofrequency ablation), which initially would have been contraindicated, to attempt to elicit a complete response. However, most patients tend not to achieve a satisfactory objective response, but progress or recur early after treatment. Unfortunately, time to TACE progression (TTTP) was recently shown to be a surrogate endpoint for overall survival, i.e. a short TTTP corresponds to a short overall survival with TACE. Therefore, alternative treatments, such as sorafenib, should be considered early in patients whose disease is refractory to TACE and/or with a short TTTP [63]. In fact, intermediate HCC patients whose disease was refractory to TACE who received sorafenib experienced an increase in survival compared with those who continued TACE in 2 retrospective trials from Japan. The median survival was 25.4 versus 11.5 months (p = 0.003) (first trial) [64] and 24.7 versus 13.6 months (p = 0.002) (second trial) [65]. Based on the beneficial effects of sorafenib on survival in patients with advanced HCC, several studies have investigated the efficacy and safety of combinations of TACE and sorafenib. In a meta-analysis of 17 studies evaluating TACE plus sorafenib combination therapy in patients with unresectable HCC, most of whom had Child-Pugh class A or B disease severity and BCLC stage B or C, the HR for time to progression (TTP) was 0.76 (95% CI 0.66-0.89; p < 0.001), suggesting that TACE plus sorafenib may improve TTP compared with TACE alone. However, the HR for overall survival was 0.81 (95% CI 0.65-1.01; p = 0.061), suggesting that the addition of sorafenib to TACE may not improve overall survival compared with TACE alone [66]. In a smaller meta-analysis of 6 studies in patients with intermediate or advanced HCC, most of whom had Child-Pugh class A disease, the pooled HR for TTP was 0.68 (95% CI 0.52-0.88; p = 0.003), and for overall survival was 0.65 (95% CI 0.47-0.89; p = 0.007), indicating beneficial effects of TACE combined with sorafenib, although the incidence of grade 3 or 4 adverse events was also higher with the combination [67]. 
A recent European, randomized, double-blind, phase 3 trial (TACE 2) in 313 patients with unresectable HCC (Child-Pugh A) comparing sorafenib or placebo, both in combination with TACE, was stopped early because there was no difference in progression-free survival between groups. Median progressionfree survival was 238 days (95% CI 221-281) in the sorafenib plus TACE group compared with 235 days (95% CI 209-322) in the placebo plus TACE group (HR 0.99 [95% CI 0.77-1.27], p = 0.94). There was also no significant difference between treatment groups in TTP or overall survival [68]. Similar findings were reported for the international, phase 2, randomized, placebo-controlled SPACE trial of sorafenib plus TACE in 307 patients with intermediate HCC, with no significant difference in TTP between the combination group and the TACE-only group [69]. Overall, the evidence suggests that combination treatment is not beneficial, suggesting that a sequential approach may be preferred, i.e. use of TACE early, followed by systemic therapy before the onset of liver dysfunction. Some intermediate HCC patients who are particularly fit and with optimal liver function may benefit from more aggressive treatments. In a retrospective analysis of 485 patients with intermediate HCC (BCLC stage B), treatment distribution was TACE (51.1%), curative treatments (31.8%), sorafenib (3.9%), best supportive care (4.6%), and other treatments (8.5%) [70]. The median survival was 45 months for curative treatments, 30 months for TACE, 14 months for sorafenib, and 10 months for best supportive care. Although it is difficult to make direct comparisons due to differences in patient numbers, characteristics, and other prognostic factors, these findings indicate that there is a role for treatments other than TACE as initial therapy for BCLC stage B, although TACE and systemic therapies must be promptly adopted at the time of progression or recurrence after more aggressive approaches. Recent data from the RESORCE trial showed that patients receiving regorafenib as second-line therapy after sorafenib failure in advanced HCC had a survival benefit of ∼2.8 months compared with those receiving placebo. Median survival was 10.6 months in the regorafenib group compared with 7.8 months in the placebo group, with an HR of 0.63 (p < 0.0001) [6,7]. Thus, other systemic therapies may become available with the potential to extend survival, provided that patients start systemic therapy after TACE failure when they are still compensated and fit [71]. With increasing treatment options now available beyond the singular use of TACE, health-care professionals should be encouraged to develop, in close collaboration with each patient, positive long-term individualized treatment plans focused on delaying disease recurrence and prolonging survival. Impact of TACE on Liver Function One of the key considerations for any treatment strategy in HCC is to preserve liver function as much as possible. Several trials have observed acute liver dysfunction in patients treated with TACE, especially with less selective or repeated TACE procedures [64,65,72]. 
In one retrospective trial [64], the median time to liver dysfunction in patients with refractory disease who continued to receive TACE was significantly shorter than in those who switched to sorafenib. In another retrospective trial [65], repeated TACE in patients with refractory disease was associated with a greater increase in Child-Pugh score compared with that observed in patients who switched to sorafenib, indicating that repeated TACE may lead to deterioration in liver function in patients with refractory disease. These data support a timely switch to sorafenib therapy to prevent deterioration of liver function through inappropriate TACE use; this is critical for safe follow-on treatment with sorafenib. Liver dysfunction with sorafenib has also occasionally been reported, although the incidence is not significantly higher than with placebo [73], and it is usually reversible with drug discontinuation, in contrast to that emerging after TACE. With the recent approval of regorafenib, it will be increasingly important to monitor treatment critically, so that when a patient's disease becomes refractory to TACE, the switch to sorafenib-regorafenib sequential therapy is performed in a timely manner [7].

Conclusions
TACE is recommended as first-line therapy in intermediate HCC (BCLC stage B) and, if applied correctly, both in terms of technical performance and patient selection, can produce survival benefits without adversely affecting hepatic functional reserve. However, the heterogeneity of patients with intermediate HCC means that not all of these patients may benefit from first-line therapy with TACE. TACE is often used in a broader population than is recommended by current guidelines, but the use of TACE should not be determined solely by the technical feasibility of the procedure. A more tailored approach to patient selection for TACE may improve outcomes. The use of validated staging systems, identification of key prognostic factors, and consideration of patient characteristics, treatment benefit/risk profile, and limitations will help to balance the potential survival benefit against the potential risk of adverse events. Some patients with intermediate HCC may benefit from a more aggressive initial approach, such as curative resection and/or ablation, particularly those with preserved liver function (Child-Pugh A, no portal hypertension, and BCLC stage B with limited tumor bulk). The "up-to-7" rule recommends transplantation for patients with BCLC stage B HCC if the sum of the size of the largest tumor (in centimeters) and the total number of tumors is ≤7 [74]. For patients with intermediate HCC (Child-Pugh A) within the "up-to-7" classification who are not candidates for resection plus ablation, or with clinically compensated liver function (BCLC score B7), TACE is still the standard of care. To further ensure the success of TACE in these patients, a stricter definition of tumor bulk, beyond the "up-to-7" rule, may be valuable. For patients with tumor bulk beyond the "up-to-7" classification and very well preserved liver function (Child-Pugh A5 or A6), the extent of tumor burden should be carefully evaluated. TARE may be recommended for patients beyond the "up-to-7" threshold with greater tumor volumes (i.e., largest tumors greater than 5-6 cm) and limited tumor numbers.
If patients in this category are not suitable for TARE or TARE is not available, TACE may be an option (especially when tumor bulk is beyond the "up-to-7" threshold, but not excessively large). However, evidence of survival benefit is less clear in this setting, as there is the risk of not preserving liver function while attempting to achieve complete response. Consequently, initial systemic therapy is an alternative option to be considered. Patients whose disease is refractory to TACE or ineligible for LRT may benefit from systemic therapies, such as sorafenib followed by regorafenib in cases of radiological progression, or sequential therapy with TACE followed by sorafenib, although the efficacy of these combinations has yet to be confirmed in rigorous clinical trials. Further investigation of biomarkers that provide an objective means of patient selection for TACE, based on the likelihood that they will benefit from treatment, will provide much-needed guidance for clinicians. In conclusion, careful assessment of an individual patient's benefit/risk ratio is recommended before any TACE session is considered. It is important to select the right treatment for the right patient at the right time to ensure optimal long-term outcomes in intermediate HCC.
2018-04-27T03:28:06.298Z
2018-01-12T00:00:00.000
{ "year": 2018, "sha1": "09bd9a221c071e16cf927bba163588b75ecf865f", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/485471", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "09bd9a221c071e16cf927bba163588b75ecf865f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236486260
pes2o/s2orc
v3-fos-license
Guideline Bias in Wizard-of-Oz Dialogues

NLP models struggle with generalization due to sampling and annotator bias. This paper focuses on a different kind of bias that has received very little attention: guideline bias, i.e., the bias introduced by how our annotator guidelines are formulated. We examine two recently introduced dialogue datasets, CCPE-M and Taskmaster-1, both collected by trained assistants in a Wizard-of-Oz set-up. For CCPE-M, we show how a simple lexical bias for the word like in the guidelines biases the data collection. This bias, in effect, leads to poor performance on data without this bias: a preference elicitation architecture based on BERT suffers a 5.3% absolute drop in performance when like is replaced with a synonymous phrase, and a 13.2% drop in performance when evaluated on out-of-sample data. For Taskmaster-1, we show how the order in which instructions are presented biases the data collection.

Introduction
Sample bias is a well-known problem in NLP - discussed from Marcus (1982) to Barrett et al. (2019) - and annotator bias has been discussed as far back as Ratnaparkhi (1996). This paper focuses on a different kind of bias that has received very little attention: guideline bias, i.e., the bias introduced by how our annotator guidelines are formulated. Annotation guidelines are used to train annotators, and guidelines are therefore in some sense intended to and designed to prime annotators. What we refer to as guideline bias is rather the unintended biases that result from how guidelines are formulated and from the examples used in those guidelines. If a treebank annotation guideline focuses overly on parasitic gap constructions, for example, inter-annotator agreement may be higher on those, and annotators may be biased to annotate similar phenomena by analogy with parasitic gaps. We focus on two recently introduced datasets, the Coached Conversational Preference Elicitation corpus (CCPE-M) from Radlinski et al. (2019), related to the task of conversational recommendation (Christakopoulou et al., 2016; Li et al., 2018), and Taskmaster-1, which is a multipurpose, multi-domain dialogue dataset. CCPE-M consists of conversations about movie preferences, and the part of Taskmaster-1 we focus on here consists of conversations about theatre ticket reservations. Both corpora were collected by having a team of assistants interact with users in a Wizard-of-Oz (WoZ) set-up, i.e., a human plays the role of a digital assistant that engages a user in a conversation about their movie preferences. The assistants were given a set of guidelines in advance, as part of their training, and it is these guidelines that induce biases. In CCPE-M, it is the overwhelming use of the verb like (see Figure 5) and its trickle-down effects that we focus on; in Taskmaster-1, the order of the instructions. In fact, the CCPE-M guidelines consist of 324 words, of which 20 (6%) are inflections or derivations of the lemma like. As shown in Figure 5 in the Appendix, more than 50% of the sentences in the guidelines include forms of like! This very strong bias in the guidelines has a clear downstream effect on the assistants that are collecting the data. In their first dialogue turn, the assistants use the word like in 72% of the dialogues.
This again biases the users responding to the assistants in the WoZ set-up: In 58% of their first turns, given that the assistant uses a form of the word like, they also use the verb like. We show that this bias leads to overly optimistic estimates of performance. Additionally, we also demonstrate how the guideline affects the user responses through a controlled priming experiment. For Taskmaster-1, we show a similar effect of the guidelines on the collected dialogues. Contributions We introduce the notion of guideline bias and present a detailed analysis of guideline bias in two recently introduced dialogue corpora (CCPE-M and Taskmaster-1). Our main experiments focus on CCPE-M: We show how a simple bias toward the verb like easily leads us to overestimate performance in the wild by showing performance drops on semantically innocent perturbations of the test data, as well as on a new sample of movie preference elicitations that we collected from Reddit for the purpose of this paper. We also show that debiasing the data, improves performance. The CCPE-M provides a very clear example of guideline bias, but other examples can be found, e.g., in Taskmaster-1, which we discuss in §3. We discuss more examples in §4. Bias in CCPE-M We first examine the CCPE-M dataset of spoken dialogues about movie preferences. The dialogues in CCPE-M are generated in a Wizard-of-Oz set-up, where the assistants type their input, which is then translated into speech using text-to-speech technologies, at which point users respond by speech. The dialogues were transcribed and annotated by the authors of Radlinski et al. (2019). Sentence classification We frame the CCPE-M movie preference detection problem as a sentencelevel classification task. If a sentence contains a labeled span, we let this label percolate to the sentence level and be a label of the entire sentence. If a sentence contains multiple unique label spans the sentence is assigned the leftmost label. A sentencelevel label should therefore be interpreted as saying in this sentence, the user elicits a movie or genre preference. Our resulting sentence classification dataset contains five different preference labels, including a NONE label. We shuffle the data at the dialogue-level and divide the dialogues into training/development/test splits using a 80/10/10 ratio, ensuring sentences from the same dialogue will not end up in both training and test data. As the assistants utterances rarely express any preferences, we only include the user utterances to balance the number of negative labels. See Table 2 for statistics regarding the label distribution. Perturbations of test data In order to analyse the effects of guideline bias in the CCPE-M dataset, we introduce perturbations of the instances in the test set where like occurs, replacing like with a synonymous word, e.g. love, or paraphrase, e.g. holds dearly. We experiment with four different replacements for like: (i) love, (ii) was incredibly affected by, (iii) have as my all time favorite movie and (iv) am out of this world passionate about. See Figure 2 for an example sentence and its perturbed variants. The perturbations occasionally, but rarely, lead to grammatically incorrect input. 1 We emphasize that even though we increase the length of the sentence, the phrases we replace like with should signal an even stronger statement of preference, which models should be able to pick up on. 
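The test-set perturbation just described can be reproduced with a short script. As the next paragraph notes, only verbal uses of like are replaced, which the sketch below implements with spaCy's POS tagger. This is an illustrative re-implementation, not the authors' released code; it assumes spaCy's small English model and uses one of the four replacement phrases listed above.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # POS tagger used to find verbal "like"

REPLACEMENT = "am out of this world passionate about"  # perturbation (iv) above

def perturb(sentence, replacement=REPLACEMENT):
    """Replace occurrences of the *verb* 'like' with a synonymous phrase."""
    doc = nlp(sentence)
    out = []
    for token in doc:
        if token.lemma_ == "like" and token.pos_ == "VERB":
            out.append(replacement + token.whitespace_)
        else:
            out.append(token.text_with_ws)   # adverbial/prepositional "like" is untouched
    return "".join(out)

print(perturb("I really like scary movies, like The Shining."))
# -> "I really am out of this world passionate about scary movies, like The Shining."
```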
Since our data consists of informal speech, it includes adverbial uses of like; we only replace verb occurrences, relying on SpaCy's POS tagger. 2 We replace 219 instances of the verb like throughout the test set.

Perturbations of train data
We also augment the training data to create a less biased resource. Here we adopt a slightly different strategy, in part so that we can evaluate a model trained on the debiased training data against the perturbed test data described above: we use six paraphrases of the verb like listed in a publicly available thesaurus, 3 none of which overlap with the words used to perturb the test data, and randomly replace verbal like with a probability of 20%. The paraphrases are sampled from a uniform distribution. A total of 401 instances are replaced in the training data using this approach. This is not intended as a solution to guideline bias, but in our experiments below we show that a model trained on this simple, debiased dataset generalizes better to out-of-sample data, showing that the bias toward like was in fact one of the reasons that our baseline classifier performed poorly in this domain.

Reddit movie preference dataset
In addition to the perturbed CCPE-M dataset, we also collect and annotate a challenge dataset from Reddit threads discussing movies for the purpose of preference elicitation. The comments are scraped from Reddit threads with titles such as 'Here's A Simple Question. What's Your Favorite Movie Genre And Why?' or 'What's a movie that you love that everyone else hates?' and mostly consist of top-level comments. These top-level comments typically respond directly to the question posed by the thread, and explicitly state preferences. We also include some random samples from discussion trees that contain no preferences, to balance the label distribution slightly. In this data, we observe the word like, but less frequently: the verb like occurred in 15/211 examples. The data is annotated at the sentence level, as described previously; we follow the methodology described by Radlinski et al. (2019), identify anchor items such as names of movies or series, genres or categories, and then label each sentence according to the preference statements describing said item, if any. The dataset contains roughly 100 comments, which, when divided into individual sentences, result in 211 datapoints. The statistics can be found in the final column of Table 2. We make the data publicly available. 4

Results
We evaluate the performance of two different models on the original and perturbed CCPE-M, as well as on our Reddit data: (i) a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) sentence classifier, trained only on CCPE-M, including the embeddings, and (ii) a fine-tuned BERT sentence classification model (Devlin et al., 2018). For (i), we use two BiLSTM layers (d = 128), randomly initialized embeddings (d = 64), and a dropout rate of 0.5. The model is trained for 45 epochs. For (ii), we use the base, uncased BERT model with the default parameters and fine-tune for 3 epochs. Model selection is conducted based on performance on the development set. Performance is measured using the class-weighted F1 score. We report results in Table 1 on the various perturbation test sets as well as the Reddit data, when (i) the models are trained on the unchanged CCPE-M data, and (ii) the models are trained on the debiased version CCPE-M thesaurus. On the original dataset, BERT performs slightly better than the BiLSTM architecture, but the differences are relatively small.
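For readers who want to reproduce the second model, a minimal fine-tuning sketch is shown below. It is not the authors' implementation: the bert-base-uncased checkpoint and the 3 training epochs follow the text, while the five label names, batch size and learning rate are hypothetical placeholders.

```python
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

# Hypothetical label names; the paper only states there are five labels incl. NONE.
LABELS = ["NONE", "MOVIE_LIKE", "MOVIE_DISLIKE", "GENRE_LIKE", "GENRE_DISLIKE"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)   # lr is an assumption

def finetune(sentences, labels, epochs=3, batch_size=16):
    """Fine-tune the sentence classifier on (sentence, label-id) pairs."""
    model.train()
    for _ in range(epochs):
        for i in range(0, len(sentences), batch_size):
            batch = tokenizer(sentences[i:i + batch_size], padding=True,
                              truncation=True, return_tensors="pt")
            targets = torch.tensor(labels[i:i + batch_size])
            loss = model(**batch, labels=targets).loss   # cross-entropy over 5 classes
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

def predict(sentence):
    """Return the predicted preference label for a single user utterance."""
    model.eval()
    with torch.no_grad():
        logits = model(**tokenizer(sentence, return_tensors="pt")).logits
    return LABELS[int(logits.argmax(dim=-1))]
```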
Both BiLSTM and BERT suffer a drop in performance when examples are perturbed and the word like is replaced with synonymous words or phrases. Note how longer substitutions result in a larger drop in performance, e.g. love vs. am out of this world passionate about. The drops follow the same pattern for both architectures, although the BiLSTM seems somewhat more sensitive to our test perturbations. Both models do even worse on our newly collected Reddit data. Here, we clearly see the sensitivity of the BiLSTM architecture, which suffers a 30% absolute drop in F1; but even BERT suffers a performance drop of more than 13% when evaluated on a new sample of data. When training on CCPE-M thesaurus, both models become more invariant to our perturbations, with improvements of up to 4.5 F1 for the BERT model and 3 F1 for the BiLSTM, without any loss of performance on the original test set. We also observe improvements on our collected Reddit data, suggesting that the initial drop in performance can be partially explained by guideline bias and not only by domain differences.

Controlled priming experiment
To establish the priming effect of guidelines in a more controlled setting, we set up a small crowdsourced experiment. We asked turkers to respond to a hypothetical question about movie preferences. For example, turkers were asked to imagine they are in a situation in which they 'are asked what movies' they 'like', and that they like a specific movie, say Harry Potter. The turker may then respond: I've always liked Harry Potter. We collected 40 user responses for each of the priming verbs like, love and prefer, 120 in total, and for each of the verbs used to prime the turkers, we compute a probability distribution over most of the verbs in the response vocabulary that are likely to be used to describe a general preference towards something. Figure 3 shows the results of the crowdsourced priming experiments. We can observe that when a specific priming word, such as like, is used, there is a significantly higher probability that the response from the user will contain that same word, illustrating that when keywords in guidelines are heavily overrepresented, the collected data will also reflect this bias.

Figure 3: Probability that a verb that describes a preference towards a movie is mentioned, given that a specific priming word is mentioned by the annotator.

Bias in Taskmaster-1
The order in which the goals of the conversation are described to annotators in the guidelines can also bias the order in which these goals are pursued in conversation. Taskmaster-1 contains conversations between a user and an agent where the user seeks to accomplish a goal by, e.g., booking tickets to a movie, which is the domain we focus on. When booking tickets to go see a movie, we can specify the movie title before the theatre, or vice versa, but models may not become robust to such variation if exposed to very biased examples. Unlike CCPE-M, the Taskmaster-1 dataset was (wisely) collected using two different sets of guidelines to reduce bias, and we can therefore investigate the downstream effects of the bias induced by the two sets of guidelines. To quantify the guideline bias, we compute the probability that a goal x1 is mentioned before another one x2 in a dialogue, given that x1 precedes x2 in the guidelines.
We only consider dialogues where all goals are mentioned at least once, i.e., ∼900 in total; the conversations are then divided into two groups, based on the guideline that was used. Figure 4 shows the heat map of these relative probabilities. The guidelines have a clear influence on the final structure of the conversation: if the movie title (x1) is mentioned before the city (x2) in the guideline, there is a high probability (0.75) that the same is true in the dialogues. If it is not, the probability is much lower (0.57).

Figure 4: Probability that a guideline goal x1 is mentioned before another one x2 in an actual dialogue, given that x1 comes before x2 in the agent's guideline.

Related Work
Plank et al. (2014) present an approach to correcting for adjudicator biases. Bender and Friedman (2018) raise the possibility of (demographic) bias in annotation guidelines, but do not provide a means for detecting such biases or show any existing datasets to be biased in this way. Amidei et al. (2018) also discuss the possibility, but in a footnote. Geva et al. (2019) investigate how crowdsourcing practices can introduce annotator biases in NLU datasets and therefore result in models overestimating confidence on samples from annotators that have contributed to both the training and test sets. Liu et al. (2018), on the other hand, discuss a case in which annotation guidelines are biased by being developed for a particular domain and not easily applicable to another. Cohn and Specia (2013) explore how models can learn from annotator bias in a somewhat opposite scenario from ours, e.g. when annotators deviate from annotation guidelines and inject their own bias into the data; by using multi-task learning to train annotator-specific models, they improve performance by leveraging annotation (dis)agreements. There are, to the best of our knowledge, relatively few examples of researchers identifying concrete guideline-related bias in benchmark datasets: Dickinson (2003) suggests that POS annotation in the English Penn Treebank is biased by the vagueness of the annotation guidelines in some respects. Friedrich et al. (2015) report a similar guideline-induced bias in the ACE datasets. Dandapat et al. (2009) discuss an interesting bias in a Bangla/Hindi POS-annotated corpus arising from a decision in the annotation guidelines to include two labels for when annotators were uncertain, without specifying in detail how these labels were to be used. Goldberg and Elhadad (2010) define structural bias for dependency parsing and show how it can be attributed to bias in individual datasets, among other factors, originating from their annotation schemes. Ibanez and Ohtani (2014) report a similar case, where ambiguity in how special categories were defined led to bias in a corpus of Spanish learner errors.

Discussion & Conclusion
In this work, we examined guideline bias in two newly presented WoZ-style dialogue corpora. We showed, through a controlled priming experiment, how a lexical bias for the word like in the annotation guidelines of CCPE-M leads to a bias for this word in the dialogues, and that models trained on this corpus are sensitive to the absence of this verb. We provided a new test dataset for this task, collected from Reddit, and showed how a debiased model performs better on this dataset, suggesting that the 13% drop is in part the result of guideline bias. We showed a similar bias in Taskmaster-1.
E-learning Acceptance Among Students: Evidence from UiTM Melaka City Campus Vol. 10(3) 2021, Pg. 885

Abstract
The COVID-19 pandemic is a global issue that has changed the education system from face-to-face to fully online teaching and learning. These changes affected the whole education system and its stakeholders, from students and lecturers to management and suppliers. Hence, to provide efficient and effective online learning, an investigation of the factors that affect the success of e-learning in Malaysia is deemed necessary. The first objective of this study is to validate the measurement items for each variable used in the study. The second objective is to examine the effect of instructor, accessibility, and university support on e-learning acceptance among students. This study utilized a cross-sectional design, with a questionnaire as the main source of information. The population of this study is students from the Faculty of Business and Management, Universiti Teknologi MARA (UiTM), Melaka Branch. From the population, 171 complete responses were obtained. To answer the research objectives, SPSS was utilized, and the analyses conducted included frequency analysis, exploratory factor analysis (EFA), reliability analysis and regression analysis. In the exploratory factor analysis, all items used in this study loaded above 0.40, which is considered acceptable. Next, based on the regression analysis, all variables used in this study were found to have a significant effect on e-learning acceptance. Hence, this shows that instructors, accessibility, and university support play an important role in the success of e-learning. It is also suggested that, to get a clearer picture of the success of e-learning, future research should increase the number of respondents across universities and states in Malaysia.

Introduction
COVID-19, a global public health emergency, was first identified as a novel coronavirus disease epidemic by the World Health Organization (WHO) in January 2020, and then as a pandemic in March 2020. Several schools and higher education institutions were forced to close because of the COVID-19 pandemic, and numerous schools, universities, and colleges discontinued face-to-face instruction. This has a detrimental impact on educational activities because social distancing is so important in this period. Educational agencies are trying to find alternative ways to manage this difficult circumstance (Dhawan, 2020). This closure boosted the rise of online educational activities, ensuring that education would not be disrupted. E-learning is defined as learning activities that take place on a variety of electronic devices, such as computers, laptops, cellphones, and other devices that have internet access. Online e-learning can be a platform that makes the process of education more flexible (Singh and Thurman, 2019). The education system is changing considerably nowadays, especially in higher education institutions. This change presents a major challenge to the whole university, including the management team, lecturers, and students. Internet technologies and mobile applications have transformed the education system from the traditional structure to modern methods of teaching (Sankar et al., 2020). Hence, it is important for the university to prepare good-quality instructors or lecturers who are not only competent in subject knowledge but are also able to engage with the technology and create interactive sessions with students. Based on Lopez-Catalam et.
al (2018), e-learning could bring more confidence, reduce stress, and enhance concern and empathy. Other than that, to develop a good environment on elearning, university support and accessibility toward technology also play an important role. Based on the research conducted by Mazalan et al (2021), the factors such as instructor, accessibility and university support are perceived as important for e-learning acceptance. As the e-learning system has become widely used among students. Hence, the main objective for this study is to explore the effect of instructor, accessibility, and university support toward students' acceptance on e-learning. Although there might be difficulties or barriers that can negatively impact students' use of e-learning, thus, it can be improved and overcome with the cooperation of various parties. During the COVID-19 epidemic, this study intends to quantify the effects of instructors, accessibility, and university support on e-learning among students. This study also revealed the most important elements impacting the acceptance of e-learning as a teaching instrument in higher education, which could aid future efforts targeted at using e-learning not just during the pandemic but also in non-pandemic scenarios throughout the teaching career. Literature Review There are many drivers being examined relating to e-learning acceptance among students. Based on a recent paper published by Pham and Tran (2020), there were six factors have been considered that could affect the e-book and e-learning acceptance among students. These factors include instructor/lecturer, computer competency of the student, content, and design of course, access ability, infrastructure, and university support. However, this study aims to examine the effect of instructor/lecturer, access ability and university support among students undertaking Strategic Management course at UiTM Melaka Branch. Instructor The COVID-19 pandemic has forced many organizations around the world to make full use of a variety of emerging online communication platform technologies (Al-Kumaim et al., 2021). All these changes required the universities to come out with a proper strategy to make sure the platform or technologies used are suitable with all parties involved such as student, lecturer, and management. According to Benta, Bologa, and Dzitac, (2014), a system designed that offer to students, teachers, and administrators must be efficient and effective to help them create an enhanced and customized learning climate. For example, Moodle is considered a web-based flexible learning environment that facilitates collaboration between users. Hence, as an instructor, they must be flexible in using the platform of online teaching and learning process. As a result, establishing a digital learning community among teachers is critical to provide support mechanisms for their professional development. (Izhar, Al-Dheleai, & Ishak, 2021). The creation of a team made up of people from diverse subjects looks to be able to assist teachers in identifying the areas that need to be considered when organizing technology-enabled lessons (Koh et al., 2017). Further, to make sure the teaching ang learning process at high quality, the integration of technology and support from the university and members are important. 
Through the platforms that subscribe by the university, teachers can upload and supply students with information and resources to which they would not have had access during face-to-face classes, and students can easily share information, state their difficulties, and receive feedback (Martín-Blas, & Serrano-Fernández, 2009). From that, universities should improve online assessment methods based on the time allocated to lecturers (Ilias et al., 2020). However, teachers' lack of familiarity with E-learning and the short time they had to adjust their teaching approach to the new circumstances (Coman et al., 2020). In this regard, the findings of a survey done by School Education Gateway at the start of the pandemic, which revealed that 66.9% of respondents said they had used online platforms for teaching for the first time, are instructive (School Education gateway, 2020). Hence, it is important for the university management to provide a comprehensive training for the lecturers to improves the skill and knowledge on online learning. Access Ability E-learning is the use of network technologies to create, foster, deliver, and facilitate learning, anytime and anywhere for empowering the individual learner so that the teacher/ trainer/tutor is no longer the gatekeeper of knowledge, while the role of teachers is likely viewed as facilitators of knowledge process (Oye, Salleh, & Iahad, 2010). Hence, to improve the quality of teaching, there is a need to ensure a suitable medium for online learning, such as Microsoft Team, Google Meet and Zoom, that is provided by universities and effectively used by both lecturers and students (Ilias et al., 2020). In terms of obstacles encountered because of the transition from face-to-face learning to a fully virtual learning environment, students reported a decrease in wellbeing, a loss of motivation, difficulty concentrating on their studies, and Internet connectivity issues (Azlan et al., 2020) E-Learning is one of the technical-based tuition and training platforms in telecommunication technology used to deliver information in education (Latip et al., 2020). Along with the progressive information and communication development, e-Learning is considered a paradigm in modern education. Furthermore, universities are among the organizations that have asked students, tutors, and lecturers to use several different online communication platforms to ensure the education process remains uninterrupted (Al-Kumaim et al., 2021). However, the COVID-19 pandemic has generated considerable challenges for the global higher education community while using such emerging technologies. Because these students are less rich and come from less tech-savvy households with limited financial resources, they may miss out on online classes (Sharin, 2021). They may be unable to participate due to the high costs of digital devices and online data plans. Inequality will widen because of this income disparity (Dhawan, 2020). University Support COVID-19 pandemic has changed the teaching and learning process for whole education system. The most challenges part in these changes is the accessibility. This is because, limitations in the form of technical issues, lack of Internet connection and insufficient data are common challenges related to online learning conducted outside the university campus (Ilias et al., 2020). 
A Learning Management System is seen as a software that operates and encompasses many services that are meant to aid teachers in managing their lectures and courses (Ouadoud, Nejjari, Chkouri, & El-Kadiri, 2017), and they were created to monitor and evaluate students, give grades, to monitor course attendance or additional administrative actions that can be demanded by educational institutions (Ninoriya, Chawan, & Meshram, 2011). Some of the challenges universities face, according to the Organization for Economic Cooperation and Development, include maintaining a balance between online courses, which may affect students' health due to their spending many hours in front of a screen, and nondigital activities, as well as analyzing and focusing on students' emotional health and providing them with support throughout the learning process (OECD 2020) Further, it is important for all parties to give more attention on several aspect of the effectiveness of online and distance learning. According to Huang, Liu, Tlili, Yang, and Wang (2020), monitoring and expanding internet infrastructure to avoid delays, particularly during videoconferences; employing user-friendly tools to assist students in assimilation and comprehension of knowledge; providing dependable, interactive, and diverse electronic resources providing services that help students and teachers learn about the latest policies adopted by universities and the government, and encouraging collaboration between these institutions; using social networks to build online communities for students in order to reduce feelings of isolation; using various effective techniques such as debates or learning based on discovery and experience; providing services that help students and teachers learn about the latest policies adopted by universities and the government, and encouraging collaboration between these institutions. Technical issues are still the issues most difficult to solve, due to the capacity of the servers owned by universities. Surely, universities have made efforts to solve these problems and improve the way the E-learning platforms work (Coman et al., 2020). Still, students' technical problems remain poor internet connections, signal loss, lack of adequate digital devices, especially for students living in rural areas or students from families with low incomes. Hence, it is important for the university management to come out with a comprehensive planning and strategies to support the successful of online learning. Other than that, universities could create programs to meet these types of needs and thus facilitate the learning process for students who find themselves in these situations. Oye, et al (2011) defined E-learning as a unifying term used to describe the fields of online learning, web-based training and technology delivered instructions. E-learning courses are specifically delivered via the internet to somewhere other than the classroom for enhancing or supporting learning (Elfaki, Abdulraheem, & Abdulrahim, 2019). Past investigations shown that anyplace and whenever learning and access to data and correspondence are encouraged through e-learning (Kyzy et al., 2018); Yakubu, & Dasuki, 2019). Online learning, or e-learning, provides a virtual learning environment that engages students in various activities involving a multitude of subjects through the audio-visual platform (Al-Rahmi et al., 2018). 
Online learning is used for delivering information, while the database system is used for managing and communicating content, interacting, or facilitating teaching and learning activities (Anshari et al., 2016). Online learning is essential for the teaching and learning process, alongside face-to-face and other traditional methods (Mokhtar et al., 2020). There are several reasons for adopting online learning, as mentioned in previous studies. Studies have noted that online learning operates over the Internet and there is no limit to the number of participants (Ghazali & Nordin, 2018). At the same time, it is difficult to ensure the reliability of the learning services provided (Schroeder et al., 2010). That is the reason it is vital to assess students' acceptance of the e-learning method (Latip et al., 2020). In addition, fear of technology, lack of technical skills, and lack of technical support for both students and lecturers may also cause some concerns (Ali & Magalhaes, 2008). Nevertheless, e-learning enables students to produce high-quality work and to be actively involved in alumni community activities. Educational institutions can benefit from it by gaining exposure and adding value to their programmes around the globe, besides responding to IR 4.0 (Schroeder et al., 2010).

Research methodology
This study used a quantitative approach, and a questionnaire was adopted to gather the information. The population of this study is students from Universiti Teknologi MARA, Melaka Branch, all of whom experienced e-learning while undertaking the Strategic Management course between September 2020 and February 2021. The required sample size for this study is 109, as suggested by G*Power. However, the number of respondents who answered the questionnaire is 171. To answer the research objectives of this study, SPSS was used for data analysis, covering demographic analysis, exploratory factor analysis, reliability analysis and regression analysis. Online data collection was used to gather information from the respondents; an online platform was chosen because of the COVID-19 pandemic. The measurement items of this study were adapted from Pham and Tran (2020), and each item used a 5-point Likert scale. Table 1 presents the demographic analysis. Based on the analysis, most of the respondents are between 23 and 25 years old, covering 77.8% or 133 respondents, followed by 20 to 22 years old and 25 to 26 years old. In terms of gender, the majority are female, with 152 respondents (88.9%), followed by male respondents with 11.1%. Next is the program of study in which respondents are registered. Based on the findings, the highest number of respondents are from the Human Resource Management program with 36.8%, followed by Finance (28.1%), International Business (19.9%) and Office System Management (15.2%). All the respondents in this study are semester-three students.

Exploratory Factor Analysis
Factor analysis is a technique for revealing correlations between variables and condensing inter-correlated variables into a few factors. The Kaiser-Meyer-Olkin (KMO) index is used to assess the suitability of factor analysis, and a value of more than 0.60 is adequate (Pallant, 2001). Stevens (1992) and Field (2000) recommend interpreting only factor loadings with an absolute value greater than 0.4, which explain around 16% of the variance. Based on the results, all the factor loadings for the items are above 0.4.
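The analyses above were run in SPSS. As a rough, non-authoritative illustration of the reliability and regression analyses reported in the next sections, Cronbach's alpha and an ordinary least squares fit can be reproduced from a raw item-response matrix with a few lines of Python; the data, scale sizes, and column names below are hypothetical, not the study's actual dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Hypothetical 5-point Likert responses: 171 students, 4 items for one scale.
items = pd.DataFrame(rng.integers(1, 6, size=(171, 4)), columns=["INS1", "INS2", "INS3", "INS4"])

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the scale total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(items):.3f}")  # for reference, the study reports values of 0.88-0.90

# OLS of e-learning acceptance on the three scale scores (hypothetical values).
X = pd.DataFrame(rng.normal(size=(171, 3)), columns=["instructor", "accessibility", "support"])
y = rng.normal(size=171)
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.rsquared, model.params)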
Reliability Analysis
Based on the Cronbach's alpha analysis, all variables are considered reliable because they achieved an alpha value of more than 0.7. As shown in Table 3, instructor has the highest Cronbach's alpha value at 0.900, followed by e-learning as the dependent variable with an alpha value of 0.885, then university support with an alpha value of 0.885, and access ability with 0.882. The items used to measure each variable were reliable, and none of the items were deleted at this stage.

Regression Analysis
The result in Table 5 indicates that R = 0.783, R² = 0.613, adjusted R² = 0.606, F = 33.330, p = 0.000. The multiple correlation coefficient between the independent variables, which comprise instructor, access and university support, and e-learning is 0.783, indicating that the independent factors considered in the regression model are highly positively correlated with the dependent variable. The three independent factors account for 61.3% of the variance in e-learning acceptance, reflecting the convergent validity of the independent factors on e-learning. Hence, 38.7% of the variation in e-learning is due to other factors not investigated in this study. The adjusted R² of 0.606 indicates that the result of this study is generalizable to other populations. Given that the adjusted R² is close to the R² value, no overfitting of the model to the sample occurred (Hair et al., 2006). Clearly, the regression model fits the data well: the R² value drops by only 0.007 in the adjusted R², which signifies acceptable cross-validity of this model. The F-test of 88.330 at p = 0.000 indicates a significant association between the variables. In viewing the B (beta) coefficients, the positive sign on all the independent factors indicates a positive relationship between the independent and dependent variables. Table 5 presents the regression analysis. Firstly, the effect of university support on e-learning shows β = 0.411, t = 4.078, p = 0.000, the highest standardized beta coefficient; this means that university support is the most important factor in predicting e-learning. Secondly, the effect of instructor on e-learning among students shows β = 0.248, t = 4.858, p = 0.000, indicating that instructor significantly affects e-learning acceptance among the students. Lastly, the effect of accessibility on e-learning acceptance among students shows β = 0.237, t = 3.938, p = 0.000. Hence, the analysis shows that accessibility has a significant effect on e-learning acceptance among students. This implies that better e-learning can be achieved or enhanced when students have good university support, instructors, and accessibility.

Discussions
The global landscape is becoming more muddled and uncertain, leading to slow economic growth due to COVID-19; the pandemic has forced the global physical closure of businesses, sports activities and schools, pushing all institutions to migrate to online platforms. Based on the results above, three independent variables were tested on e-learning among students in Melaka. University support is the most important factor in predicting e-learning, while instructor and access ability contribute less. Firstly, university support includes the university library, digital and online materials, digital resources, training, academic tools, and others (Rapanta et al., 2020).
This means digital and online materials are very important to ensure the successful of e-learning implementation. Students may retrieve information from digital resources which provided by the university library and others. The findings of this study revealed that, the university must ensure that the support in the form of online materials, digital resources, digital platform for online teaching and training tools are important to ensure the effectiveness of online learning. Secondly, it has been reported that teaching quality of lecturers has a significance influence on student satisfaction (Martono et al., 2020). Thus, during online learning, instructor plays important role in predicting e-learning interaction among students. The guidance and encouragement of the instructors can motivate students to use e-learning, by these means, they will indirectly embrace the system. Besides, engaging presentation, systematic teaching style, and friendly interaction are among the factors that encourage students to accept and encourage them to continue to use the e-learning system. This aligned with the study explored by Taat and Francis (2020) which found instructors have positive relationship towards e-learning. Thus, instructor is expected to display good and decent spirit while delivering online teaching. The positive behaviors embedded through the online learning process would influence the motivation experience by the students. Lastly, this study also reveals that accessibility has a positive relationship towards e-learning. Universities, for example, have requested students, tutors, and lecturers to use a variety of online communication channels to guarantee that the educational process runs well (Al-Kumaim et al., 2021). The implementation of e-learning is about the accessibility and availability of internet connection and devices. Access to computers and internet connections, as well as preparatory training, are critical for teachers to begin integrating technology into their teaching and learning processes (Cheok et al., 2017). Conclusions Previous study has indicated that students' satisfaction in higher education is impacted by variables such as learning facilities, teaching quality and administration . The current COVID-19 crisis has obliged most education systems to adopt alternatives to faceto-face teaching and learning. Many education systems moved activities online, to allow instruction to continue despite school closures. The e-learning approach is very important as means of diversifying teaching and learning methods among students. This study found that the level of e-learning acceptance among students was modest and influenced by factors such as university support, instructor, and access ability. Approximately, 61% of the variation in the e-learning is being influenced by the combination of independent variables represented in this study. The usefulness and convenience of students are influenced by the quality of the system provided by the university and the information provided by the lecturers. However, technical support should be taken into consideration by the university because of external issues such as sluggish internet access, low signal, sign-in problems, less user-friendly interface in digital materials, and less attractive e-learning websites that can cause students not to use the system. Other services such as internet and Wi-Fi should be improved, as the internet is at the heart of e-learning use and acceptance. 
Furthermore, this study encourages lecturers to use e-learning to help enhance their teaching. It also adds to the body of research in the field of e-learning and provides a reference for other researchers conducting further studies. A recommendation for future research is to include additional variables and to involve lecturers as respondents for more comprehensive and robust information and views. Future research should also investigate potential moderating variables that might influence the direct relationships between the variables in this study. The leadership role of lecturers, student interaction and engagement, and technological literacy might be additional factors that could influence e-learning acceptance.
Managing the Intersection of Medical and Pharmacy Benefits

BACKGROUND: Because of the unique features of specialty pharmaceuticals and insurance plans, specialty pharmaceutical products can be paid through the pharmacy benefit or the medical benefit. While most pharmacists are very comfortable with the conventions of reimbursement in the pharmacy benefit, they are less familiar with the processes for payment in the medical benefit. OBJECTIVES: To review the 2 parallel processes for payment of specialty pharmaceuticals, the pharmacy benefit and the medical benefit, and to compare and contrast these 2 processes. SUMMARY: The medical benefit and pharmacy benefit processes for payment of specialty pharmaceuticals use different claim forms, product coding systems, pricing conventions, and contracts. Even though the services delivered can be identical, the financial aspects of paying for these services are quite different. CONCLUSIONS: Pharmacists who are interested in entering the specialty pharmacy arena, either as a provider or manager of providers, need to understand the payment processes for specialty pharmaceuticals through both the pharmacy and medical benefits.

Specialty pharmaceuticals may be viewed from a number of different perspectives: the patients who are prescribed specialty pharmaceuticals, the organizations regulating specialty pharmaceuticals (the U.S. Food and Drug Administration [FDA]), the pharmaceutical companies developing specialty pharmaceuticals, and the payers who pay for specialty pharmaceuticals. This review will focus on describing the payer perspective in some detail. In particular, payers have specific and unique processes of payment for specialty pharmaceuticals through the pharmacy benefit and the medical benefit. While descriptions of pharmaceutical payment methods are relatively uncommon in pharmacy literature, examples of such publications exist. 1 Literature comparing the expenditures of specific classes of specialty pharmaceuticals, such as multiple sclerosis medications, has also been previously reviewed. 2 After describing and comparing these 2 pathways, we will discuss some of the novel tactics that payers are using to manage specialty pharmaceuticals. Literature addressing the high-level differences between the claims reimbursement processes for specialty pharmaceutical services in the pharmacy benefit versus the medical benefit is less common. The operational standards of these 2 approaches have been documented by managed care organizations, but the processes for submitting a claim in the medical benefit are less thoroughly understood by pharmacy professionals. It is challenging enough to operate a successful pharmacy submitting claims solely through the pharmacy benefit. However, when pharmacists decide to distribute specialty pharmaceuticals, they need to appreciate that the option of submitting claims through the medical benefit is available and is a tactic being used by their competitors.

■■ Pharmacy Benefits/Medical Benefits
During the last 80 years in the United States, health insurance has been an evolving and typically expanding business. 3 Foundational programs in the insurance industry, such as Blue Cross Blue Shield, and employer prepaid programs, such as Kaiser Permanente, started by focusing on providing a predetermined number of hospital days to employees in return for a regular payroll deduction.
4 Growing from these origins as prepaid hospitalization plans, prepayment of outpatient services, such as physicians' services, were added in the decades that followed. In the 1970s and 1980s, coverage for pharmacy services was added to health plan offerings as the scope of health plan offerings continued to grow. Pharmaceuticals are a relative newcomer to the covered benefits of health plans and occupy a rather unique position. A quick review of the benefits of 1 of the largest health insurance programs in the country, the Federal Employees Health Benefit Plan, identified the following benefit categories: 5 • Services provided by a hospital or other facility and ambulance services • Medical services and supplies provided by physicians and other health care professionals • Surgical and anesthesia services provided by physicians and other health care professionals • Emergency services/accidents • Mental health and substance abuse benefits • Prescription drug benefits • Dental benefits Hospitals began interacting with health plans and government health programs in the mid-20th century. As a result, their contracts today can include 50 or more years of history. Pharmacies, on the other hand, have a different type of contract than hospitals. For instance, patients have different copayment or coinsurance obligations at pharmacies compared with hospital care. While it may not be intuitive, the multiple parallel benefits based primarily on site of service or provider type leads to the situation such that a dose of hemophilia factor, for example, may be paid for in completely different ways. Payment differences can exist based on whether the treatment is used as part of an inpatient hospital stay, an outpatient hospital service, as part of a service received at a physician's office, or a service received from a pharmacy. Traditionally, pharmacists have focused on managing successful pharmacies using the pharmacy benefit as the only source of reimbursement. Because pharmacies are able to contract with health plans as medical benefit providers (in a pharmacy capacity, in a supplier capacity, or in a durable medical equipment provider capacity), the opportunity exists for pharmacies to have simultaneous contracts in the pharmacy and the medical benefit. This tactic is usually not important in the retail pharmacy setting, but can be very significant for a specialty pharmacy. Specialty pharmacies have commonly pursued both payment processes as a business practice; with the area of specialty pharmacy growing at a rapid rate, the importance of this issue is increasing. Magnitude of the Specialty Pharmacy Issue for Health Plans How health plans manage pharmaceuticals and specialty pharmaceuticals, in particular, has been a rapidly changing issue. This issue has evolved as pharmaceuticals have become a large and rapidly growing portion of the total health care expenditures; specialty pharmaceuticals has become the most rapidly growing portion of pharmacy expenditures. 5 One of the specific challenges in calculating the exact expenditure on specialty pharmaceuticals is that their expenditures are split between the pharmacy benefit and the medical benefit. When only the specialty pharmaceuticals that are adjudicated through the pharmacy benefit are accounted for, these products make up 9.2% of the pharmacy benefit drug expenditure. 5 When total U.S. pharmaceutical sales data are used to calculate sales of specialty pharmaceuticals, these products comprise about 20% of total pharmaceutical expenditures. 
6 This difference exists because half of the specialty pharmaceuticals in the United States are not paid for in the pharmacy benefit, but rather are reimbursed through the outpatient or inpatient medical benefit. It is important to note that total health care expenditures in the United States for 2005 were approximately $1.988 trillion dollars, with 32% of the costs attributable to hospital care, 21% to physician/clinical services, and 10% to sales of retail prescription drugs. 6 Within the pharmaceutical category, the specialty pharmaceutical category is growing even more rapidly than the pharmaceutical category as a whole. When compared with 2005 expenses, the expenses for specialty pharmaceuticals in 2006 were up 20.9%; this compares with a 5.9% trend for nonspecialty pharmaceuticals. 6 Specialty pharmaceuticals are getting attention as their own class of drugs because they comprise about 2% of total health care expenditures, and their costs are growing at 4 times the rate of non-specialty pharmaceuticals. There are many ways to define specialty pharmaceuticals. Some use the term "biotechnology products," which typically refers to peptide products developed with recombinant technology. While that was a very appropriate definition of specialty pharmaceuticals in the 1990s, there are several non-peptide injectable products that fall into the category of specialty pharmaceuticals (e.g., treprostinil sodium), and there are now oral specialty pharmaceuticals for rare diseases (e.g., bosentan and imatinib). Still other specialty pharmaceuticals require special handling because of perishability (e.g., biologics, vaccines, etc.) or radioactivity (e.g., tositumomab and ibritumomab tiuxetan). Therefore, the criteria of what constitutes a specialty pharmaceutical is continuing to evolve and expand in scope; all estimates are that this trend will continue drawing added health plan focus and resources. 5,6 The details of how the underlying benefit structures have evolved over the past few decades is critical to understand. Parallel (Not Overlapping) Benefits Health plans' obligations to their members are structured as benefit language. Pharmacists typically understand the language of the pharmacy benefit. Fewer pharmacists understand how the current state of most benefit language results in a situation of overlapping pharmacy and medical benefits for specialty pharmaceuticals. Each chapter of an insurance contract lists categories of health care services that are covered by the health plan, and how each of those categories will be paid. Each of these sections describes a specific benefit; each benefit is defined, for the most part, by the location of where the service is delivered (e.g., hospital inpatient) or the provider type (e.g., physician). Pharmacies have a place in this hierarchy in 2 ways. First and foremost, they are providers in the pharmacy benefit, usually defined as the retail or mail-order setting provision of pharmacy services involving a drug card. Second, the pharmacy is an acceptable provider type under the medical benefit. In a sense, the pharmacy as a provider under the medical benefit is similar to a durable medical equipment provider or supplier that ships medical equipment to patients, typically at their homes. Therefore, there are 2 possible benefits and 2 possible contracts that a pharmacy could choose to pursue. 
This really is not a situation of blurring of the benefits, but rather that there are 2 parallel benefits, either of which can make business sense for the pharmacy. To enter the specialty pharmacy space effectively, a pharmacist is going to have to understand both processes and both sides of this issue. These parallel benefits require additional analysis. As seen in Table 1, there are 2 parallel processes for the 2 different benefits that require the pharmacy to have contracts with different entities. The pharmacy benefit contracts are with the pharmacy benefits managers (PBMs), while the medical benefit contracts are directly with the health plans. If we use the metaphor of this process as being bilingual, this is the step at which it becomes bilingual. PBMs and health plans have industry standard terms and processes, and these terms may not have the same definition in the 2 different worlds. The health plan may have different certification requirements or other requirements than a PBM and vice versa. For instance, a PBM may have 1 definition of a pharmacy, and the health plan may have a different definition. Also, the claims payment systems are different for the pharmacy and medical benefits; the pharmacy benefit uses the National Council for Prescription Drug Programs (NCPDP) claim standard, while the medical benefit uses the Centers for Medicare & Medicaid Services (CMS) 1500 claim form, formerly known as a Health Care Finance Administration (HCFA) 1500. Another difference is the coding language: National Drug Codes (NDC) is used on the pharmacy benefit and Healthcare Common Procedure Coding System (HCPCS) is used on the medical benefit. The example of the home blood glucose meter can clarify the magnitude of the difference in philosophy of these 2 systems. There are hundreds of varieties of home blood glucose meters with each brand and model having a unique NDC code and price in the pharmacy claims payment system. In the medical claim system, these meters are represented by just 1 code (E0607), and typically they are reimbursed a single price. Also, standard pricing files used in the 2 benefits can be different. In the pharmacy benefit, it is common for PBMs to receive weekly or monthly updates of national standard pricing files that populate the claims payment engine. Adjusting prices in the medical benefit is a much less dynamic process with prices only updated annually in many cases. There are many other differences between these claims payment processes, including locations for submission, time to payment, etc. A final difference is in the process of being harmonized: the provider ID numbers are used by the 2 systems. The provider numbers have been different between the 2 systems for decades, with pharmacy systems using National Association of Board of Pharmacy (NABP) numbers, and the medical systems typically using either a proprietary system or Tax Identification Number (TIN). As part of the Health Insurance Portability and Accountability Act (HIPAA), medical payers have had to migrate to the National Provider Identification (NPI) number, and pharmacy claims payment systems are currently finishing their migration to this system. Many pharmacies find it daunting enough to manage the activities of the pharmacy benefit world without extending into the medical benefit world with its very different processes. 
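To make the difference in coding granularity concrete, the toy mapping below shows how many distinct NDC-coded products can collapse onto a single HCPCS code on the medical benefit; E0607 is the HCPCS code for a home blood glucose monitor mentioned above, while the NDC values and prices are placeholders, not real product codes or fees.

# Toy illustration of coding granularity: many NDCs (pharmacy benefit) map to one
# HCPCS code (medical benefit). NDC values and prices below are placeholders only.
ndc_to_hcpcs = {
    "00000-0001-01": "E0607",  # brand A glucose meter (hypothetical NDC)
    "00000-0002-01": "E0607",  # brand B glucose meter (hypothetical NDC)
    "00000-0003-01": "E0607",  # brand C glucose meter (hypothetical NDC)
}

# On the pharmacy benefit, each NDC can carry its own price; on the medical
# benefit, a single fee is typically attached to the HCPCS code.
ndc_price = {"00000-0001-01": 19.99, "00000-0002-01": 24.50, "00000-0003-01": 31.00}
hcpcs_fee = {"E0607": 22.00}

for ndc, hcpcs in ndc_to_hcpcs.items():
    print(f"NDC {ndc}: pharmacy-benefit price {ndc_price[ndc]:.2f}, "
          f"medical-benefit fee {hcpcs_fee[hcpcs]:.2f} (code {hcpcs})")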
The Evolution of Specialty Pharmacy: For Better and For Worse With the proliferation of specialty pharmaceuticals in the market and the number of differences between the ways that they can be paid on the different benefits, inefficiencies in payment practices have crept into the system. For example, recombinant erythropoietin can be administered in a myriad of ways: as part of a hospitalization, as part of a stay in the hospital outpatient chemotherapy infusion clinic, as part of a doctor office visit for chemotherapy infusion, in the setting of a dialysis facility, or self-administered by a patient who receives the product on a pharmacy benefit. The payment rules for each of these categories are governed by different contracts and different benefit-driven payment terms. Each category of provider typically has its own contract type; it is not uncommon for 1 provider entity to have multiple contracts with a single health plan, each with a unique identity, provider type, and terms. The advent of the NPI system should make it easier for health plans to identify all the contractual relationships with a single provider. In addition, there are different sets of rules that apply to the different types of providers. Providers can set up an environment of varied contracts because it gives them multiple claims systems upon which to submit claims and to determine payment. One of the most common combinations is for a provider to be a pharmacist at the same time that they are a medical supplier. Health plans have created an interesting situation as they have tried to move services out of the hospital and into the home-care setting over the last 2 decades. In many cases, plans have created lower copayments to patients when services were provided in the outpatient rather than the inpatient setting. 7 Then, to create an incentive for moving the hospital outpatient services to the homecare setting, some plans have created incentives for patients to receive care in their homes. A home-care service may include the shipping of a specialty pharmaceutical (e.g., growth hormone) to someone at home. The patient copayment may be much lower for this home-care service, but as a general trend in the last 10 years, health plans have been increasing the copayments for pharmaceuticals as a cost-management technique. 7 In this scenario, patients may now have a financial disincentive against receiving these specialty pharmaceuticals from the pharmacy benefit. This dynamic can lead to some challenges for pharmacists within health plans. Some specialty pharmacy providers know exactly what they get paid in the medical and pharmacy benefit. When a pharmacist within a health plan starts bringing the issue of these differences in payment amounts to the staff that manages the medical benefit, it can be a very challenging conversation. As explained previously, the 2 groups are managing different processes, holding different contracts, and paying at different rates. It takes a lot of analysis and mutual learning before a solution comes into focus. The following specific examples of tactics that health plans can use to address this arbitrage (the practice of paying 2 different amounts for the same item) are presented to better understand this phenomenon. 
■■ Health Plan Tactics Regarding Specialty Pharmacy Despite the constraints of these core process differences between the medical and pharmacy claims payment systems in processing specialty pharmacy payments, health plans are making some significant inroads into managing specialty pharmacy services more effectively. Tactic 1: Contain a Pharmacy Benefits Management Company. It is relatively common today for health plans to operate a full-service PBM as a free-standing, internal company. This gives the health plan a greater number of pharmacists within the organization to build and manage a network and execute the clinical programs of a full-service PBM. It also gives greater transparency to these arbitrage issues between the claims payment systems. Many of the largest health plans in the country, WellPoint, Aetna, CIGNA, and Humana, have an internal division that serves as their PBM. This gives the organization a critical mass when dealing with issues such as these. This type of organization can support an internal Pharmacy and Therapeutics Committee process and can align itself with the clinical programs and contracting activities on both the medical and pharmacy benefit. Tactic 2: Build a Specific Specialty Pharmacy Benefit. The current health plan benefit language has been built upon several decades of history but fails to adequately address the issues of specialty pharmacy. Some health plans have created a new benefit type to address specialty pharmacy services that focuses less on site of service and more on the products being distributed. Once the services in the specialty pharmacy benefit are defined, the contracting for these services can be implemented, typically with a narrowing of the network. One of the definitions that has been used in this approach in the past is a "self-injectable pharmaceutical" benefit. By decreasing the number of sources from which patients can obtain these products, health plans are addressing the pricing arbitrage between the many possible procurement pathways. Tactic 3: Contain a Specialty Pharmacy. Carrying the "build or buy" dilemma to its logical conclusion, several of the larger health plans have taken the step of building specialty pharmacies in house. WellPoint has PrecisionRx Specialty Solutions, Aetna operates Aetna Specialty Management, and Cigna operates Tel-Drug Specialty Pharmacy. A consortium of Blue Cross Blue Shield plans own Prime Therapeutics as a joint venture, and this company in turn owns and manages a specialty pharmacy. The number of lives that the health plans have needed before pursuing this kind of strategy is typically ≥ 9 million members: WellPoint has approximately 35 million members; Aetna, 12 million; CIGNA, 9 million; and the partners of the Prime Therapeutics joint venture, collectively approximately 16 million. ■■ Conclusions Managing the specialty pharmacy benefits successfully is a complex undertaking. Managing specialty pharmacy in the fullest sense of the term requires a familiarity with both the medical and pharmacy benefits, as well as contracts in both benefits. A number of historical factors, such as health plan benefits, provider types, contracts that are specific to provider type, different coding systems, and different pricing files, all contribute to the difference in payments for the same services (i.e., arbitrage) between the medical and pharmacy benefits. 
There are several tactics that health plans are currently implementing to manage specialty pharmacy service more effectively, from building a PBM in-house to creating a unique specialty pharmacy benefits to building a specialty pharmacy in-house. These tactics are designed to bring about greater transparency and scrutiny of specialty pharmacy services and, directly or indirectly, to have these services provided by the most cost-efficient providers. Specialty pharmacy services are being provided by ever-tighter networks, and often a business unit of the health plan and/or a business unit of the health plan's PBM are competitors to the independent specialty pharmacy. The opportunities for contracting with health plans and PBMs may become more complex in the future. Also, with the growth and consolidation of specialty pharmacies, these organizations themselves are now large companies that are typically a division within an even larger company. While the current size and rapid growth-rate of the specialty pharmacy sector suggests that there is tremendous opportunity for pharmacies to enter this space, the tactics undertaken by health plans to manage this area more closely suggest that the opportunity is, in fact, smaller than it appears. For those interested in entering into this space or interested in managing specialty pharmacy services more effectively from the health plan standpoint, there are many operational issues that need to be in alignment. Among those many operational issues, whether or not a pharmacy contracts on the medical benefit is actually a rather small issue. However, failing to consider whether a pharmacy contracts to a specialty pharmacy or to a health plan ignores a substantial amount of the specialty pharmacy opportunity and revenue that make up this dynamic and rapidly growing marketplace.
It's a Bird, It's a Plane, It's a Vein: Jugular Vein Phlebectasia in a Pediatric Patient With Tracheomalacia

Jugular vein phlebectasia is seen in the first decade of life and carries a high chance of misdiagnosis, as it can often be mistaken for other conditions observed in pediatric populations. High clinical suspicion along with radiological studies can help to confirm the diagnosis. Treatment is usually conservative, with surgery reserved for unique circumstances. This is the first reported case of concomitant tracheomalacia and a history of tracheoesophageal fistula repair in a pediatric patient with external jugular vein phlebectasia.

Introduction
Jugular vein phlebectasia (JVP), also known as venous cyst, venous aneurysm, or venous dilatation, is a non-tortuous dilatation of the jugular vein [1][2][3]. Generally, it is a rare condition, with only 247 cases, including both adult and pediatric patients, identified in the most recent systematic review [3]. However, when JVP presents, it is seen in the first decade of life and is typically asymptomatic [1]. Differential diagnoses include other conditions observed in pediatric populations, such as laryngocele, cystic hygroma, and branchial cyst, for which JVP can often be mistaken [2]. First reported in 1928, much of the literature today still revolves around case reports and sparse literature reviews. This case report aims to highlight a unique presentation of JVP and its diagnosis and management, so that clinicians can be better prepared when dealing with this entity and aware of possible complications.

Case Presentation
A four-year-old male with a past medical history of tracheoesophageal fistula type-C repair on the second day of life, tracheomalacia, mild persistent asthma, small patent ductus arteriosus, and esophageal stenosis and pouch status-post rigid/flex esophagoscopy with steroid injection and balloon dilation presented to the pediatric emergency department (ED) with two months of constant cough, intermittent fever episodes with a maximum temperature (Tmax) of 103°F every two to three days, and one week of decreased appetite and lethargy. The patient's most recent fever was one day prior to presentation, with a Tmax of 100.4°F treated with acetaminophen at home. Of note, the patient had been treated four weeks earlier for similar symptoms with a seven-day course of clindamycin for suspected pneumonia. Initial laboratory results were significant for an elevated white blood cell (WBC) count of 27.98 × 10³/µL with a right shift, with an absolute neutrophil count of 20.63 × 10³/µL. Radiological imaging included a chest X-ray, which showed right lower lobe consolidation, and the patient was subsequently admitted for treatment of community-acquired pneumonia. On physical examination, the patient seemed mildly dehydrated with a barking cough but otherwise had no use of accessory muscles of respiration. As seen in Figure 1A, the initial neck examination did not reveal any abnormalities. However, upon coughing, a right-sided soft, compressible, ballotable, seemingly air-filled pocket was present. The mass appeared below the right sternocleidomastoid muscle and along the lateral aspect of the neck (Figure 1B). This was reproducible with coughing and Valsalva maneuvers such as straining. There were no associated pulsations or bruits, and the mass did not transilluminate. According to the patient's parent, this mass was not present at any point prior to this current admission.
FIGURE 1: Right lateral view of the patient without (A) and with the Valsalva maneuver (B) demonstrating swelling in the jugular vein site (arrow). The otolaryngology (ENT) team was consulted and recommended a computed tomography (CT) scan of the neck with contrast. The CT scan showed enlarged adenoids, internal jugular lymphadenopathy, and retropharyngeal lymphadenopathy without any mass effects (Figure 2A-C). Additionally, the ENT team recommended the diagnosis of the external JVP. Due to the low risk of complications and absence of symptoms, no acute intervention or further work-up for this was recommended, and an outpatient follow-up was scheduled. He was treated with ampicillin-sulbactam and azithromycin while admitted. Upon discharge, the patient had a significantly improved cough and, subsequently, a less evident neck mass. No pathogens were isolated, and he was discharged with azithromycin. At a follow-up appointment four weeks later, no further neck swelling was noted, indicating resolution. Discussion JVP is usually of unknown etiology, presenting as a compressible, soft mass in the neck during coughing, crying, and sneezing, which is reproducible via the Valsalva maneuver [1][2][3][4]. The lack of tortuosity and nonsaccular and non-segmental nature differentiates it from varices and aneurysms, respectively [2]. It can arise in almost any cervicofacial vein but presents most commonly in the internal jugular, external jugular, anterior jugular, and superficial communicus in decreasing order of frequency [5]. There is a greater occurrence on the right side when compared to the left side or bilateral involvement. In a systematic review by Figueroa-Sanchez et al., left-sided predominance was only seen in 44 out of 247 total cases reported [3]. A proposed mechanism for this is that the right-sided internal jugular bulb is larger than the left side. Additionally, the right brachiocephalic vein is shorter and in direct continuity with the superior vena cava, unlike the left brachiocephalic vein [6]. Another hypothesis for right-sided preference is that the right innominate vein lies in contact with the apical pleura, allowing any increase in intrathoracic pressure to be communicated directly to the right internal jugular vein [7]. While the etiology is still unknown and debated, a possible hypothesis includes the loss of the normal connective tissue of the vein wall [8]. This can be from primary processes, such as congenital causes, or secondary to age-related degeneration of connective tissue in adults. Other causes may include mechanical compression, neck trauma, or gross anatomic abnormalities [9]. In the literature, phlebectasia of the external jugular vein has been reported very rarely and even less commonly in pediatric populations. In our case, the patient had a recent history of chronic cough, secondary to an infectious process. This could potentially have contributed to the discovery of right-sided phlebectasia. The constant stress from repeated bouts of coughing led to increased intrathoracic and intra-abdominal pressure, resulting in transient retrograde venous flow [10]. In fact, during coughing, the intrathoracic pressure increases from 100-250 mmHg, depending on the severity, to as high as 300 mmHg in adults [11]. Additionally, this patient also had a medical history of tracheomalacia, which is characterized by narrowing and increased collapsibility of the trachea, particularly during expiration [12]. 
It also presents commonly with other congenital defects, such as tracheoesophageal fistula, which was also seen in this case. In tracheomalacia, the weakening of the tracheal wall accelerates the airway narrowing that is seen during normal inspiration, leading to further increased intrathoracic pressure [12]. Phlebectasia of the external jugular vein may therefore be more likely to arise in these situations. The diagnosis of phlebectasia is made with an imaging study such as ultrasonography or color Doppler flow imaging because these modalities are safe, non-invasive, and reproducible [5]. Doppler flow imaging can be used to show flow turbulence both with and without the Valsalva maneuver. It can also help to rule out other differential diagnoses of neck masses, such as laryngocele, mediastinal tumors, and branchial cysts, while remaining safe and low-cost [2,5]. In a recent systematic review, ultrasound was employed as the primary imaging study in 72% of reported cases [3]. CT scan is also sufficient for identifying the initial pathology, despite being utilized much less frequently [3,13]. However, Doppler flow imaging is preferred to visualize turbulence and luminal filling with and without Valsalva. Most upper extremity and cervical venous aneurysms pose no significant risk to health or adverse outcomes [14]. This is in contrast to deep venous aneurysms, which are more commonly seen in the adult population and are more likely to contribute to thromboembolic events [14]. Treatment is usually conservative, which has been the approach in the majority of cases [3]. In fact, among the 102 reported pediatric cases, no complications from conservative management have been found [3]. This makes it a safe choice for the majority of patients. However, surgical management can be considered in certain rare cases if there is an additional risk of complications. Surgical management is reserved for cosmetic reasons or risk of thrombus formation, pulmonary thromboembolism, or aneurysm rupture [14][15][16]. In our case, the patient was managed conservatively with treatment for pneumonia, and the phlebectasia did not require surgical intervention. Parental education was also provided regarding the pathology and the possible complications that would have warranted a surgical intervention, as discussed before. This case is unique from a clinical standpoint because the aneurysm was incidentally discovered at the time of presentation for pneumonia. The patient additionally had a history of tracheomalacia and past otolaryngological surgery for tracheoesophageal fistula repair. Past instrumentation and trauma to the area, along with excess collapsibility, might have created a situation that, when exacerbated by persistent coughing, could have led to the formation of this venous aneurysm. While many mechanisms have been proposed by various case reports, no definitive association has been drawn yet [3]. In clinical practice, the recognition of these factors is important to rule out other conditions, since jugular venous phlebectasia is generally managed conservatively due to the low risk of complications [1,3]. Conclusions JVP should be considered in pediatric populations presenting with Valsalva-dependent neck swelling. In patients with pre-existing congenital or anatomical defects of the neck, phlebectasia may be more likely to arise when exacerbated by processes that increase intrathoracic pressure. Early recognition can prevent the use of unnecessary diagnostic imaging and increase patient comfort.
The condition is usually self-limited and resolves spontaneously, with follow-up recommended for monitoring of changes or complications. Additional studies need to be performed to determine the impact of aneurysm size on spontaneous regression.
2023-08-09T08:35:50.403Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "2fa773480facb827738879f9e65aeb97428ecf7a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "2fa773480facb827738879f9e65aeb97428ecf7a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
246033933
pes2o/s2orc
v3-fos-license
Enhancing protein inter-residue real distance prediction by scrutinising deep learning models Protein structure prediction (PSP) has achieved significant progress lately via prediction of inter-residue distances using deep learning models and exploitation of the predictions during conformational search. In this context, prediction of large inter-residue distances and also prediction of distances between residues separated largely in the protein sequence remain challenging. To deal with these challenges, state-of-the-art inter-residue distance prediction algorithms have used large sets of coevolutionary and non-coevolutionary features. In this paper, we argue that the more types of features are used, the more kinds of noise are introduced, and the deep learning model then has to overcome this noise to improve the accuracy of the predictions. Also, multiple features capturing similar underlying characteristics might not necessarily have a significantly better cumulative effect. So we scrutinise the feature space to reduce the types of features to be used, but at the same time, we strive to improve the prediction accuracy. Consequently, for inter-residue real distance prediction, in this paper, we propose a deep learning model named scrutinised distance predictor (SDP), which uses only 2 coevolutionary and 3 non-coevolutionary features. On several sets of benchmark proteins, our proposed SDP method improves mean Local Distance Difference Test (lDDT) scores by at least 10% over existing state-of-the-art methods. The SDP program along with its data is available from the website https://gitlab.com/mahnewton/sdp. Protein structure prediction (PSP) is recognised as one of the long-standing unsolved problems in bio-informatics, biophysics, and structural biology 1. A protein's function depends on its three dimensional native structure that has the minimum free energy. PSP is thus a crucial step in developing life-saving medicines, in designing novel enzymes, and in therapeutic science. Prediction of the native structure of a protein directly from its amino acid sequence is a complex procedure since the conformational search space is astronomical and the energy function is by and large unknown 2. Energy functions such as CHARMM 3 and AMBER 4 are based on molecular dynamics and compute energy components from chemical bonds, bond angles, dihedral angles, van der Waals forces, and electrostatic forces. However, these energy functions have so far led to poor prediction of protein structures. Moreover, they are neither good at capturing long-range inter-residue or inter-atomic interactions nor computationally efficient. Knowledge-based energy functions use structural features statistically derived from available experimentally verified proteins. Such energy functions are computationally cheaper since they are mostly at the residue level. Consequently, residue-residue contact (whether the distance is less than 8 Å) prediction algorithms have been developed and predicted contacts have been used as geometric constraints in ab initio PSP search 2,5,6. Contact maps have also been transformed into inter-residue distances by methods such as CONFOLD 7, CONFOLD2 8, and DESTINI 9. However, contact maps suffer from their inability to distinguish distances that are beyond 8 Å and also from the fact that, on average, more than 92% of residue pairs are not in contact 10.
In this context, inter-residue distance maps are more informative than residue-residue contact maps since distances are real numbers while contacts are boolean values. Recently, AlphaFold 11 and trRosetta 12 have shown promising results using inter-residue distances during search. Inter-residue distances have also been used in threading. Benchmark datasets. We have initially taken the same dataset used by MapPred 28 as well as SPOT-1D 41. This dataset contains 12,450 proteins. These proteins were culled from PISCES 42 in February 2017 and curated by satisfying the constraints of high resolution < 2.5 Å, R-free < 1, and pairwise sequence identity less than 25% similarity according to BlastClust 43. However, we have performed some additional cleaning on the dataset. For example, similar to some other work 9,35,41,44, we have ignored the proteins which have fewer than 25 or more than 700 residues in their sequences. During this additional cleaning, we have found 7145 proteins which have the exact same amino acid sequences in both Fasta and PDB files. The remaining 1910 proteins are selected by taking amino acid sequences from PDB where the Fasta sequence has some additional residues at the beginning or at the end of the sequence. The finally filtered dataset in total contains 9055 proteins. From these proteins, a random set of 680 proteins is selected as the validation set and the remaining 8375 proteins are considered as the training set for our proposed model. To evaluate the effectiveness of our proposed model, we have used three blind test sets: 31 free modelling (FM) targets from CASP13 45 released in 2018, 131 CAMEO-HARD targets 46 released from 8th December 2018 to 1st June 2019, and another 144 CAMEO-HARD targets 46 released from 8th August 2020 to 6th February 2021. These three datasets are denoted by CASP13.31, CAMEO.131, and CAMEO.144 respectively. In the case of CAMEO.144, those 144 proteins are obtained from a set of 409 candidate proteins after applying cleaning and excluding the sequences having more than 25% sequence similarity with the training data. For this similarity removal, we have used CD-HIT 47. Like existing methods such as trRosetta 12 and PDNET 23, which perform binned or real-valued distance prediction, we use a two-dimensional dilated ResNet, shown in Fig. 1, for our proposed SDP method. The ResNet in SDP takes the generated 2D-features and feeds them to a batch normalisation layer followed by a rectified linear unit (ReLU) activation function. Then, SDP has a 2D convolution layer with a 1 × 1 kernel, a layer of 128 residual blocks, another batch normalisation layer followed by a ReLU function, and finally another 2D convolutional layer with a 3 × 3 kernel. The last 2D convolutional layer produces the inter-residue distance map. In the layer having 128 residual blocks, each residual block contains a batch normalisation layer, an exponential linear unit (ELU) activation layer, a 2D convolution layer, a dropout layer with a dropout rate of 20%, and another 2D convolutional layer. The 2D convolution layers alternate between 3 × 3 and 1 × 5 kernels with dilation. The dilation in the second 2D convolutional layers cycles through 1, 2, and 4 steps. The last 2D convolutional layer producing the distance map has 1 filter while all other 2D convolutional layers in our model have 64 filters and the "he normal" kernel initialiser. As is done in AlphaFold 11 and PDNET 23, we add zero padding of width 5 to all sides of the input features and generate cropped samples of 128 × 128 randomly from the input.
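To make the residual-block description above concrete, a minimal Keras sketch might look as follows; the skip connection, the exact ordering of the two kernel shapes within a block, and the layer names are assumptions made for illustration and are not copied from the released SDP code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64, dilation=1, dropout=0.2):
    """One SDP-style residual block: BN -> ELU -> Conv(3x3) -> Dropout -> Conv(1x5, dilated),
    with a residual (skip) connection added at the end."""
    shortcut = x
    y = layers.BatchNormalization()(x)
    y = layers.Activation("elu")(y)
    y = layers.Conv2D(filters, (3, 3), padding="same",
                      kernel_initializer="he_normal")(y)
    y = layers.Dropout(dropout)(y)
    y = layers.Conv2D(filters, (1, 5), padding="same",
                      dilation_rate=dilation,
                      kernel_initializer="he_normal")(y)
    return layers.Add()([shortcut, y])

def residual_trunk(x, n_blocks=128):
    """Stack of residual blocks whose dilation cycles through 1, 2, 4 as described above.
    The input is assumed to already have `filters` channels, e.g. after the initial 1x1 convolution."""
    for i in range(n_blocks):
        x = residual_block(x, dilation=[1, 2, 4][i % 3])
    return x
```

The initial 1 × 1 convolution and the final 3 × 3 convolution with a single filter would wrap around this trunk to map the stacked 2D features to the predicted distance map.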
After prediction, however, we do not apply any such padding or cropping to the predicted values. As noted before, inter-residue real distance prediction is treated as a regression problem. For a regression problem, it is challenging to pick an appropriate loss function that leads to prediction of real values as correctly as possible. Commonly used loss functions such as MAE or Mean Square Error (MSE) have a tendency to focus on the long distances because these create higher loss values. However, in the real-valued inter-residue distance prediction problem, shorter distances are more meaningful than longer ones in terms of the usefulness of the predictions. Results. To show the impact of various components of the proposed SDP method, we create a number of SDP variants and compare them. We then compare SDP with the current state-of-the-art distance map predictor methods. For comparison, we mainly use MAE values and lDDT 54 scores computed from distance predictions. Table 1 shows percentages of residue pairs having distances within ranges [l, h) where h − l = 4. We show prediction results for inter-residue distances up to 36 Å and thus cover more than 59% of residue pairs, while existing methods such as RaptorX 20 and DeepDist 25 consider distances up to 16 Å and cover less than 18% of residue pairs. In this context, we define distances below 16 Å as short distances and distances below 36 Å as long distances; short distances are naturally a subset of long distances. Note that while training the ResNet, depending on whether our target is short or long distance prediction, we might use all possible residue pairs or only those having certain maximum distances. Later, in the appropriate sections, we will mention exactly which residue pairs are used in training which model. We are interested in improving long distance prediction. Determining best settings. In the SDP variants, we consider 6 features: CCMPred 33, FreeContact 39, PSSM 43, ShannonEntropy 34, 7PCP 40, and 8-state SS 51. These features have, respectively, 1, 1, 44, 2, 14, and 16 channels. Among these features, we consider CCMPred, FreeContact, and PSSM as the three core features. Then, we add ShannonEntropy to see its effectiveness empirically. Lastly, we consider adding one or both of the 7PCP and SS features to see their separate or combined effect. For the ResNet layer having residual blocks, we consider either 64 or 128 blocks. Most existing methods use 128 residual blocks, but we empirically evaluate using fewer blocks. In total, we have 10 SDP variants, which are listed below.
CF64, CF128: Core Features (CCMPred, FreeContact, and PSSM) and 64 or 128 residual blocks
SE64, SE128: ShannonEntropy with Core Features and 64 or 128 residual blocks
PC64, PC128: 7PCP with ShannonEntropy plus Core Features and 64 or 128 residual blocks
SS64, SS128: SS with ShannonEntropy plus Core Features and 64 or 128 residual blocks
SSPC64, SSPC128: SS and 7PCP with ShannonEntropy plus Core Features and 64 or 128 residual blocks
Note that considering short and long distance predictions, various subsets of residues could be used in training these 10 variants. However, to select one best model without cluttering the comparison landscape, we just show the results where all residue pairs have been used in training the 10 variants. Further, note that we show results only for the CAMEO.144 dataset, but the results are similar for the validation dataset and the other test datasets.
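Before turning to the results, here is a minimal sketch of the masked regression loss idea touched on above, that is, keeping the loss focused on the shorter, more meaningful distances by ignoring residue pairs whose true distance exceeds a cutoff. The 16 Å cutoff and the exact masking strategy are illustrative assumptions rather than the precise loss used to train SDP.

```python
import tensorflow as tf

def masked_mae(cutoff=16.0):
    """Mean absolute error computed only over residue pairs whose true distance
    is below `cutoff` (in Angstroms); all other pairs are ignored."""
    def loss(y_true, y_pred):
        mask = tf.cast(y_true < cutoff, tf.float32)
        abs_err = tf.abs(y_true - y_pred) * mask
        return tf.reduce_sum(abs_err) / (tf.reduce_sum(mask) + 1e-8)
    return loss

# Hypothetical usage with a Keras model:
# model.compile(optimizer="adam", loss=masked_mae(16.0))
```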
Figure 2 shows the MAE values obtained by the SDP variants over inter-residue distances in the ranges [0, h) where h is a threshold in multiples of 4 Å. As we can see, in general, the MAE values increase for all variants as more distant residue pairs are included. Also, 128 residual blocks are better than 64 blocks except in the SSPC variants. Adding ShannonEntropy to the three core features improves the MAE values. Then, PC128 performs better than SE128, while PC64 is better than SE64 only up to residue pair distances of 16 Å. So the addition of the 7PCP features in general improves the MAE values with 128 residual blocks. However, the addition of the SS features in general causes degradation of the MAE values. Overall, PC128 appears to be the best performer among the 10 SDP variants. So, henceforth, we will use the PC128 variant, which uses the 7PCP, ShannonEntropy, CCMPred, FreeContact, and PSSM features, as our main SDP algorithm. For the selected SDP algorithm, as discussed above, we have the following five variants depending on our target of short or long distance prediction. These five variants use the same 5 features and the same ResNet architecture; only the training datasets are different for them. We will later compare the best ones from the five variants with the state-of-the-art inter-residue distance prediction methods.
SDP-L: Targeting long distance prediction, uses our training and validation proteins exactly as described before.
SDP-X: Targeting long distance prediction, uses the training and validation proteins of PDNET 23 instead of our training and validation proteins. This allows us to see the effectiveness of our features and the ResNet model over various datasets.
SDP-Y: Targeting short distance prediction, uses the value 16 Å as the distance between any two residues that are actually more than 16 Å apart. 16 Å is a distance threshold used in RaptorX 20 and DeepDist 25.
SDP-S: Targeting short distance prediction, customises the loss function to ignore residue pairs that are actually more than 16 Å apart. Compared to the approach in SDP-Y, this is another way to target short distance prediction.
SDP-Z: Targeting short distance prediction, uses the training and validation proteins of PDNET 23 instead of our training and validation proteins. Like SDP-S, this customises the loss function to ignore residue pairs that are actually more than 16 Å apart. Like SDP-X, this allows us to see the effectiveness of our features and the ResNet model over various datasets.
Note that for training and validation, the MSA used by PDNET, SDP-X, and SDP-Z is based on the Uniclust30 database of August 2018 55. Comparison with state-of-the-art distance predictors. As noted before, SDP uses 2 coevolutionary and 3 non-coevolutionary features, namely CCMPred, FreeContact, PSSM, ShannonEntropy, and 7PCP. We compare SDP with the most recent inter-residue distance prediction methods PDNET 23, DeepDist 25, and LiXu 24. We briefly describe them below. We could not compare SDP with GANProDist 22 because its model or program is not available and its online server cannot generate distance maps for proteins with more than 500 or fewer than 40 residues. As noted before, for all testing proteins from CASP13.31, CAMEO.131, and CAMEO.144, we generate MSA using the Uniclust30 database of June 2020 53. We use the same MSA for the testing proteins when we run DeepDist and PDNET. We present our results in two ways: first, with PDNET 23 and DeepDist 25 in detail and then, with LiXu 24, briefly.
The LiXu program is not available, so we compared its published results with our results using the same distance metrics that LiXu uses. Let D ij be the actual distance between residues with indexes i and j, and S ij the sequence separation length |i − j|. Comparison with PDNET and DeepDist. Table 2 shows the mean lDDT values for the PDNET, DeepDist, and SDP methods over all residue pairs in each dataset. As per DISTEVAL 26, lDDT scores are the most effective metrics to evaluate predicted real-valued distances. As we see from the table, SDP-L among PDNET, SDP-X, and SDP-L obtains the best mean lDDT score, while SDP-S among DeepDist, SDP-Y, SDP-Z, and SDP-S obtains the best mean lDDT score. Among all 7 competing methods, SDP-S obtains the best mean lDDT score. Figure 3 shows the 95% confidence interval plots for the lDDT scores of the PDNET, DeepDist, and SDP methods. Any overlap of the confidence intervals means the differences are not statistically significant. As we see from the charts, SDP-L is significantly better than PDNET in the CAMEO.131 and CAMEO.144 proteins but not in the CASP13.31 proteins. Moreover, SDP-S is significantly better than DeepDist in all three datasets. DeepDist is also significantly better than PDNET in all three datasets. In terms of MAE values, the performance difference between SDP-L and PDNET is statistically significant as per a t test with 95% significance level (p values are 0.0 for all datasets), and so is the difference between SDP-S and DeepDist. Table 2 also shows the MAE values for the PDNET, DeepDist, and SDP methods for residue pairs that are short and long distances apart and have various sequence separation lengths. Although Table 2 shows results for all combinations, we mainly compare SDP-Y, SDP-Z, and SDP-S with DeepDist since DeepDist works mostly in short distance prediction and SDP-Y, SDP-Z, and SDP-S are trained with a target of short distance prediction. For similar reasons, for long distance prediction, we mainly compare SDP-X and SDP-L with PDNET. For MAE values, the smaller the better. As we see from Table 2, for long distance prediction (D ij < 36), DeepDist, SDP-Y, SDP-Z, and SDP-S perform much worse than PDNET, SDP-X, and SDP-L. However, SDP-L performs the best among PDNET, SDP-X, and SDP-L in all cases except for CASP13.31 and S ij > 1. Between PDNET and SDP-X, the latter performs better than the former. This shows our features and ResNet architecture are better than those of PDNET, since both PDNET and SDP-X use the same training and validation proteins and the same sequence library for MSA generation. Our training and validation proteins and MSA generation also make a difference, since both SDP-L and SDP-X use the same features and ResNet architecture but SDP-L performs better than SDP-X in most cases. For short distance prediction (D ij < 16) in Table 2, SDP-S performs the best among the 7 prediction methods, regardless of the sequence separation length. Notice that, as normally expected, the performance of PDNET, SDP-X, and SDP-L is much worse than that of DeepDist, SDP-Y, SDP-Z, and SDP-S for short distance prediction. Between SDP-Y and SDP-S, the latter performs better than the former. This shows it is better to ignore distances of 16 Å or above when the target is short distance prediction. Notice that SDP-Z is worse than SDP-S but has a mixed or comparable performance with respect to DeepDist.
The performance difference between SDP-S and SDP-Z comes from the training and validation datasets and the MSA generation, as both methods use the same features and ResNet architecture. The comparable performance of SDP-Z and DeepDist is interesting. SDP-Z uses about 3500 proteins in its training and validation sets with our input features, while DeepDist uses about 6500 proteins in its training and validation sets with many more input features than SDP-Z's. Moreover, DeepDist generates MSA based on 6 sequence libraries of 2018, while SDP-Z (like all SDP variants and PDNET) does so using 1 sequence library of August 2018. Nevertheless, all these show the effectiveness of our input features and the ResNet architecture over the differences in the protein sequences used in training and validation. Henceforth, we perform further analysis of SDP-L against PDNET and SDP-S against DeepDist. Figure 4 shows the MAE values in various actual distance ranges for SDP-L against PDNET and SDP-S against DeepDist in the various datasets. As we see, SDP-L and SDP-S obtain smaller MAE values in most cases in all datasets. Figure 5 shows the percentages of residue pairs with short and long actual distances such that those residue pairs have predicted values with absolute errors below various given threshold limits. In this figure, the larger the percentages, the better the performance. As we see from the charts, the SDP-L and SDP-S methods perform better than the other methods in most cases. Comparison with LiXu. The LiXu 24 method is related to another method 60, but its evaluation is done via contact map prediction accuracy. So we compare mainly with the LiXu 24 method. As already noted, the LiXu 24 program is not available to us. So we compare SDP's performance with the results reported in the article describing LiXu. For this comparison, we use the distance metrics used by LiXu and compute the results for the PDNET, DeepDist, and SDP methods. Table 3 shows the comparison of the PDNET, DeepDist, LiXu, and SDP methods over the CASP13.31 dataset in terms of absolute errors (AE), relative errors (RE), pairwise distance test (PDT) scores, and high-accuracy pairwise distance test (PHA) scores. Note that the LiXu 24 results are reported only for the CASP13.31 dataset. Moreover, AE is the absolute difference between the predicted and the native distances, while RE is the absolute error normalised by the average of the predicted and the native distances. Furthermore, assuming R i denotes the fraction of predicted distances with an absolute error less than i, PDT is the average of R 1, R 2, R 4, and R 8, while PHA is the average of R 0.5, R 1, R 2, and R 4. Following LiXu 24, we compute AE, RE, PDT, and PHA for distances less than 15 Å. Nevertheless, as we see from the table, SDP-S outperforms all other methods including LiXu 24 in all metrics. Moreover, LiXu performs worse than DeepDist, PDNET, and all SDP versions. Also, DeepDist is better than PDNET. Comparison of contact maps obtained from distance maps. There is a separate body of research for contact map prediction. Moreover, in this work, our interest is in improving distance map prediction, particularly long range distance prediction, and not contact map prediction at all, since distance maps are more informative 11,12 than contact maps. However, we just want to see what happens if our predicted distance values are converted into contact maps. Predicted distances can be transformed into contact map predictions in the following two ways.
Via probability method: Predicted distance D ij can be converted into a contact probability P ij = 4.0/D ij if D ij ≥ 4.0, and P ij = 1.0 otherwise. Then, the top L (or L/2 or L/5) contact probabilities are considered for each protein, where L is the number of residues in the protein. Next, precision P L (or P L/2 or P L/5) is computed for the top L (or L/2 or L/5) contact probabilities, assuming two residues are in contact when they are at most 8 Å apart. This procedure has been used in the literature 12,20,35,44. Direct comparison method: Predicted distance D ij can be directly compared with the threshold distance 8 Å, and residue pairs having distances 8 Å or below can be considered to be in contact. Then, precision and recall values can be computed. Comparison with distance map predictors on contacts. Using the via probability method described above to compute contacts from distances, Table 4 shows the precision values P L obtained by various methods when sequence separation lengths are at least 12 or 24. As we see from the table, DeepDist performs the best and SDP-L performs the second best. Using the direct comparison method described above to compute contacts from distances, Table 5 shows precision and recall values for all residue pairs. We see that DeepDist has better precision values in 2 out of 3 datasets with SDP-L performing the second best, but SDP-S and SDP-L both have better recall values than the other two methods in all datasets. In this work, our key focus is to learn long distances between residues having long sequence separation. In terms of lDDT scores in Table 2, SDP-S performs better than SDP-L. However, considering the better MAE of SDP-L over SDP-S for D ij < 36 and S ij ≤ 12 and S ij ≤ 24 in Table 2 and the better P L, precision, and recall values of SDP-L over SDP-S in Tables 4 and 5, we select SDP-L as our best setting and henceforth only show its performance. Table 4. Precision values P L (%) for top contact pairs when sequence separation lengths S ij = |i − j| are at least 12 or 24. For P L, the larger the better. The emboldened and underlined values are the best and the second best values respectively. With SDP-L, we compute contact precision values P L, P L/2, P L/5 for sequence separation lengths at least 12 and 24. In Table 6, we then compare the computed precision values with those of the contact predictors RaptorX-contact 61, the method of Chen et al. 62, and TripletRes 63. As we see from the table, for S ij ≥ 12, SDP-L outperforms the other three contact predictors but does not do so for S ij ≥ 24. Note that all three other methods are specifically designed for contact prediction while SDP-L is primarily designed for distance prediction. 3D protein structure construction. We build three dimensional structures using the distance maps predicted by SDP-L and DeepDist. We cannot do this for LiXu 24 since its program is not available for us to get its predicted distance maps. For this, we use DFOLD 64, which has been used by DeepDist 25 as well. Figure 6 (left) shows the template modeling scores (TM-scores) of the structures obtained for the CASP13.31 proteins. Clearly, SDP-L predicted distances in most cases result in better protein structures than DeepDist predicted distances. Note that DeepDist mainly predicts distances up to 16 Å while SDP-L predicts up to 36 Å.
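As an illustration of the "via probability" conversion and the top-L precision computation described earlier in this subsection, a minimal NumPy sketch might look as follows; the use of the upper triangle and the sequence-separation filter are assumptions made for the example and may differ from the evaluation scripts actually used.

```python
import numpy as np

def top_l_precision(pred_dist, true_dist, min_sep=12, frac=1.0):
    """Convert predicted distances to contact probabilities (P = 4/D, capped at 1.0)
    and compute the precision of the top L*frac pairs, where a true contact means
    an actual distance of at most 8 Angstroms."""
    L = pred_dist.shape[0]
    i, j = np.triu_indices(L, k=min_sep)           # residue pairs with |i - j| >= min_sep
    prob = 4.0 / np.maximum(pred_dist[i, j], 4.0)  # equals 1.0 when the prediction is below 4 A
    k = max(1, int(L * frac))
    top = np.argsort(-prob)[:k]                    # the k most confident pairs
    return float(np.mean(true_dist[i[top], j[top]] <= 8.0))
```

The direct comparison method would instead threshold pred_dist at 8 Å and compare the resulting boolean map against the true contacts to obtain precision and recall.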
Further, we create combined distance maps from the DeepDist and SDP-L predicted distance maps by taking the DeepDist predicted distances when the corresponding SDP-L predicted distances are less than 16 Å, and taking the SDP-L predicted distances otherwise. As we see in Figure 6 (right), the combined distance maps also result in better structures in most cases than the DeepDist predicted distance maps do. Overall, these results show that distances larger than 16 Å and up to 36 Å help obtain better three dimensional structures. Figure 7 shows sample protein structures and TM-score values obtained for three CASP13.31 proteins by using the SDP-L and DeepDist predicted distance maps with the same program DFOLD. Conclusions In this paper, for protein inter-residue real distance prediction, we propose deep learning models which use fewer types of multiple sequence alignment (MSA) and sequence-based features than existing such methods. Prediction of inter-residue distances and the use of such predicted distances in designing protein conformation scoring functions have recently led to considerable progress in protein structure prediction. However, prediction of large distances and of distances between residues with long sequence separation lengths still remains challenging. To overcome these challenges, more and more features have been used in existing distance prediction algorithms. In this paper, we scrutinise the feature space to reduce the types of features being used but, at the same time, we strive to improve the prediction accuracy. Using only 2 coevolutionary and 3 non-coevolutionary types of features, we improve mean Local Distance Difference Test (lDDT) scores by at least 10% compared to the current state-of-the-art distance prediction methods. Our proposed algorithm is named Scrutinised Distance Predictor (SDP). The SDP program along with its data is available from the website https://gitlab.com/mahnewton/sdp.
2022-01-20T06:14:23.600Z
2022-01-17T00:00:00.000
{ "year": 2022, "sha1": "f39b685d95c9f17fbc0ebc8da2602bb0c3a874bf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "f39b685d95c9f17fbc0ebc8da2602bb0c3a874bf", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
119111777
pes2o/s2orc
v3-fos-license
Comments on Background Independence and Gauge Redundancies We describe the definition and the role background independence and the closely related notion of diffeomorphism invariance play in modern string theory. These important concepts are transformed by a new understanding of gauge redundancies and their implementation in non-perturbative quantum field theory and quantum gravity. This new understanding also suggests a new role for the so-called background-independent approaches to directly quantize the gravitational field. This article is intended for a general audience, and is based on a plenary talk given in the Loops 2007 conference in Morelia, Mexico. I. INTRODUCTION The problem of quantum gravity is one of the biggest remaining mysteries in physics. Many conceptual and technical issues make it difficult to treat the gravitational field quantum mechanically in the same way one quantizes, for example, the electromagnetic fields. Moreover, one is not likely to get guidance from experimental results, since the energy scale associated with quantum gravitational effects is enormous, around 10^19 GeV. In those circumstances one has to turn to theoretical considerations and consistency checks, to narrow down the range of possibilities. One of the ideas that has provided valuable clues to the nature of quantum gravity is that of Background Independence. Recently this rather technical concept has captured wide interest well beyond the quantum gravity community, spreading into the general physics community and beyond. Furthermore, the ideas of duality and holography, developed in the last decade, have altered the role these concepts play in quantum gravity research. The purpose of this article, and the talk it is based on, is to provide a personal perspective on this rather nebulous notion, and the closely related notion of diffeomorphism invariance (and more generally gauge redundancies), from the perspective of a string theorist. The article is based on knowledge common to most practitioners of modern quantum field theory and string theory, which could be found in many review articles and textbooks; therefore the list of references will be far from exhaustive. Some references touching on similar issues are 1,2,3. The outline of this paper is as follows: this introductory section is devoted to defining the notion of background independence (henceforth BI) and motivating that definition. We then turn to discussing the evolution of this concept in a more or less historical fashion, starting from perturbative string theory (which is argued to be background-dependent), going through an interlude regarding gauge invariance and dualities in quantum field theories, and ending with a fully BI example of non-perturbative string theory, in the form of the AdS/CFT duality 4. We end with a short discussion of future directions. A. Basic Definitions Consider the simplest classical or quantum mechanical system, that of a one-dimensional particle moving in the potential V(x). The system is described by a configuration space (parametrized by x) and our role as physicists is to describe the state of the system (classically the location x, or quantum mechanically the wavefunction ψ(x)) and its time evolution. Oftentimes we have only an incomplete knowledge of V(x), or we are only able to calculate the dynamics in the vicinity of some location x 0. In that case we call x 0 the "background", and we proceed by working in perturbation theory around that background.
Obviously, this gives us only partial information. For example, we are unable to find the ground state wavefunction of the system. More technically, the perturbation series is generally not summable; therefore it is not sufficient to extract information about large fluctuations away from the chosen background, or about the long time evolution of the system. In quantum mechanics the separation into background and small fluctuations is related to the process of "quantization", the systematic inclusion of small quantum corrections around a mostly classical background 12. The background can be viewed roughly as a condensate (or a coherent state) of a large number of quanta, and including fluctuations around it accounts for the effect of an additional finite number of quanta. In the case where we separate the configuration of the system into classical and quantum parts, we call the description of the system "background dependent" and agree it is an incomplete state of knowledge. As described below, this is the situation, for example, for string perturbation theory. As described, the flaw is generally related to the use of approximate, perturbative techniques. This definition can be generalized to more complicated systems as well. Suppose we are describing the electric and magnetic fields in flat space. Then Maxwell's equations are a complete BI description of the system: the configuration space here is the infinite dimensional space of all possible field configurations E(x), B(x). Maxwell's equations do not single out any particular such configuration and therefore the description is BI, at least with the definition given here. With this definition of background, and of BI, the problem of background dependence is not merely a philosophical or aesthetic unease. Rather, any question which requires comparison of different (generally vastly different) backgrounds, or any physical quantity that receives contributions (at the required precision level) from well-separated points in configuration space, will be inadequately addressed using a background-dependent formulation. Examples of such issues, for which a BI formulation is needed, are: • In the quantum mechanics of a single particle in a double-well potential, perturbation theory around any of the minima of the potential will miss tunneling effects. Therefore, the ground state wave function cannot be correctly described using perturbation theory around a fixed background. • In quantum electrodynamics, a constant electric field can decay by generating electron-positron pairs from the vacuum, which then screen the electric field. This is the famous Schwinger effect. Any description which singles out a particular configuration of the electric field will be inadequate in describing this effect. • Moving to quantum gravity: with one's favorite cosmological selection principle, if such a principle exists, one could try to explain features of the observable universe or its initial conditions. Clearly this task requires comparison of a diverse set of possible cosmologies, and of dynamical transitions between them, and therefore a BI formulation. • Resolution of the information paradox seems to require contributions from separate backgrounds 5,6, and therefore a BI formulation of the problem. So, BI in the sense defined above seems like a desirable, indeed a necessary, ingredient of any future theory of quantum gravity. One therefore often hears the sentiment that any such theory "must" be BI.
Since I argue below that some definitions of string theory are already BI in the sense defined here, I now turn to motivating the above definitions and exemplify them using the paradigm of classical general relativity. B. Backgrounds versus Superselection Sectors The above definition of BI is in the spirit of the background field method of quantizing field theories (including gravitational theories). One fixes a background for all the fields (say a background metric if quantizing general relativity), and quantizes the small fluctuations around that background. Normally, one would not call the results of this method background dependent: all physical results do not depend on the background chosen, though the intermediate steps to obtaining those results might. Nevertheless, when working in perturbation theory, one can only show that BI holds when changing the background by a small (infinitesimal) amount. In other words the results are BI only in as much as they are perturbative, the question of full BI can only be addressed in the context of non-perturbative physics. There are other notions of BI in the literature, but many of them reduce to the above upon closer inspection, whereas some other definitions are too narrow, essentially only applying to general relativity and attempts to directly quantizing it, but not to a more general approaches to quantum gravity. Yet some other definitions are so ambitious as to classify any existing or conceivable future theory as background-dependent. To exemplify precisely what is meant by the definition of BI given here, and what is not, I now turn to classical general relativity, which is often given as the paradigm of a BI theory. Gravitational physics, including for example celestial mechanics, was described prior to Einstein's theory of general relativity by Newtonian mechanics. In Newtonian mechanics the spatial and temporal coordinates of a star are an absolute concept, defined with respect to background spacetime. This raises some aesthetic and philosophical unease, described for example in 7 , which is resolved by Einstein's promotion of the metric to a fluctuating field. In Einstein's general theory of relativity the background now acquires a dynamical nature, it is determined by Einstein's equations and indeed can change with time. The description therefore is more economical, no background structure exists, it is determined dynamically. In the context of the definitions above, the background discussed is the metric structure, which is fixed in Newtonian mechanics (or even in special relativity), but becomes dynamical in general relativity. This is a beautiful example of replacing a background structure (which constitutes an arbitrary choice) by a dynamically chosen quantity (which then has a rationale and can be derived from more fundamental structure). This has led some to judge success in formulating a fundamental theory in the degree to which it is BI. For example in the words of 7 , the "relational strategy" 13 is to "seek to make progress by identifying the background structure in our theories and removing it, replacing it with relations that evolve subject to dynamical law". The usefulness of this strategy clearly depends on what constitutes a "background" and what does not. Is there a useful and a-priori (theory-independent) definition of what constitutes a background and what does not? in other words, how does one go about "identifying" a background? 
Following 1,8 I suggest defining a background as a quantity that can change dynamically in an ordinary physical process (i.e. one that takes a finite amount of time and involves a finite amount of energy). A quantity that cannot be changed dynamically cannot be thought of as a background; rather it is a fixed parameter of the theory, which defines what is conventionally called a superselection sector. To drive the distinction home, let me illustrate the difference in some familiar examples. The location of a single particle can change dynamically; therefore any formulation of the dynamics which singles out a specific location would be inconvenient or inadequate in describing processes in which the particle travels great distances. We call such a difficulty background dependence. On the other hand, in non-relativistic quantum mechanics, the fact that we have a single particle (and not a few) is not changeable by a dynamical process. The process of identifying and removing the background in this context (conventionally known as second quantization) will make new physical processes possible, for example processes in which the net number of particles changes with time. In this way, Einstein's generalization of Newtonian gravity involves the realization that some aspects of the geometry of spacetime are dynamical, and therefore the fixed background of Newtonian dynamics is an inadequate description for precisely those physical processes in which spacetime geometry changes (for example a collapse to a black hole or even the more mundane passage of a gravitational wave through a detector). However, it is important to note that in all those examples there are some aspects of the theory that stay non-dynamical, and cannot sensibly be considered a "background". For example, in general relativity the form of the asymptotic geometry is one such aspect. The phase space of classical general relativity in asymptotically flat space is different from that of asymptotically anti-de Sitter (AdS) space. More physically, there is no finite process in classical general relativity which converts asymptotically flat space to an asymptotically AdS one. Those two theories, which are distinguished by the choice of asymptotic geometry (or more generally boundary conditions), should be thought of as defining superselection sectors; they are really different, and are not related by changing the "background" in some more fundamental theory. This situation is not at all unusual: in all conventional theories dynamics is described by a set of differential equations, and the set of solutions depends on the choice of boundary conditions. Those boundary conditions do not change dynamically, and cannot be considered to be a background. Rather, they are part of the data needed to specify the dynamical problem. It is only with this distinction between background and superselection sectors that classical general relativity, or indeed any other physical theory, can be considered to be BI. In order to have a meaningful discussion then, I will adopt this definition of what is, and is not, a background. As we shall see below, existing holographic formulations of string theory in various circumstances are fully BI in that sense. C. BI and Gauge Redundancies Once we consider the quantum mechanics of the electric and magnetic fields, or that of the gravitational field, we have a new and interesting complication, that of gauge freedom 14.
Classically, one introduces potentials A, Φ(x) in order to simplify the equations, but they are not necessary in principle; Maxwell's equations are already a complete description of the system. However, as demonstrated for example by the Aharonov-Bohm effect, quantum mechanically the situation is different. There are aspects of the system (such as the interference pattern of electrons in the presence of a localized magnetic field) which are insensitive to local values of the electric and magnetic fields alone. Rather, they are summarized by global observables (holonomies, or Wilson loops). In terms of the original fields, those holonomies would be non-local observables. The introduction of potentials is necessary to restore manifest locality to our description of the system. The price to pay is that of gauge freedom: with the introduction of potentials we now have many different potentials which encode the same physics. If we denote the space of all configurations A(x), Φ(x) by C L (L stands for "large"), the physical configuration space of the system is much smaller. One has to account for the fact that many configurations in C L are physically identical and therefore the real configuration space is schematically C L /G, where G denotes symbolically the identifications we have to impose due to gauge invariance. We see therefore that gauge invariance is distinct from, but intimately related to, BI. Traditionally, gauge invariance is achieved by working in the larger configuration space and imposing constraints ensuring that physical quantities are gauge invariant. This goes a long way towards ensuring BI as well. One of the lessons of modern investigations of non-perturbative QFT and string theory is that gauge redundancies (including diffeomorphism invariance) are not fundamental and are tied inherently to a particular perturbative expansion of the theory. As such they are inherently background-dependent. I will describe this below, and as we will see in several examples, this also has implications for the best strategy to achieve BI in a physical theory. II. BACKGROUND DEPENDENCE IN PERTURBATIVE STRING THEORY One of the most common mental images of string theory is that of the string worldsheet, the surface spanned by a string as a function of time, in a fixed spacetime manifold. The string worldsheet is then a map X : Σ → M between a two-dimensional surface Σ and a fixed spacetime manifold M, parametrized locally by coordinates X, and is endowed with the metric G µν (X). One describes the "quantization" of the string 15 by summing over all such worldsheets with prescribed boundary conditions, with a worldsheet action of the standard sigma-model form S = (1/4πα′) ∫_Σ d²σ √g g^{ab} ∂_a X^µ ∂_b X^ν G_µν(X), where the worldsheet is parametrized by σ^a, a = 1, 2, g_ab is a metric on Σ, and α′ is the inverse string tension. The manifold M with the metric G µν (X) is a "background" of string theory, namely a manifold on which strings can be consistently quantized. One of the mysterious and exciting results in perturbative string theory is the fact that the string can be consistently quantized if and only if the metric satisfies Einstein's equation (with calculable higher derivative corrections suppressed at low energies). Despite being a very common mental image, and the historical starting point of the subject, this picture is misleading in some important ways. Some of them are related to the issue of background independence. I will describe the situation briefly, since the main purpose here is to concentrate on non-perturbative physics.
First, in addition to background spacetime, one has to choose in general backgrounds for all the other massless modes of the string. The conditions for consistency of the string propagation (absence of negative norm states) then relate those backgrounds by a set of differential equations including the Einstein equation coupled to matter. In this sense the string background is a generalization of the background used in quantizing quantum field theories via the conventional background field method. The diffeomorphism symmetry is manifested precisely as it does in background field quantizations of gravity or gauge theories. One can show explicitly (and quite easily) that the formulation is diffeomorphism invariant with respect to infinitesimal diffeomorphisms, which is all one can expect in a perturbative framework. It is also worth noting that the so-called sigma model action given above, describing a propagation in weakly curved spacetime with slowly varying fields, is not the most general string background. Rather, the general perturbative string background is described by a two dimensional conformal field theory. Most of those backgrounds do not resemble a classical spacetime at all, they are abstract string backgrounds with no geometrical interpretation. Some subset of possible string backgrounds resemble classical spacetime only in some limit, when a parameter is tuned to an extreme value (in those circumstances the parameter is interpreted as a size of a geo-metrical feature of spacetime, which becomes large in the limit). Moreover, many times there is more than one such spacetime interpretation for a given string background. In this sense spacetime is inherently a derived concept in string theory, even perturbatively. Any relevant concept, including that of BI, has to avoid explicit reference to spacetime structures in order to be applicable in this context. However, quantizing the string perturbatively is clearly not a complete description of the physics, and there are many examples of interesting questions which require a more complete description. Before jumping into nonperturbative string theory, I will make a brief detour into non-perturbative gauge theories, to discuss the important idea of duality. III. INTERLUDE: FIELD THEORY DUALITIES AND GAUGE INVARIANCE Before returning to quantum gravity, the main subject of this article, let us demonstrate the role of gauge invariance in the simpler context of quantum field theory. We discuss the case of strongly coupled non-Abelian gauge theory, and concentrate on sufficiently low energies, where the theory flows to an interacting conformal field theory. This example is chosen for pedagogical reasons, as one of the simplest instances of duality, but most of its specific properties are not important. The phenomena of duality is generic in quantum field theories, and as we will see next the same set of ideas applies (in all energy scales) to more complicated examples involving quantum gravity. In order to gather evidence for duality, one needs to make exact non-perturbative calculations, or to make qualitative arguments. The former is possible in a special set of theories, and the latter gives confidence the phenomena discovered are generic. For the purpose of illustration I'll concentrate on four dimensional theories with a single supersymmetry. The duality exhibited at low energies is known as Seiberg duality 9 . So, let us consider an SU (n c ) gauge theory with n f chiral multiplets in the fundamental representations ("flavors"). 
Let us call that formulation "description A" of the theory; shortly we will discuss another description which is equivalent. To be in the regime where the theory flows to a non-trivial CFT at low energies, we need to restrict the range of n f, n c appropriately; let us do so. When quantizing theory A in perturbation theory one constructs the Hilbert space from the fields in the action: quarks and gluons (and their supersymmetric partners). Let us denote the resulting space by H L (the large Hilbert space). The space H L is not really a Hilbert space, as it has negative-norm states; hence the need for gauge invariance. Gauge invariance can be implemented in different ways (e.g. BRST quantization); in all of them one restricts attention to a smaller Hilbert space, one on which the constraints of gauge invariance have been consistently imposed. Let us call the reduced Hilbert space H S, the small Hilbert space. However, even at weak coupling the spectrum is much richer, and the physical Hilbert space is bigger than just H S, including for example solitonic excitations whose mass scales as inverse powers of the coupling constant (so they become infinitely heavy in the classical limit). It is not clear what role, if any, the original Hilbert space H L and the gauge constraint have in the full theory, away from the weak coupling region where perturbation theory applies. After all, the states in the physical Hilbert space are precisely those which are invariant under the constraints, and all physical quantities are gauge invariant. Gauge invariance, by construction, has no physical consequences. Those semi-philosophical concerns become more urgent due to the discovery of duality symmetries. It turns out, in a growing number of examples, that one can quantize different gauge theories, which look very different in perturbation theory, yet obtain precisely the same non-perturbative physics. Conversely, one can have non-perturbative quantum field theories which have more than one weak coupling limit. In each such limit they look like some weakly coupled gauge theory, but the details (the matter content, the gauge redundancies, the Lagrangian) are different in each limit. So, in the case of Seiberg Duality discussed here, we have an equivalent description, theory B. That description involves an SU(n f − n c) theory, with n f flavors and one additional scalar field M (which is a gauge singlet), and with specific interactions. The new gauge theory looks very different from the original one; it utilizes different variables (fields) and has different gauge redundancies. However, it turns out that all the non-perturbative physics (at low energies, for the range of n f, n c discussed above) is exactly identical! The situation can be summarized by a simple diagram: the full theory has two limits in which it simplifies. When some coupling is taken to an extreme value (usually then called "weak coupling"), the theory starts looking like theory A. Results near that limit, in which the coupling is weak, are reliably obtained by "quantizing" theory A and treating it perturbatively. Similarly, the same theory has another limit in which another coupling is taken to be weak (perhaps the inverse of the original coupling), where it reduces in the same sense to theory B.
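For orientation, the standard quantitative statement of the duality sketched above can be written as follows; the superpotential normalisation and the precise conformal window are quoted from the general Seiberg-duality literature rather than from the text above, so they should be read as standard reference values rather than as part of the argument here.

```latex
% Description A ("electric"): SU(n_c) with n_f flavours Q_i, \tilde{Q}^i and no superpotential.
% Description B ("magnetic"): SU(n_f - n_c) with n_f flavours q_i, \tilde{q}^i and a singlet meson M^i{}_j,
% coupled through the superpotential
\begin{equation}
  W_{\mathrm{mag}} \,=\, M^{i}{}_{j}\, q_i\, \tilde{q}^{\,j} ,
\end{equation}
% with both descriptions flowing to the same interacting CFT in the conformal window
\begin{equation}
  \tfrac{3}{2}\, n_c \,<\, n_f \,<\, 3\, n_c .
\end{equation}
```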
The discovery of duality makes it necessary to distinguish between concepts that are well-defined and useful non-perturbatively, and concepts that are specific to a certain classical limit and to the set of variables best suited for that limit. It turns out that the set of fundamental fields, their Lagrangian, and the associated gauge redundancies are all specific to a choice of variables, and are not intrinsic properties of the full non-perturbative theory. Let us look more closely at the gauge redundancies of both descriptions, for the example of Seiberg duality given above. In the first description the gauge invariance SU(n c) is realized in the traditional way, for example by quantizing canonically and imposing Gauss law constraints. On the other hand, in the set of variables utilized in description B, the original gauge freedom SU(n c) is invisible. In other words, the original SU(n c) gauge symmetry has a different implementation in those variables, namely they are all singlets. In that sense description B utilizes gauge invariant variables, albeit at the cost of introducing a new gauge redundancy (which, in turn, is invisible in the first set of variables). It is interesting that the gauge invariant variables tend to have their own gauge redundancy (though that is not always the case). I'll make some more comments on that phenomenon in the conclusions. IV. NON-PERTURBATIVE GRAVITY: DUALITY AND HOLOGRAPHY We have seen that quantum field theories, in various dimensions and with various amounts of supersymmetry, have the property of duality. This means they can be described in many equivalent ways, or in other words using many different variables. Each description, or set of variables, with all the associated mental imagery, is closely tied to a particular classical limit of the theory. The gauge redundancy is a facet of the description, not an intrinsic property of the physics. Different descriptions have different redundancies, but the same physics, which is by definition independent of all those redundancies. What about quantum gravity? Can we exhibit similar behavior in gravitating systems? The answer is yes. There are now many examples of gravitational theories which have a more conventional dual description, that of a lower-dimensional non-gravitational theory. In other words, there exist examples of quantum theories which possess more than one classical limit, and in one or more of those limits they look like weakly interacting gravitational theories, whereas in other limits they look non-gravitational. In fact, holographic dualities seem ubiquitous: any known non-perturbative formulation of quantum gravity seems to be related to a lower-dimensional field theory, which lives in some vague sense on the boundary of spacetime, encoding holographically all the information in the bulk. The most familiar of those holographic dualities is the AdS/CFT correspondence 4, which establishes a precise dictionary between all observable quantities in asymptotically AdS spaces and all physical quantities in a specific quantum field theory (the N = 4 supersymmetric gauge theory). This section is devoted to exploring the AdS/CFT correspondence and its implications for BI and the role of diffeomorphism invariance in quantizing gravity. We start by defining the AdS/CFT correspondence precisely, continue by describing some of its salient features, and then elaborate on the role of gauge invariance and the meaning of BI in this context. A.
A. Basics of AdS/CFT Consider your favorite model of quantum gravity in asymptotically AdS space 16 . Five dimensional anti-de Sitter space is given by the metric (in global coordinates, with the AdS radius set to one)

    ds^2 = -cosh^2(ρ) dt^2 + dρ^2 + sinh^2(ρ) dΩ_3^2 ,

where t denotes the global time coordinate, dΩ_3^2 is the metric on the 3-sphere, and ρ is a radial coordinate spanning the half-line. The spacetime is distinguished by having a timelike boundary, at ρ = ∞, which is conformal to R × S^3. Therefore, in order to completely specify the model one has to specify appropriate boundary conditions for all propagating degrees of freedom. One can then discuss the physical observables of the theory as a function of those boundary conditions. Let us specialize to string theory, defined on asymptotically AdS spaces. Denote the set of string fields schematically by Φ(ρ, t, Ω_3), where Ω_3 stands for the angular coordinates of the 3-sphere. Each such field satisfies boundary conditions 17 at ρ = ∞ which are given by a function J(t, Ω_3) (precise details on specifying those boundary conditions can be found at 10 ). The complete information on the theory is encoded in all observable quantities as a function of the boundary conditions J(t, Ω_3). It is strongly believed that the only diffeomorphism invariant quantities are global observables, given by integrals over local densities (no counter-example to this claim is known). They are all encoded in the partition function

    Z(J) = ∫ DΦ e^{iS(Φ)} , with the fields Φ approaching J at ρ = ∞,

where the integral sign denotes symbolically some appropriately defined path integral, or stringy generalization thereof, which is used in quantizing the gravitational theory with the specified boundary conditions for all string fields. The quantity Z(J) encodes all the gauge invariant quantities in asymptotically AdS space; quantizing string theory (which includes gravity) in asymptotically AdS spaces amounts to calculating the object Z(J). Let me elaborate on this point. The object Z(J) encodes all the well-defined quantities in asymptotically AdS spaces; therefore it encodes all the answers to the interesting questions regarding the combination of quantum mechanics and gravity in such spaces. For example, one can form small black holes and let them evaporate, perhaps even sending some observer through the apparent horizon in the process. All the well defined questions regarding this process are contained in Z(J), but not always in a manner easy to decode. Moreover, for a small enough cosmological constant, any local process in asymptotically AdS space is indistinguishable from the same process in asymptotically flat space. Only global issues, important for example for cosmology, would be sensitive to the difference in asymptotic boundary conditions. The calculation and interpretation of Z(J) is of clear importance for anyone interested in quantum gravity. We now turn to the dual description. It turns out that the object Z(J) can be calculated with no reference to quantizing gravity or AdS space. Consider the gauge theory mentioned above (N = 4 SYM). The complete information about the gauge theory is encoded in all correlation functions of gauge invariant operators. This information is summarized in the object Z(J), the partition function with sources. Schematically,

    Z(J) = ∫ DA e^{ i S_YM(A) + i ∫ J Θ } ,

where the integral sign stands for the path integral over the non-Abelian gauge fields (in the adjoint representation of SU(n)) and their supersymmetric partners, weighted by the Yang-Mills action S_YM (and additional terms involving the fermions and scalars).
The sources J(t, Ω_3) couple to all local gauge invariant operators, denoted schematically by Θ (for example Θ(t, Ω_3) = Tr(F_{µν} F^{µν}(t, Ω_3))). The partition function Z(J) is a generating functional for all correlation functions of those local operators, which are obtained from Z(J) by repeated differentiation with respect to the sources. Witten's definition 10 of the AdS/CFT correspondence is simply the statement that the two functionals Z(J) defined above are in fact one and the same. Calculating Z(J) using the gauge theory variables then amounts to a complete non-perturbative, background-independent quantization of gravity in asymptotically AdS spaces. B. Background Independence and Diffeomorphism Invariance In specifying the AdS/CFT correspondence, we restrict attention to asymptotically AdS spaces. On the gauge theory side of the correspondence, we restrict to a specific gauge theory, with given matter content and interactions, propagating on a certain four dimensional manifold (as defined above it is R × S^3). Aren't all of those choices "backgrounds", and isn't the theory then manifestly background dependent? Returning to the discussion in the introduction, particularly to the distinction made between dynamical backgrounds and superselection sectors, one is required to decide which aspects of the theory are chosen by the dynamics, and which cannot be changed by any finite dynamical process. From the gravitational description, it seems clear that the asymptotic boundary conditions are precisely those aspects of the theory which define a "superselection sector" 18 . More technically, those boundary conditions are associated with non-normalizable modes in the gravitational description of the system, and those do not fluctuate. This is even clearer in the gauge theory description of the system: the matter content, the Lagrangian, the rank of the gauge group and the manifold on which the (non-gravitational) theory propagates are all fixed for all states of the theory, and for all physical processes allowed in the theory. On the other hand, the correspondence does not specify any background metric, background values for any of the fields, or any other aspect of the theory that can change dynamically. When specifying the boundary conditions, the gauge theory description already sums the contribution of all bulk geometries (and other field configurations) which satisfy those boundary conditions; none of those backgrounds makes an appearance in the gauge theory description. In that sense the gauge theory description is as BI as any other theory in physics, including Einstein's general theory of relativity. This correspondence is also an important example for the role of diffeomorphism invariance in quantum gravity. First, a subtlety: with specific boundary conditions, there are two types of diffeomorphisms: those which change the boundary conditions, and those which do not. The former (when they exist) are global symmetries, which have physical consequences and therefore must be visible in any variables chosen. The latter type of diffeomorphisms (sometimes called bulk diffeomorphisms), those which fix the boundary conditions, are a redundancy of the description; they have no physical consequence and are implemented very differently depending on the variables chosen. For example, the definition of quantum gravity (in the superselection sector described by the given boundary conditions) through the dual gauge theory is diffeomorphism invariant with respect to the bulk diffeomorphisms.
This is achieved not by the elaborate process of imposing constraints on some auxiliary Hilbert space; instead, all variables making an appearance in the gauge theory description are already diffeomorphism invariant. This is precisely what happens in the case of gauge dualities, described in the previous section, and summarized in figure 1. V. CONCLUSIONS AND OUTLOOK I'll conclude by commenting on implications of the above for future directions. In trying to directly quantize the gravitational field, one of the main technical difficulties is imposing the diffeomorphism constraints. This results in an intense study of those constraints, their algebra and representations, and the various ways those constraints can be implemented. On the other hand, in the quantum gravity theories defined via holography the algebra of diffeomorphisms 19 is not a very useful tool: all the fields used in the holographic dual are singlets of the diffeomorphisms, and the structure of the diffeomorphism algebra gives no information. The existence of holographic dualities redefines what one means by a quantum gravitational theory. It seems that almost any theory can be regarded as quantum gravity in the sense of being diffeomorphism invariant. A more useful definition of quantum gravity is that of a quantum system which possesses a classical limit containing Einstein's gravity (in some large semi-classical space). In that sense the four dimensional N = 4 supersymmetric gauge theory is apparently a (five dimensional) quantum gravity theory, since in a suitable limit it implies universal gravitational attraction between test masses. We have seen that various instances of gauge freedom, including diffeomorphism invariance, are less fundamental than once thought. One may ask why such redundancies seem to arise generically whenever one takes the classical limit. As mentioned above, the reason seems to be locality: one needs to introduce gauge potentials and the resulting redundancies to make the formulation manifestly local. Indeed, one of the mysteries of the gauge-gravity dualities, or any other holographic definition of a quantum gravitational theory, is that of bulk locality. The gauge theory contains everything one expects from a quantum gravitational theory, for example black holes forming and evaporating, in-falling observers and so on, albeit with all this information scrambled in a way that hides its local nature. This is intimately related to the fact that bulk diffeomorphisms are realized trivially in that language. Thus, the study of the mathematical structure of diffeomorphism invariance seems to have less to do with the fundamental structure of quantum gravity, and more to do with the limit in which the theory becomes semiclassical and local in the gravitational variables. Perhaps the intense study of the structure of diffeomorphism symmetry and of possible semi-classical quantizations of the gravitational fields can aid in identifying the local bulk information in quantum gravity theories defined holographically. For example, it would be nice to see an attempt to provide a loop quantization of asymptotically AdS spaces. On general principles one would obtain a conformal field theory, and the relation of that CFT to the N = 4 supersymmetric gauge theory may be very illuminating.
Cross‐species transmission of the newly identified coronavirus 2019‐nCoV Abstract The current outbreak of viral pneumonia in the city of Wuhan, China, was caused by a novel coronavirus designated 2019‐nCoV by the World Health Organization, as determined by sequencing the viral RNA genome. Many initial patients were exposed to wildlife animals at the Huanan seafood wholesale market, where poultry, snakes, bats, and other farm animals were also sold. To investigate the possible virus reservoir, we have carried out comprehensive sequence analysis and comparison, in conjunction with relative synonymous codon usage (RSCU) bias among different animal species, based on the 2019‐nCoV sequence. Results obtained from our analyses suggest that the 2019‐nCoV may be a recombinant virus between the bat coronavirus and an origin‐unknown coronavirus. The recombination may have occurred within the viral spike glycoprotein, which recognizes a cell surface receptor. Additionally, our findings suggest that 2019‐nCoV has the most similar genetic information to bat coronavirus and the most similar codon usage bias to snake. Taken together, our results suggest that homologous recombination may occur and contribute to the cross‐species transmission of 2019‐nCoV. | INTRODUCTION China has been the epicenter of emerging and re-emerging viral infections that continue to stir global concern. In the last 20 years, China has witnessed several emerging viral diseases, including an avian influenza in 1997, 1 and other wildlife animals were also sold. 4 On 3 January 2020, WMHC updated the number of cases to a total of 44, with 11 of them in critical condition. On 5 January, the number of cases increased to 59, with 7 critically ill patients. The viral pneumonia outbreak was not caused by severe acute respiratory syndrome coronavirus (SARS-CoV), Middle East respiratory syndrome coronavirus (MERS-CoV), influenza virus, or adenovirus, as determined by laboratory tests. 4 On 10 January, it was reported that a novel coronavirus, designated 2019-nCoV by the World Health Organization (WHO), 5 had been identified by high-throughput sequencing of the viral RNA genome, which was released through virological.org. More significantly, the newly identified 2019-nCoV has also been isolated from one patient. The availability of the viral RNA sequence has made it possible to develop reverse-transcription polymerase chain reaction (RT-PCR) methods for the detection of viral RNA in samples from patients and potential hosts. 6 As a result, 217 patients were confirmed to be infected with the 2019-nCoV, and 9 patients had died as of 20 January 2020. Several patients from Wuhan were also reported in Thailand, Singapore, Hong Kong, South Korea, and Japan. High-throughput sequencing of viral RNA from patients' samples has identified a novel coronavirus designated 2019-nCoV by the World Health Organization. Currently, a total of 14 full-length sequences of the 2019-nCoV have been released to GISAID and GenBank. The Coronavirinae family consists of four genera based on their genetic properties: the genus Alphacoronavirus, genus Betacoronavirus, genus Gammacoronavirus, and genus Deltacoronavirus. 7 The coronavirus RNA genome (ranging from 26 to 32 kb) is the largest among all RNA viruses. 8 Coronaviruses can infect humans and many different animal species, including swine, cattle, horses, camels, cats, dogs, rodents, birds, bats, rabbits, ferrets, mink, snakes, and other wildlife animals. 7,9
Many coronavirus infections are subclinical. 7,9 SARS-CoV and MERS-CoV belong to the Betacoronavirus genus and are zoonotic pathogens that can cause severe respiratory diseases in humans. 7 The outbreak of viral pneumonia in Wuhan is associated with a history of exposure to a virus reservoir at the Huanan seafood wholesale market, suggesting a possible zoonosis. The seafood market also sold live animals such as snakes, marmots, birds, frogs, and hedgehogs. Currently, there is no evidence suggesting a specific wildlife host as the virus reservoir. Studies of relative synonymous codon usage (RSCU) bias between viruses and their hosts have suggested that viruses tend to evolve a codon usage bias that is comparable to that of their hosts. 10,11 Results from our analysis suggest that 2019-nCoV has the most similar genetic information to bat coronavirus and the most similar codon usage bias to snake. More interestingly, an origin-unknown homologous recombination may have occurred within the spike glycoprotein of the 2019-nCoV, 5 which may explain its cross-species transmission and limited person-to-person spread. | Phylogenetic and SimPlot analysis Phylogenetic trees were constructed using maximum-likelihood methods and the general time-reversible model of nucleotide substitution with gamma-distributed rates among sites (GTR+G substitution model) in RAxML v8.0.9. 14 Support for the inferred relationships was evaluated by a bootstrap analysis with 1000 replicates, and trees were midpoint-rooted. To investigate the putative parents of the 2019-nCoV, we performed similarity and bootscanning plot analyses based on the Kimura two-parameter model with a window size of 500 bp and a step size of 30 bp using SimPlot v.3.5.1. 15 We divided our data set into four clades; the newly discovered 2019-nCoV sequence was grouped as the query sequence. The closest relative coronaviruses (bat-SL- | Synonymous codon usage analysis To estimate the RSCU bias of the 2019-nCoV and its potential host(s), reported. 18 A heat map of RSCU was drawn with MeV 4.9.0 software. 19 The coronaviruses and their potential hosts were clustered using a Euclidean distance method. 23 and classical swine fever virus. 18 Similarity plot analysis of the 2019-nCoV revealed that homologous recombination may have occurred between Clade A strains (bat coronaviruses) and the origin-unknown isolates, located within the spike glycoprotein that recognizes a cell surface receptor (Figure 2). These characteristics indicate that cross-species transmission may be caused by homologous recombination. | Relative synonymous codon usage analysis As a parasitic microorganism, a virus's codon usage pattern resembles that of its host to some extent. The RSCU bias shows that the 2019-nCoV, bat-SL-CoVZC45, and snakes from China have a similar synonymous codon usage bias (Figure 3A, Table 1). Huanan Seafood Wholesale Market, where many patients worked or had a history of exposure to wildlife or farm animals. | DISCUSSION In this study, we have performed an evolutionary analysis of the 2019-nCoV. The host range of some animal coronaviruses was promiscuous. 7 They caught our attention only when they caused human diseases such as SARS, MERS, and 2019-nCoV pneumonia. 4,9,28 It is critical to determine the animal reservoir of the 2019-nCoV to understand the molecular mechanism of its cross-species spread. Homologous recombination within viral structural proteins between coronaviruses from different hosts may be responsible for "cross-species" transmission. 27
Information obtained from the RSCU analysis provides some insight into the question of the wildlife animal reservoir, although it requires further validation by experimental studies in animal models. Currently, the 2019-nCoV has not been isolated from any animal species, although it was obtained from one patient. Identifying and characterizing the animal reservoir of 2019-nCoV will be helpful for investigating the recombination and for a better understanding of its person-to-person spread among human populations.
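The RSCU comparison used in this analysis follows a standard formula: the observed count of a codon divided by the count expected if all synonymous codons for that amino acid were used equally. The sketch below illustrates that calculation only; the synonymous-codon table is abbreviated to two amino acids for brevity, and the function names are illustrative rather than part of any published pipeline.

    from collections import Counter

    # Abbreviated synonymous-codon table, for illustration only; a real
    # analysis would include all amino acids with more than one codon.
    SYNONYMOUS = {
        "Leu": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
        "Lys": ["AAA", "AAG"],
    }

    def codon_counts(cds: str) -> Counter:
        """Count codons in an in-frame coding sequence."""
        cds = cds.upper()
        usable = len(cds) - len(cds) % 3
        return Counter(cds[i:i + 3] for i in range(0, usable, 3))

    def rscu(cds: str) -> dict:
        """RSCU: observed codon count / (amino-acid total / number of synonyms)."""
        counts = codon_counts(cds)
        values = {}
        for aa, codons in SYNONYMOUS.items():
            total = sum(counts[c] for c in codons)
            if total == 0:
                continue
            expected = total / len(codons)
            for c in codons:
                values[c] = counts[c] / expected
        return values

    def rscu_distance(seq1: str, seq2: str) -> float:
        """Euclidean distance between two RSCU profiles, as used for clustering."""
        r1, r2 = rscu(seq1), rscu(seq2)
        shared = set(r1) & set(r2)
        return sum((r1[c] - r2[c]) ** 2 for c in shared) ** 0.5

Profiles computed this way for a virus and a set of candidate hosts can then be clustered by Euclidean distance, which is the comparison step described in the text.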
Precipitation-hardened refractory Ti-Nb-Hf-Al-Ta high-entropy alloys This study reports the structure and mechanical properties of new refractory Ti37.5Nb12.5Hf25Al25, Ti40Nb30Hf15Al15, and Ti40Nb20Ta10Hf15Al15 (at.%) high entropy alloys. After annealing at 1200 °C for 24 h, the program alloys had a single-phase B2 structure. Further annealing at 600 °C for 24 h resulted in the formation of Widmanstätten orthorhombic particles (O-phase) in the bcc matrix. The Ti40Nb30Hf15Al15 and Ti40Nb20Ta10Hf15Al15 alloys annealed at T = 1200 °C showed moderate strength and good ductility (>50%) at 22 and 600 °C, while the Ti37.5Nb12.5Hf25Al25 alloy was stronger, but less ductile, at both temperatures. Subsequent annealing at T = 600 °C significantly increased the strength of the Ti40Nb30Hf15Al15 alloy at 22 and 600 °C, while maintaining sufficient compressive ductility at room temperature. Introduction Recently introduced refractory high entropy alloys (RHEAs), which have demonstrated an outstanding capability to maintain high strength at temperatures up to T = 1600 °C [1], seem to be promising candidates for next-generation turbines. Despite their attractive high-temperature strength, one of the major drawbacks of most RHEAs is their modest mechanical performance at ambient temperatures. It is believed that balanced properties can be obtained by developing precipitation-strengthened RHEAs. One promising example of such RHEAs are alloys with the bcc/B2 structure [2,3]. The superalloy-like microstructure, with a bcc solid solution strengthened by coherent cuboidal B2 nanoparticles, provides a good combination of strength and ductility at temperatures T ≤ 600 °C. However, the development of bcc/B2 RHEAs is a non-trivial task due to weakly established composition-structure relationships and the poor fidelity of thermodynamic modeling in the case of the B2 phase [4,5]. In this work, we present a series of RHEAs strengthened by nanosized Widmanstätten orthorhombic (O-phase) particles. The O-phase precipitation was found to be an attractive option for achieving a notable strength enhancement both at room and elevated temperatures with only a slight loss in ductility. Materials and methods Ingots of the Ti37.5Nb12.5Hf25Al25, Ti40Nb30Hf15Al15, and Ti40Nb20Ta10Hf15Al15 alloys were produced by vacuum arc melting of pure (≥ 99.9 wt%) elements. The alloys were annealed in quartz tubes at 1200 °C for 24 h. Some samples were further annealed at 600 °C for 24 h. The phase composition and microstructure of the alloys were studied using transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The densities of the Ti37.5Nb12.5Hf25Al25, Ti40Nb30Hf15Al15, and Ti40Nb20Ta10Hf15Al15 alloys, determined by the hydrostatic weighing method, were 7.23 ± 0.03, 7.07 ± 0.03, and 7.87 ± 0.03 g/cm3, respectively. Isothermal compression was carried out in air at 22 °C or 600 °C using an Instron 300LX test machine. After further annealing at 600 °C, precipitation of profuse Widmanstätten second-phase particles was revealed. The particles located adjacent to grain boundaries were coarser than the particles within the grain interior. The volume fraction of the second phase was nearly equal in all program alloys and was estimated to be ~35%. The phases found after further annealing at 600 °C were identified using TEM analysis: the matrix had the B2 (Ti37.5Nb12.5Hf25Al25) or bcc (Ti40Nb30Hf15Al15 and Ti40Nb20Ta10Hf15Al15) structure, whilst the particles were identified as the O-phase (Figures 3c, d).
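As a quick consistency check on the densities quoted above, one can compare the measured values with a simple rule-of-mixtures estimate from handbook atomic weights and elemental densities; the same loop also gives the ideal configurational mixing entropy often quoted for high entropy alloys, dS_mix = -R Σ c_i ln c_i. The elemental data below are standard reference values and are not taken from this study; the script is only an illustrative sketch.

    import math

    R = 8.314  # gas constant, J/(mol K)

    # Standard handbook values (not from this study): atomic weight (g/mol), density (g/cm^3)
    ELEMENTS = {
        "Ti": (47.87, 4.51),
        "Nb": (92.91, 8.57),
        "Hf": (178.49, 13.31),
        "Al": (26.98, 2.70),
        "Ta": (180.95, 16.65),
    }

    ALLOYS = {
        "Ti37.5Nb12.5Hf25Al25": {"Ti": 0.375, "Nb": 0.125, "Hf": 0.25, "Al": 0.25},
        "Ti40Nb30Hf15Al15": {"Ti": 0.40, "Nb": 0.30, "Hf": 0.15, "Al": 0.15},
        "Ti40Nb20Ta10Hf15Al15": {"Ti": 0.40, "Nb": 0.20, "Ta": 0.10, "Hf": 0.15, "Al": 0.15},
    }

    for name, comp in ALLOYS.items():
        mass = sum(c * ELEMENTS[el][0] for el, c in comp.items())                      # g per mole of atoms
        volume = sum(c * ELEMENTS[el][0] / ELEMENTS[el][1] for el, c in comp.items())  # cm^3 per mole of atoms
        s_mix = -R * sum(c * math.log(c) for c in comp.values())
        print(f"{name}: rule-of-mixtures density {mass / volume:.2f} g/cm^3, "
              f"dS_mix {s_mix:.1f} J/(mol K)")

With these handbook inputs the rule-of-mixtures estimates fall within a few percent of the measured densities reported above, as expected for a near-ideal solid solution.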
The Ti40Nb30Hf15Al15 alloy demonstrated a ~50% strength increment both at 22 and 600 °C while maintaining reasonable compressive ductility at 22 °C. A similar annealing effect on the room-temperature mechanical properties was also found for the Ti40Nb20Ta10Hf15Al15 alloy; however, at 600 °C the alloy's performance degraded, i.e. both strength and ductility became worse in comparison with annealing at 1200 °C. In the case of the Ti37.5Nb12.5Hf25Al25 alloy, annealing at 600 °C increased its high-temperature strength, but drastically diminished its compressive ductility. The obtained findings suggest complex relationships between the chemical composition, structure, and mechanical properties of the O-phase strengthened RHEAs, since similar microstructural changes (precipitation of the O-phase particles) had a drastically different effect on the mechanical performance of the program alloys. Conclusions The microstructure and mechanical properties of the refractory Ti37.5Nb12.5Hf25Al25, Ti40Nb30Hf15Al15, and Ti40Nb20Ta10Hf15Al15 high entropy alloys were studied. The following conclusions were drawn:
• After annealing at 1200 °C, all the program alloys have a single-phase B2 structure. Further annealing at 600 °C led to the formation of a mixture of the B2 (Ti37.5Nb12.5Hf25Al25) or bcc (Ti40Nb30Hf15Al15, Ti40Nb20Ta10Hf15Al15) matrix and nanosized Widmanstätten O-phase particles.
• In the single-phase state, the Ti40Nb30Hf15Al15 and Ti40Nb20Ta10Hf15Al15 alloys demonstrated moderate strength and high compressive ductility at 22 and 600 °C. The Ti37.5Nb12.5Hf25Al25 alloy showed the highest strength, but very limited ductility.
• The precipitation of the O-phase enhanced the strength of the Ti40Nb30Hf15Al15 alloy at 22 and 600 °C without notably sacrificing compressive ductility at 22 °C. However, the mechanical properties of the Ti37.5Nb12.5Hf25Al25 and Ti40Nb20Ta10Hf15Al15 alloys deteriorated after annealing at 600 °C.
Evaluating Multiple Integrals Using Maple This paper uses the mathematical software Maple as the auxiliary tool to study two types of multiple integrals. We can obtain the infinite series forms of these two types of multiple integrals by using the binomial series and the integration term-by-term theorem. On the other hand, we propose some examples for practical calculation. The research methods adopted in this study involved finding solutions through manual calculations and verifying these solutions by using Maple. Introduction As information technology advances, whether computers can become comparable with human brains in performing abstract tasks, such as abstract art similar to the paintings of Picasso and musical compositions similar to those of Mozart, is a natural question. Currently, this appears unattainable. In addition, whether computers can solve abstract and difficult mathematical problems and develop abstract mathematical theories such as those of mathematicians also appears unfeasible. Nevertheless, in seeking alternatives, we can study what assistance mathematical software can provide. This study introduces how to conduct mathematical research using the mathematical software Maple. The main reasons for using Maple in this study are its simple instructions and ease of use, which enable beginners to learn the operating techniques in a short period. By employing the powerful computing capabilities of Maple, difficult problems can be easily solved. Even when Maple cannot determine the solution, problem-solving hints can be identified and inferred from the approximate values calculated and from solutions to similar problems, as determined by Maple. For this reason, Maple can provide insights into scientific research. Inquiring through the online support system provided by Maple or browsing the Maple website (www.maplesoft.com) can facilitate further understanding of Maple and might provide unexpected insights. For the instructions and operations of Maple, [1][2][3][4][5][6][7] can be adopted as references. The multiple integral problem is closely related to probability theory and quantum field theory; see [8][9]. For this reason, the evaluation and numerical calculation of multiple integrals is important. In this paper, we mainly study two types of n-tuple integrals, (1) and (2), where n is any positive integer and the parameters involved are real numbers for all k = 1, ..., n. We can obtain the infinite series forms of these two types of multiple integrals by using the binomial series and the integration term-by-term theorem; these are the major results of this study (i.e., Theorems 1 and 2). Moreover, we obtain some corollaries from these two theorems. For the study of related multiple integral problems, one can refer to [10][11][12][13][14][15][16][17][18][19][20][21][22][23]. In addition, we provide some multiple integrals for practical calculation. The research methods adopted in this study involved finding solutions through manual calculations and verifying these solutions by using Maple. This type of research method not only allows the discovery of calculation errors, but also helps modify the original directions of thinking from the manual and Maple calculations. Therefore, Maple provides insights and guidance regarding problem-solving methods. Main Results Firstly, we introduce two notations and two important theorems used in this paper. The following is the first result of this study, in which we find the infinite series form of the multiple integral (1). Theorem 1 Suppose n is any positive integer and the parameters are real numbers for all k = 1, ..., n.
Then the n-tuple integral (1) can be expressed as an infinite series; this follows by expanding the integrand with the binomial series and integrating term by term. By Theorem 1, we immediately have the following result. Corollary 1 Suppose n is any positive integer and the parameters are real numbers for all k = 1, ..., n. Then the n-tuple improper integral (1) likewise has an infinite series form. The following is the second major result in this paper, in which we determine the infinite series form of the multiple integral (2). Theorem 2 Under the corresponding assumptions, the n-tuple integral (2) can also be expressed as an infinite series, again by expanding the integrand with the binomial series and integrating term by term. q.e.d. By Theorem 2, we obtain the following result. Examples In the following, for the two types of multiple integrals in this study, we provide some examples and use our theorems and corollaries to determine the infinite series forms of these multiple integrals. On the other hand, we employ Maple to calculate the approximations of these multiple integrals and their solutions in order to verify our answers.
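To illustrate the general technique that underlies Theorems 1 and 2, namely expanding an integrand with the binomial series and then integrating term by term, consider the following small double integral. It is only an illustrative example and is not one of the integrals (1) or (2) treated above:

    ∫_0^1 ∫_0^1 (1 + xy)^(1/2) dx dy
      = ∫_0^1 ∫_0^1 Σ_{k≥0} C(1/2, k) (xy)^k dx dy
      = Σ_{k≥0} C(1/2, k) / (k + 1)^2
      ≈ 1.114 ,

where C(1/2, k) denotes the generalized binomial coefficient. The interchange of summation and integration is justified because the binomial series for (1 + u)^(1/2) converges uniformly on [0, 1], and the numerical value is what a floating-point evaluation of either side in a computer algebra system such as Maple returns; this is exactly the kind of cross-check between a series form and a numerical approximation used in the paper.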
Transnational cooperation to develop local barley to beer value chains Transnational cooperation is a common strategy for addressing research and development (R&D) issues resulting from similar challenges that cut across administrative borders. Value chains for food and drinks are complex, and transdisciplinary work is recognised as a method for solving complex issues. The Northern Cereals project ran from 2015 to 2018, and its goal was to increase cereal production and the value of grain products in four regions in the Northern Periphery programme area. The project included both R&D, but the main emphasis was on development, and was carried out by transdisciplinary cooperation between R&D partners and small and mediumsized enterprises (SMEs). By reviewing the project’s methods, outcomes and composition, we discuss if a framework of transnational and transdisciplinary cooperation can help to develop the value chain from local barley to beer. We found that transnational cooperation was achieved successfully, that stakeholder involvement was crucial, but that academic disciplines such as marketing and innovation could have been included. In addition, we recognised that much work remains to further increase cereal production and the use of local grain in the Northern Periphery region, but believe that this project has laid a good foundation for further progress. Introduction Transnational cooperation is one of the main strategies in many research and development (R&D) projects because it is recognised that many common issues can be more effectively and innovatively solved by collaboration than by isolated national initiatives (Dühr and Nadin 2007). Achieving added value is a goal for such cooperation (Colomb 2007). Food and drink value chains are complex, stretching from primary production, through processing, marketing, consumption, waste and recycling. Although food and drink value chains have become increasingly globalised, in recent decades a local food movement has arisen as a reaction to this, and "local" has frequently been associated with "sustainable and healthy production and consumption patterns" (Brunori et al. 2016). It is recognised that new ways of generating knowledge, apart from traditional academic discipline-based approaches, are needed to solve complex challenges. Transdisciplinary is recognised as a suitable method for solving complex issues, where researchers from different disciplines work together with stakeholders (Maasen and Lieven 2006) to develop knowledge that is integrated between science and society (Tress et al. 2005). The Northern Cereals project ran from 2015 to 2018, and its aim was to increase cereal production and the value of grain and grain products in the Northern Periphery region (as defined by the Northern Periphery and Arctic Programme). The project adopted a value chain perspective and included R&D partners from four countries in the Programme area as well as a total number of 310 stakeholders. The region shares several common features such as low population density, long distances to the larger markets, challenging growing conditions and a large impact from climate change (Natcher et al. 2019). One focus was on the barley to beer value chain and here there were large national differences in the extent of its development. The project acknowledged these differences by allowing each partner to concentrate on the aspects that were considered locally most important. 
In all activities, there were two or more partners  collaborating, creating many opportunities for mutually beneficial exchanges of knowledge and experience. The value chain from barley to beer in the Northern Periphery region has multiple challenges. One of the most basic, however, is the barley production in which the lack of knowledge, experienced producers, machinery and equipment, as well as locally adapted barley varieties, are limiting the expansion of the crop. However, market trends favouring local or high-provenance products, more plant-based food and sustainable production are making northern food and drink products more attractive to consumers (Martin et al. 2016a). One significant result of this has been an expansion in microbreweries in remote regions where they can benefit from a unique locational identity (Withers 2017). Microbreweries usually have an important positive effect on the local economy and tourism (O'Connor 2018), and as product differentiation and provenance are important, some microbreweries have a particular interest in using local cereals (Danson et al. 2015) as a means of linking their products to a locality and heritage. The development of the complete value chain requires access to a wide range of knowledge and skills. Knowledge from various disciplines such as agronomy, plant physiology, chemistry, food science, innovation, marketing and economics must align to achieve success. In addition, when concrete results such as increased barley production, improved drying, improved malt quality and higher value beer products are sought, it is necessary to work closely with the practitioners. Therefore, the project used a transdisciplinary approach where challenges were addressed in close cooperation with associate partners and other stakeholders. The research question of this article is as follows: Can a framework of transnational and transdisciplinary cooperation promote development in local barley to beer value chains? Empirically, the study focuses on work carried out in the Northern Cereals project in the Northern Periphery region. A summary of the methods utilised, outcomes, and partners and stakeholders involved in the project is presented to evaluate the cooperation and transdisciplinary effects. The study concludes with suggestions, both for further development of this value chain and for transnational cooperation. Transnational and transdisciplinary cooperation in R&D The Northern Cereals Project was funded by the EU's Northern Periphery and Arctic Programme in which both transnational and transdisciplinary cooperation are key driving forces. The programme emphasises the use of the individual strengths of the partners, and transnational cooperation facilitates a joint approach for tackling common issues. According to Pisani and Burighel (2014), transnational cooperation projects create an opportunity "to exchange fruitful information, contextual expertise and local knowledge, thus enhancing the opportunities for innovation and economic benefits". Ray (2001) explained the rationale behind the benefits stemming from transnational cooperation as "to take advantage of similarity", "to take advantage of complementarity" and "to reach critical mass". However, it has also been recognised that successful (policy) transfer depends on the nature and quality of the cooperation (Colomb 2007). Cooperation in the Northern Periphery region aiming to improve the barley to beer value chain is well suited to this rationale. 
Similarities between the areas include geography, climatic conditions and cultural background, which make cooperation easy as participants feel a natural connectivity. In addition, using each participant's strengths improves the result and, in a region that is sparsely populated, there are considerable advantages in linking together SMEs from the whole region through knowledge exchange and networking activities. Such cooperation is not easy, however, and although transnational collaboration is expanding, good examples of working across administrative borders are exceptions (Dühr and Nadin 2007). The academic division into narrow disciplines has fostered specialisation into increasingly more focused areas. Many of today's complex issues, such as climate change, food security or poverty reduction, cannot be resolved within single discipline. As a reaction to this, transdisciplinary methodologies have been proposed as a solution, where researchers from different disciplines work together towards a common goal where theory and knowledge between the various disciplines are integrated, and where non-academic participants are included in the work (Tress et al. 2005). Scholz and Steiner (2015) conceived transdisciplinarity as a mutual learning process between science and society to attain knowledge about a specific real-world issue. Moreover, it facilitates bringing societal concerns into scientific research ). This type of R&D must be contextualised to the specific study area, and the aim of the outcome is to produce "socially robust knowledge". According to Nowotny (2003), such "robustness" is more likely to be achieved through the involvement of a heterogeneous group of "experts". Stakeholder involvement is a crucial part of transdisciplinary research and ensures that "the 'right problem' gets addressed in 'the right way'" (Maasen and Lieven 2006). Triste et al. (2014) also included increased learning opportunities as an advantage stemming from stakeholder involvement, which is also considered to ensure impact, as in real life changes (Gasparatos et al. 2008). However, there are many definitions of what a stakeholder is. Alrøe and Noe (2016) wrote that stakeholders are "those who will bear the consequences and carry out actions for change". Key stakeholders or primary stakeholders are also used as concepts for stakeholders more directly connected to (in this case) the value chain from barley to beer (Alrøe and Noe 2016). Tress et al. (2005) also emphasised that the level of stakeholder participationthe extent to which they are informed, consulted, involved or in controldetermines their influence on the work. For a project, it is therefore important to define the role of the stakeholders, especially to fulfil the expectations the involved parties have about the project. Methods The objectives of the Northern Cereals project were very broad and could only be addressed by accessing knowledge and experience from many different disciplines. All the partners had some of these skills or knowledge, but no single partner had access to all of them. The project, therefore, provided a mechanism for pooling this expertise for the benefit of all partner regions. Although the project included both R&D, the emphasis was on the latter and the partners spent most of their time working with farmers and SMEs in very practical situations and under diverse "northern" conditions. 
Simplified, the barley to beer value chain consists of the following four distinct parts: growing the barley, malting, brewing and marketing. All of them are connected, and each depends on the quality of the output from earlier steps of the chain to perform well. Figure 1 shows these four parts and the various challenges along the value chain. The activities in the Northern Cereals project were structured under work packages (WPs), coordinated by an overarching Management WP led by Matis, Iceland. Each WP was led by an individual researcher with skills that were relevant to the WP, and the WP leaders were responsible for coordinating WP activities with participating researchers in all the other countries. All researchers then liaised with stakeholders in each WP to ensure that the WP activities were implemented. Project WPs addressed the challenges identified in Figure 1 through five main areas of activity summarised in the following sections. Test production of barley To develop the complete value chain, it is necessary to start with well-adapted varieties of barley. This was the main task in the Northern Cereals preliminary project (Reykdal et al. 2016), but several trials ( Table 1) were also established during the main project period, and these were also used for demonstration purposes. Data collected from trial plots included characteristics such as grain and straw yield, grain moisture at harvest, thousand grain weight, the occurrence of diseases and lodging and rainfall and temperature data over the growing season. There was a special emphasis on the identification of early maturing varieties because of the importance, throughout the region, of early harvesting. Data from the trials were analysed by the researchers responsible for them and were summarised for the other project partners. Grain from trials and demonstrations was used for other project activities, especially product development. Both in the Faroe Islands and in Northern Norway, the introduction of appropriate machinery was also an important issue for potential cereal farmers. Guidelines and handbooks for farmers were developed in all the partner languages to aid the production of good quality grain. In the Northern Periphery region, post-harvest drying of grain is essential for safe storage, and case studies and guidelines concerning drying were developed in Iceland. Malt quality experiments and case studies Several small-scale malting trials ( Table 1) were carried out during the project period in Scotland, Iceland and Norway, and guidelines summarising quality criteria for malting barley and case studies of floor malting methods were prepared. Two experimental malting trials were performed using test malting facilities at The Norwegian University of Life Sciences. The colour, moisture, extract, nitrogen content, friability, homogeneity and diastatic power of malt produced during the project typically were analysed. Although the brewers need to take all of these factors into account while brewing, the extract is especially important as it is a measure of the amount of sugar obtained from the malt after mashing, which is important for alcohol yield. Data from the samples malted during the project were summarised and included in reports stored on the project website which made them available to all partners. Product development Product development was performed by the breweries themselves, and several new products were taken to the market, including beer made from locally grown and malted barley. 
However, most of these products were test products or produced in limited quantities because of the shortage of local malt. The acceptability of products was assessed by the companies themselves based on in-house testing and feedback from their own client base, sometimes using social media or web-based sites. Market knowledge The marketing segment of the barley to beer value chain was mainly handled by the brewery stakeholders through their normal marketing channels. However, the R&D project partners also carried out a review of the market situation for barley and malt in the region. This also included global trends in the cereal food and beverage markets. Knowledge transfer Knowledge transfer between the project's associate partners (mostly SMEs) and the R&D partners in the different countries was the key to the project's success. Important mechanisms for doing this were the project meetings and the four conferences. They were held in four of the participating countries, with invited presenters and stakeholders, and included field trips and study visits to farms, malting facilities and breweries in addition to social activities. These facilitated the development of new networks and cooperation as well as knowledge transfer. In addition, all regions held local knowledge transfer events throughout the project period. The project also offered 4-day training placements for participants interested in starting their own malting, at a floor malting facility in Orkney, Scotland. In total, 310 stakeholders participated in the project in various ways (Table 2). 4 The context: the value chain from barley to beer in the Northern Periphery area The region shares several common features such as low population density, long distances to the larger markets, challenging growing conditions (especially poor soils, large variations in rainfall and temperature during the short growing season, and difficult harvesting conditions) and a large impact from climate change (principally increased temperatures in both summer and winter). Recent warming in the northern regions has helped to increase the potential for barley production (Martin et al. 2017), and this may help to offset the decreased agricultural production predicted in some more southern areas as a result of climate change (Muller et al. 2010). Nevertheless, in some years, other weather-related factors, such as high rainfall, drought, gales and late or early frosts, continue to make growing barley risky in parts of the region. Barley has been grown in Norway and northern Scotland since ancient times, and production was introduced to both the Faroe Islands and Iceland in the ninth century. However, it is only in Orkney and Northern Norway that the cultivation has been continuous, although in Northern Norway there was a marked decline after the 1940s, due to both economic and political reasons. In Iceland, cultivation started again in 1923 and has been continuous since then, with considerably increased production over the last few decades. In the Faroe Islands, barley was recently re-introduced. In Orkney, barley is an important established crop that is cultivated with a high level of mechanisation. Barley is well-suited for the cool climates of the Northern Periphery region, where the growing season is short and strong winds and frost can be expected. However, grain is often harvested at a high moisture content, which means that it needs to be dried for food uses, although there is always an option to process the grain as wet feed for animals.
An important potential market for local barley is to supply malt for brewing. This results from the recent expansion of northern tourism, the increased demand for high provenance drink products and the growth of microbreweries or craft breweries. For example, in Iceland the number of tourists has quadrupled, from 489,000 in 2010 to about 2,225,000. In peripheral northern areas, there are, however, some major constraints on using local barley to produce beer, especially the availability of grain of a suitable quality and quantity, and a lack of local facilities for using this to make malt. Grain quality issues stem mainly from a lack of specific malting varieties adapted to northern areas. As a result, non-malting varieties tend to be used, which are likely to give malt with lower extract yields than imported malt made from the recognised malting varieties. Challenging harvesting conditions may also make it difficult to obtain grain of good quality with a high germination percentage for malting. Brewing within the region is therefore mainly carried out with malt imported from a small number of very large malting companies in Germany or the United Kingdom (Nordic Innovation Centre 2009). Most of the partner regions have considerable potential for expanding the area of barley cultivated. In Iceland, for example, it has been estimated (Ministry of Industries and Innovation 2011) that annual production of cereals (barley) could be increased from about 16,000 t to 40,000-50,000 t per year. With the current increase in microbreweries across the region, part of this increased barley production could also be utilised to make local malt and beer. The region, therefore, has opportunities for increased self-sufficiency and sustainability by increasing domestic cereal production for feed, food and drinks. Results - review of outcomes From a value chain perspective, we review the main findings and work done in the Northern Cereals project on the barley to beer value chain. These findings are contextualised to the Northern Periphery region. However, areas with similar production constraints, where improved local malting is needed, or where there is a market demand for local beer with special qualities, will also find much relevant information in this review. New possibilities for growing, drying and storing high-quality barley The growing season in northern areas is becoming longer, and further lengthening is expected due to higher average temperatures (Uleberg et al. 2014). However, it is also expected that there will be more rainfall in the autumn, especially in coastal areas. This is one of the main challenges for cereal production in the Northern Periphery region, and the project produced several country-specific reports. They found that a trend towards warmer growing seasons is favouring barley production and has probably been particularly beneficial in Iceland, but that excessive or inadequate rainfall constrains production in many areas. They also found that "both monthly temperature and rainfall show high variability from year to year across the region, which can result in very variable growing seasons". Such yearly variations have a high impact on production, as the proportion of years with bad harvests will determine the economic viability of barley production. Another important factor for cereal production and possible expansion in the area is the availability of arable land.
Arable land (here defined as the land suitable for barley cultivation) was found to be a limiting factor in many regions (Sveinsson and Dalmannsdóttir 2016). The project investigated the proportion of arable land relative to total land area and found that it is unevenly distributed throughout the Northern Periphery region, with Northern Norway having the lowest proportion (0.8%) and Orkney having the highest (15%) (Sveinsson 2017). Although Iceland had only 3% arable land, Sveinsson (2017) concluded that this area had the largest potential for increased barley production, due to the proportion of available arable land. Well-adapted cultivars are crucial for the successful barley production in northern regions, and early maturity is a key factor due to the short growing season. Some old landraces and varieties bred in the early 1900s are still grown, and prominent among these is Bere, an ancient Scottish landrace, which has a long tradition of cultivation in Orkney. Also, in Northern Norway, four old barley varieties are preserved in addition to the landrace Dønnes ). In the Faroes, there are two surviving landraces, Sigurd and Tampar. For the value chain from barley to beer a special emphasis was put on old varieties and landraces both because of their earliness and local adaptations (Schmidt et al. 2019), as well as their potential for telling stories about food and drink through both new and traditional barley products (Martin et al. 2009). Seed multiplication of the northern Norwegian and Faroese varieties is enabling farmers to start growing these varieties and so there is now real potential for using them for future product development. In the last 40 years, there has only been one breeding programme for adapted varieties for the region, in Iceland. The Agricultural University of Iceland has run a barleybreeding programme, which has released four commercially available cultivars (Hilmarsson et al. 2017). Among these is Iskria, which has been grown successfully in countries within the Northern Periphery region. In the Northern Cereals preliminary project, a project supported by the North Atlantic cooperation, promising cultivars were compared in five countries (Reykdal et al. 2016). Icelandic varieties were also used in the Northern Cereals project for product development in northern Scotland, Northern Norway and the Faroe Islands. Growing cereals in the project region requires specific agronomic knowledge. The project paid special attention to knowledge transfer and capacity building through handbooks and guidelines, which were made available in four of the region's languages. In addition, knowledge transfer events were important tasks for all project partners. The basis for the guidelines is the review by Sveinsson and Hermannsson (2017), which relates barley physiology to the factors necessary for successfully growing the crop in northern areas. They noted that barley is the hardiest cereal species with the lowest heat requirement for growth, and that early varieties require about 1300 growing degree days (with a base temperature of 0°C) to reach maturity. Barley seeds tolerate mild frost during germination and, in this region, it is imperative for successful production that sowing should be done as early as possible. In most of the region (especially the Faroes, Northern Norway and Iceland), grain is harvested when it is physiologically mature, although at a high moisture content (i.e. usually more than 22%). 
This has implications for obtaining good quality malt and makes it difficult to consistently produce malting barley with the same quality criteria used by large malting companies in more southern barley growing areas (Martin 2015). In addition, as grain is usually harvested at high moisture contents, it needs to be dried to 12-14% moisture for safe storage (Reykdal 2017). This adds energy costs to the production, and it is imperative for economic viability that this is done as inexpensively as possible. The project also investigated economic and environmental aspects of sustainability, including local production, best practices for high-quality grain and malt, and added value through new products based on placebased information and traditions. Other research has also shown that local food value chains have a positive impact on some aspects of sustainabilityfor example, added value at the local level (Brunori et al. 2016). In the project, a Life Cycle Assessment was performed at the Icelandic model farm Thorvaldseyri. This included looking at environmental impacts and energy-use on the farm. The results can be used to demonstrate how environmental impacts and use of resources can be minimised to improve sustainability and reduce footprint (Smárason 2016). Local malting of barley from the Northern Periphery region Recognising that the production of good quality local malt is dependent on growers producing grain of an appropriate quality, the project identified grain quality criteria normally required for malting barley and then developed region-specific growing guidelines to help farmers obtain grain of this quality. However, research trials associated with the project in Northern Norway demonstrated the difficulties in achieving this in more challenging parts of the region. Thus, 2015 was a good growing season with a timely harvest, and the grain had a reasonable moisture content and all seven varieties tested showed good germination and malted successfully (Thomsen 2016). In contrast, 2016 and 2017 were much less favourable for growing, resulting in later harvests, higher grain moisture content at harvest and grain with a very low germination percentage (66% and 30% from the 2016 and 2017 crops, respectively), even 7 months after harvest (Halland 2018). Low germination, as a result of seed dormancy, also challenged malt production in Iceland ). In contrast, in Orkney where growing conditions were more favourable, it was possible in all growing seasons from 2015 to 2017 to produce good-quality barley for malting which had lost seed dormancy by about 4 months from harvest, giving around 98% germination. As most of the breweries involved in the project were unfamiliar with details of the malting process, a high priority was given to knowledge exchange activities related to malting. This included carrying out smallscale malting within the partner regions and making the results available through reports or case studies as well as through presentations. In parts of the region, traditional floor malting of barley is still carried out and it was recognised that this is a low-cost, easily transferable method of malting which might be appropriate for some commercial partners. The floor malting process is also ideal for demonstrating the steps involved in malting. 
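Two of the quantitative points made above, the roughly 1300 growing degree days needed by early varieties and the need to dry grain from harvest moisture down to 12-14% for safe storage, lend themselves to simple back-of-envelope calculations. The sketch below is only illustrative; the function names and the example figures are ours and are not taken from the project reports.

    def growing_degree_days(daily_mean_temps_c, base_c=0.0):
        """Accumulated growing degree days over a season (base temperature 0 C, as used for early barley)."""
        return sum(max(t - base_c, 0.0) for t in daily_mean_temps_c)

    def water_to_remove_kg(wet_mass_kg, moisture_in, moisture_out):
        """Water (kg) to evaporate to take grain from one wet-basis moisture content to another."""
        dry_matter = wet_mass_kg * (1.0 - moisture_in)
        target_mass = dry_matter / (1.0 - moisture_out)
        return wet_mass_kg - target_mass

    # Illustrative figures: one tonne of grain harvested at 22% moisture, dried to 13%,
    # needs roughly 103 kg of water removed per tonne.
    print(water_to_remove_kg(1000.0, 0.22, 0.13))

    # Example: a 120-day season averaging 11 C accumulates 120 * 11 = 1320 degree days,
    # just above the ~1300 needed by early varieties.

Rough figures of this kind make it easy to see why drying adds a significant energy cost per tonne and why season length and mean temperature set such a hard limit on which varieties can be grown.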
One of the Orkney associate partners in the project was a distillery with floor malting facilities, and the company agreed to malt a test batch of 7.5 t of Orkney-grown Golden Promise barley for use by an Orkney brewery and to allow the process to be documented as a case study (Martin et al. 2016b). Laboratory analysis of the malt showed that it was of good quality, although it had a lower extract than would have been obtained from modern malting varieties. In Norway, seven varieties grown in Tromsø were test malted at the Agricultural University of Norway (Thomsen 2016). Malting qualities were found to vary between varieties, but the conclusion was that "we have however, so far no reason to believe that it is not possible to grow malting barley in Northern Norway." In Iceland, test malting trials discovered large variation in germination, but around 200 kg of Iskria was successfully malted for further processing (Sigurðsson 2018). The variation in initial grain quality is a challenge for local malting in the Northern Periphery region. To achieve good malt, it is necessary to adjust the malting process according to the initial quality. For instance, it is especially important to ensure even-sized kernels and a more even germination by screening the grain. Thomsen et al. (2018) addressed this in a test malting of four different barley varieties from four different regions with varying initial quality and using three different malting processes. The conclusion from their work was that the malting method chosen has a strong influence on malting quality and extract yield. The lack of small-scale equipment for malting in the region is a constraint, but the project suggested some key recommendations for inexpensive floor malting (Martin et al. 2016b). These included steeping vessels that can be easily filled and emptied, a sufficiently large floor area for the scale of malting being carried out, machinery for turning the malt and clearing the floor, drying facilities that allow for temperature regulation, bagging equipment for storing the malt, as well as milling equipment. To assist further with knowledge exchange about malting, a distillery still doing floor malting in Orkney agreed to provide placements for partners from other regions to learn the technique, and eight individuals from SMEs in Iceland, the Faroes and Northern Norway took up placements. Since then, some of these have implemented their own floor malting operations.
Market potential for local malt and beer
Although it has been difficult in most of the regions, within the life of the project, to develop beers made from locally grown barley, there have been some notable successes. In Northern Norway, three breweries produced beers using local barley, and one of these included a traditionally made smoky malt from a farm in Stjørdal, near Trondheim. In Orkney, Bere was used by a local brewery for producing two new beers. The same brewery also used locally grown and malted Golden Promise to produce a new beer, but preferred to use Bere, in spite of its lower malt extract, because of its effect on beer flavour, its long association with the islands, and its unique marketing story. The potential for higher value food and drink products from locally grown barley in the Northern Periphery region results from a global trend of increased consumer interest in both high provenance and local food and drink products (Martin et al. 2016a). In part, this is a reaction to the anonymity and complexity of today's global supply chains.
The main reason, then, for increased local malt production is "based on a wish for local malt, greater self-sufficiency, shorter supply-chains and last but not least the special qualities obtained in these areas". In the Northern Periphery region, there has been an increase in the number of local food producers and microbreweries, as well as an increased focus on local food experiences in tourism. Recent data from Norway show that sales of local food increased three times as fast as total food sales in grocery stores (Nielsen Scan Track 2016). There has also been a huge growth in tourism in the Northern Periphery region, and tourists are increasingly asking for local food and drink as part of their experience (Turistundersøkelsen 2016). In the Northern Periphery region, there has been a significant increase in the craft beer industry over the last 10 years. The craft beer revolution started in the United States in the early 1980s, but did not fully reach the Northern Periphery region until about 2010. However, in the last 5 years there has been a large increase in the number of microbreweries. In 2017, 4% of the total volume of beer sold in Norway came from microbreweries, and the Brewery and Beverage Association in Norway estimates that the market for beer from microbreweries can reach 8-10% of national sales before 2020. In addition to using malt in beer, there was considerable interest among project associate partners in all regions in using local barley for whisky production. This reflects strong global demand for high-provenance whiskies, which can be seen from the growth of premium products such as Scottish single malt whisky (Scotch Whisky Association 2018). It also stems from an expansion in microdistilleries in parallel to that of microbreweries.
Discussion
The project utilised various methods to tackle the different challenges (Figure 1) along the value chain from barley to beer. The review of outcomes shows that much new knowledge was gained and development achieved. For each step of the value chain, Table 3 summarises the work undertaken, the results obtained and the partners and stakeholders that were involved. It is clear from Table 3 that the main focus in the Northern Cereals project was on the upstream parts of the value chain, growing and malting barley. However, the knowledge generated and work done on the farming part of the value chain were also used when working on the value chain from barley to food products, which was also an important part of the project. There were three main reasons for the emphasis on the upstream value chain. First, growing high-quality barley in the Northern Periphery region, especially for malting, is generally in its infancy and needs to be increased to provide sufficient raw material for making local beer. Second, apart from in Orkney, local malting was almost nonexistent in the region, and new knowledge needs to be generated to obtain malt for brewing. Third, none of the R&D partners had an academic background in marketing or economics. In spite of this, the project was able to deliver upstream outcomes based on the combination of the background of the researchers and the expertise of the brewery stakeholders. Most researchers came from applied research institutes with good brewing industry links and a knowledge of the practical challenges faced by the industry in all parts of the value chain.
Within the region, many microbreweries have been operating for several years, and these have good knowledge and practical experience of brewing, product development and marketing. Researchers were therefore able to rely on the expertise of the microbreweries themselves for achieving outcomes in the brewing and marketing part of the value chain. The role of microbreweries in delivering upstream project outcomes was very much a reflection of the project's transdisciplinary approach. According to Tress et al. (2005), such an approach combines researchers from different academic disciplines as well as stakeholders, and all should work together towards a common goal where theory and knowledge are integrated. In the Northern Cereals project, the objectives sought were mainly tangible, concrete outcomes, in addition to practical knowledge building. Integration of different academic theories was, therefore, to a large degree, not needed to allow the various disciplines "to talk together" and solve common tasks. Stakeholder involvement, mainly farmers, extension workers and microbreweries, was crucial to the success of the project and imperative to achieving concrete results such as new barley production, improved drying, malting and new beer products. Although the value chain perspective of the project might have benefitted from research partners from marketing and innovation disciplines, the shortages of grain and malt would still have been the limitation in complete value chain development. It is recommended, however, that such expertise should be included in a follow-up project to realise the full potential of local beer products. Transnational cooperation was at the core of the Northern Cereals project. The project acknowledged that the regions (and research partners) had different strengths and challenges in the value chain, and because of this each partner concentrated on the aspects that were considered the most important locally. Table 3 shows that partners from all regions participated in many of the outcomes, and there was no outcome where only a single partner/region was involved. This shows that transnational cooperation was truly an integral part of this project. Producing the many outcomes shown in Table 3 does not occur by transnational cooperation in itself. A good plan, appropriate expertise and careful follow-up during the project period are essential for such cooperation resulting in concrete outcomes. In addition, success depends on the nature and quality of the cooperation (Colomb 2007), and where there are historical or other preexisting links between the partners in the project, transnational cooperation can be very important (Dühr and Nadin 2007). Partners in The Northern Cereals project came from an area where cultural-historical roots go back more than 1,000 years, and still today there are many similarities in cultural expression, food, language, social interaction, etc. In addition, the climate and environment are similar enough for the knowledge to be transferable, but sufficiently different for interesting comparisons to be made. We found that the similarities among the many partners and stakeholders strengthened the cooperation, for example, by making the many study trips and company visits of immediate relevance. One of the benefits of transnational cooperation according to Ray (2001) is "to reach critical mass". 
The Northern Periphery Area is sparsely populated, and the geographical distances are long so that transnational cooperation can help overcome the shortage of critical mass for development and knowledge building. Another advantage is that it can be easier to share company knowledge with the companies in other regions and countries that are not direct competitors in the market. Conclusion We found that transnational cooperation proved to be very beneficial for achieving the aims of the Northern Cereals project and for maximising the impact of a small pool of cereal R&D expertise spread across a large geographic region. To tackle the complexity of the challenges, a transdisciplinary approach was taken and a wide variety of practical and theoretical studies were undertaken utilising the specialist knowledge from many disciplines. The inclusion of many SMEs and other stakeholders ensured that the research was focused on overcoming the various challenges in the value chain, and made, we believe, the outcomes of the Northern Cereals project of major practical relevance to the involved parties. Although stakeholder involvement was probably the project's main strength, the lack of academic knowledge on marketing and innovation may have been a shortcoming. A particularly useful outcome of the Northern Cereals project has been the identification of constraints on the development of the barley to beer value chain, and addressing these should be the priority of future R&D work. Foremost among these is the development of locally adapted varieties of barley which are suitable for malting. This requires a long-term commitment to a regional plant breeding programme in which the development of malting types would be part of a wider programme of developing barley for a range of purposes. The Agricultural University of Iceland's current programme is an excellent starting point for this, but it could be made even more effective by increasing collaboration with researchers and breeders in other northern countries/regions and by testing materials across the region. Another very specialised area requiring attention is the need for small-scale malting equipment, grain drying equipment and development of appropriate methods for malting barley produced in the region. Such facilities would be particularly valuable in the most remote areas (Iceland, Northern Norway and the Faroes). Although the project did not investigate in detail economic and policy issues, it is recognised that these also have a strong influence on barley production. For example, it is known that the lack of cereal production in Northern Norway is partly due to political reasons accompanied by lower subsidies for such production in this region (Bunger and Tufte 2016). Other important areas are product development issues related to using local barley for beer and the need for marketing and economic support to obtain maximum benefit from its high provenance. All of the above future R&D activities would benefit from a transnational and transdisciplinary approach. Although much work remains to further increase cereal production and the use of local grain in the Northern Periphery region, we believe that this project has laid a good foundation for further progress.
Multi-Higgs-doublet models and singular alignment We consider a 4-Higgs-doublet model in which each Higgs doublet gives mass to one of the fermion sets {mt}, {mb, mτ, mc}, {mμ, ms}, and {md, mu, me}. The sets have the feature that within each of them the masses are similar. Our model explains the mass hierarchies of the sets by hierarchies of the vacuum expectation values of the Higgs doublets associated to them. All Yukawa couplings are therefore of order one. Neutrino masses are generated by a type-I seesaw mechanism with PeV-scale singlet neutrinos. To avoid the appearance of tree-level flavour changing neutral currents, we assume that all Yukawa matrices are singularly aligned in flavour space. We mean by this that the Yukawa matrices are given as linear combinations of the rank 1 matrices that appear in the singular value decomposition of the mass matrix. In general, singular alignment allows to avoid flavour changing neutral currents in models with multiple Higgs doublets. Introduction An understanding of fermion masses and mixing is still lacking. In particular, the mass values display unexplained patterns and hierarchies; this is the case when one considers the three generations as well as the species: 1 (2) ? m ν2 (1) ? m ν1(3) 1 That is, any of the four masses within the same generation. JHEP07(2019)036 We can summarize the situation by asking the following questions: • Why is the top quark mass the only fermion mass of the order of the electroweak (EW) scale, m t ≈ v EW with v EW 174 GeV? • Why is the top quark mass so much heavier than the rest of fermion masses, m t m f ? • Why do all charged fermions satisfy the hierarchy, m 3 m 2 m 1 ? • Why are for the first generation the masses (except for neutrinos) closer to each other than for the other two generations, m d ∼ m u ∼ m e versus m c m s ∼ m µ and m t m b ∼ m τ ? • What could the interspecies hierarchy, e.g. m t m b > m τ m ν3 , be telling us? • Why are neutrino masses much smaller than the charged fermions, 2 m ν ∼ 10 −7 m e ? This is commonly referred to as the problem of mass [1]. Part of the mystery lies in the contrast of expecting Yukawa couplings to be order one, y f = O(1), whereas the observed values with a single Higgs doublet are much smaller than 1, except for the top quark, y f 1 (f = t). In the following, we assume Yukawa couplings to be order one, y f = O(1), and try to understand the fermion mass patterns through a theory with multiple Higgs doublets. The most extreme approach along this line would be the "private Higgs" scenario, in which among other things, for each fermion a Higgs doublet is introduced [2,3], see also [4][5][6]. The mass hierarchies are explained by hierarchies of vacuum expectation values of the individual Higgs doublets: m f v f , where v f is the vacuum expectation value of the Higgs that is responsible for the fermion f = u, d, c, s, t, b, e, µ, τ . In general, in a model with N Higgs doublets, Φ i (i = 1, 2, . . . , N ), where each of their neutral components acquires a vacuum expectation value (vev), Φ 0 j = v j e iθ j , a relation among these vacua is satisfied: Here v EW 174 GeV, v i ≥ 0, and all doublets share the same hypercharge Y = 1 2 . 
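Written out, the relation among the vacua is the familiar sum rule \(\sum_{i=1}^{N} v_i^2 = v_{\mathrm{EW}}^2 \simeq (174\,\mathrm{GeV})^2\): the individual vevs share the electroweak scale, so no single vev can exceed \(v_{\mathrm{EW}}\).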
Now, if we consider that each single Higgs is fully responsible for the mass of one single fermion (where N should equal the number of fermions in the theory), then the previous relation is modified to JHEP07(2019)036 Furthermore, if we consider that Yukawa couplings should be order one numbers, y f = O(1), we could approximately say that, to good approximation In the case of the Standard Model (SM), with N = 12 fermions, the previous equation is fulfilled. We will call this relation the mass-vacuum relation. An amusing possibility from this relation is that if all N doublets have the same vev, one would have N fermions with mass of about 174/ √ N GeV, which would be about 50 GeV for 12 fermions. If two doublets have vev v EW / √ 2 and the rest a vanishing vev, then there would be two fermions with mass v EW / √ 2 123 GeV. In turn, if only one doublet has a vev, there is only one fermion with mass v EW . Forcing the mass-vacuum relation to be fulfilled and assuming that only one Higgs acquires a vev leaves hardly any mass for the other fermions and explains the top quark's dominance. Moreover, this same argument could help us to understand why neutrinos are so light when assumed as Dirac fermions. 3 The particle content in the main scenario discussed in this paper is smaller than that for a private Higgs-like scenario. Our observation is that the fermion masses can be grouped into four different sets: In each set the masses are quite similar and can in fact be explained by similar O(1) Yukawa couplings to an individual Higgs doublet Φ t,b,µ,d . Such a 4-Higgs-Doublet Model has to the best of our knowledge not been considered before. We find several attractive and testable features of the model, and demonstrate that it is not in conflict with measured Higgs couplings and other tests. Our model traces the hierarchy of the mass values of the different fermion sets to hierarchies of vevs of their respective Higgs doublets. We show that the smaller vevs can be induced by the larger vevs, and the hierarchy among them arises because the four vevs are protected by different symmetries. The main problem in multi-Higgs doublet models is of course the presence of flavour changing neutral currents (FCNC). Theories which through the use of symmetries naturally avoid those FCNC are said to possess Natural Flavour Conservation (NFC). Options to evade FCNC include, next to arranging the additional scalar particles to be very heavy, suppressing dangerous Yukawa couplings [9], separating the Yukawa matrices such that only one scalar doublet couples to a given right-handed fermion field [10,11], or Yukawa alignment [12,13], in which the different Yukawa matrices are proportional to each other. As a proof of principle that FCNC can be entirely avoided in our setup, we assume here another solution. We note that if the Yukawa matrices are proportional to any of the rank-one matrices that appear in the singular value decomposition of the fermion mass matrices, FCNC are absent. We denote this as "singular alignment". The paper is organized as follows: in section 2 we present singular alignment and discuss some of its features. The model with four Higgs doublets to explain the masses of the individual sets {m t }, {m b , m τ , m c }, {m µ , m s }, and {m d , m u , m e } is presented and JHEP07(2019)036 analyzed in section 3. Conclusions are presented in section 4, and some technical details are delegated to appendices. 
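As a rough numerical illustration of the mass-vacuum relation introduced above (a sketch only, using approximate charged-fermion masses in GeV and neglecting neutrinos; the values below are not taken from this paper), one can check that the sum of squared Standard Model fermion masses is dominated by the top quark and lies close to \(v_{\mathrm{EW}}^2 \simeq (174\,\mathrm{GeV})^2\):

# Approximate charged-fermion masses in GeV (illustrative values only).
masses = {
    "u": 0.0022, "d": 0.0047, "s": 0.095, "c": 1.27, "b": 4.18, "t": 172.8,
    "e": 0.000511, "mu": 0.1057, "tau": 1.777,
}
v_ew = 174.0  # electroweak vev in GeV

sum_m2 = sum(m ** 2 for m in masses.values())
print(f"sum of m_f^2 = {sum_m2:.0f} GeV^2, v_EW^2 = {v_ew ** 2:.0f} GeV^2")
print(f"ratio = {sum_m2 / v_ew ** 2:.3f}, top-quark share = {masses['t'] ** 2 / sum_m2:.3f}")

With \(y_f = \mathcal{O}(1)\) and \(m_f \simeq y_f v_f\), a ratio close to one is simply the statement that essentially the whole electroweak vev must be assigned to the doublet responsible for the top quark.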
Singular alignment In general, having multiple Higgs doublets coupling to fermions with the same electric charge will produce tree-level FCNC, which are experimentally strongly constrained. Three main possibilities to overcome this problem have typically been studied: (i) assume "dangerous" Yukawa couplings to be sufficiently suppressed at tree-level [9]; (ii) assume the corresponding Yukawa matrices of each type of fermion (up-type quarks, down-type quarks and charged leptons) to be proportional to the mass matrix [12,13]; (iii) impose an adequate symmetry such that each fermion type couples exactly to one of the doublets [10,11]. In the following, we comment only on the last two possibilities and introduce singular alignment. Let us start from the most general case for a Yukawa Lagrangian in a N HDM, where F L and f R are three dimensional vectors in family space and transform as a doublet and as a singlet under SU(2) L , respectively. The N Higgs doublets acquire a vev, v a = Φ 0 a . In general, Yukawa couplings will couple all fermions to all Higgses. Therefore, the most general form of a mass matrix is Each Yukawa matrix, Y i , is a 3 × 3, arbitrary, and complex matrix with rank 3. The appearance of tree-level FCNC is automatic within this setup as diagonalization of the mass matrices does not mean, in general, simultaneous diagonalization of the individual Yukawa matrices. However, to avoid introducing dangerous tree-level FCNC the following can be done: NFC theories. Adequate symmetries are imposed in such a way that each of the three charged fermions will only couple to a single Higgs [10,11], i.e. for each fermion type holds where no sum over k is intended. In this case diagonalization of the l.h.s. means diagonalization of the r.h.s. . For N Higgs doublets, the easiest way to achieve this is via a symmetry of the form where in order for this symmetry to be realizable = N − 1 should hold. Realizable symmetries are a set of allowed discrete symmetries of the scalar potential which have no accidental larger groups that could give rise, for example, to massless Goldstone bosons [14]. JHEP07(2019)036 Now, before turning to the next possiblity, let us comment on the Singular Value Decomposition (SVD) of a mass matrix: (2.5) Here L and R are unitary matrices which rotate independently the left-and right-handed fermion fields and Σ = diag(m 1 , m 2 , m 3 ) with m i > 0. Realize that the SVD may also be written as a sum of three rank 1 matrices, where P i are three projector operators, P 2 i = P i and i P i = 1 3×3 , which have the form In the following, we will denote each rank 1 matrix appearing in the SVD by and call it singular matrix. Yukawa alignment. As each Yukawa term in eq. (2.2) is a rank 3 matrix, a second possibility to avoid FCNC, is to assume that each of them is proportional to the full SVD [12,13]: Here the ζ i are real and we have that Diagonalization of the l.h.s. means diagonalization of the r.h.s. . This is understandable as each Yukawa matrix is rank 3 and thus if related to the singular matrices should be composed of the three independent singular matrices. Furthermore, one has the constraint N j=1 ζ j = 1 . (2.11) Singular alignment. A more general scenario is that in which each Yukawa matrix is given by a linear combination of the singular matrices, i.e. JHEP07(2019)036 Appendix A gives a straightforward proof of the absence of FCNC in case the Yukawa matrices take this form. 
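The absence of tree-level FCNC under singular alignment can also be checked numerically. The following sketch is purely illustrative (it is not taken from the paper; random numbers stand in for the unitary rotations, the alignment coefficients and the vevs): every Yukawa matrix built as a linear combination of the rank 1 singular matrices becomes diagonal in the mass basis, and the masses come out as vev-weighted sums of the coefficients.

import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    # QR decomposition of a random complex matrix yields a unitary matrix.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

n_gen, n_higgs = 3, 4
L, R = random_unitary(n_gen), random_unitary(n_gen)

# Rank 1 singular matrices Delta_i = L P_i R^dagger, with projectors P_i = |i><i|.
deltas = [np.outer(L[:, i], R[:, i].conj()) for i in range(n_gen)]

# Singular alignment: each Yukawa matrix is a linear combination of the Delta_i.
coeffs = np.abs(rng.normal(size=(n_higgs, n_gen)))   # positive stand-ins for the eta/Omega/Lambda constants
yukawas = [sum(c[i] * deltas[i] for i in range(n_gen)) for c in coeffs]

vevs = np.abs(rng.normal(size=n_higgs))
M = sum(v * Y for v, Y in zip(vevs, yukawas))        # mass matrix M = sum_a v_a Y_a

for Y in yukawas:
    Y_mass_basis = L.conj().T @ Y @ R
    off_diag = Y_mass_basis - np.diag(np.diagonal(Y_mass_basis))
    assert np.allclose(off_diag, 0)                  # every Yukawa is diagonal: no tree-level FCNC

# Each mass is an independent linear combination of the vevs.
print(np.round(np.diagonal(L.conj().T @ M @ R).real, 3))
print(np.round(coeffs.T @ vevs, 3))

The two printed vectors agree, reflecting the fact that all fermion masses are linear combinations of the different vevs; replacing any one Yukawa matrix by a generic rank 3 matrix makes the off-diagonal entries, and hence tree-level flavour violation, reappear.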
Comparing with the full mass matrix, which can be written as Hence, all fermion masses are independent linear combinations of the different vevs and all Higgs doublets can be responsible for giving mass to all fermions. In practice, models may also lead to Yukawa matrices with ranks less than 3. In this case the singular alignment can still hold and the only new difference would be to have some of the constants η i , Ω i , Λ i appearing in eq. (2.12) equal to zero. In short, singular alignment is the very strong Ansatz of choosing Yukawa matrices to be related to the rank 1 matrices appearing in the SVD. Through this alignment, no treelevel FCNC appear for any number of Higgs doublets. Let us consider now some explicit examples. The two-fermion family case We assume N Higgs doublets for two generations of charged fermions. In this case, the mass matrix is If no symmetry is imposed all Higgs doublets are allowed to couple to our 6 fermions. Therefore, all the Yukawa matrices are rank 2. Let us implement the singular alignment. For this purpose, the SVD of the mass matrix is written as where, without any loss of generality, we have chosen to work in the basis where the righthanded fermions have been already transformed and we have explicitly written the most general expression for a unitary matrix in two dimensions. The two singular matrices are Singularly aligning our Yukawa matrices in flavour space means JHEP07(2019)036 We identify the masses as Regarding FCNC, note that in the mass basis we have Hence, no FCNC are introduced in this model of N > 1 Higgs doublets that couple to all fermions. Also, if matrices of lower rank are obtained through the use of convenient symmetries, then our general expressions in eq. (2.20) will still hold but with some of the parameters η i or Ω i vanishing. The three-fermion family Case The next example deals with three generations and three Higgs doublets. To be singularly aligned, each rank 1 Yukawa matrix should be seen as a column vector satisfying unitarity conditions (recall eq. (2.12)): Here we have denoted (a 1 , a 2 , a 3 ) T ≡ |a and similarly for the other columns. Notice that the Yukawa couplings should not enter into these expressions. A practical way to implement all these conditions is to make use of an explicit parametrization of a unitary matrix. Then, a singularly aligned mass matrix could take the form where T and Q are diagonal phase matrices with two phases each and we have used the shorthand notation for the sine and cosine functions. Recall that a 3 × 3 unitary matrix possesses 6 complex phases (one of which is global) and 3 real parameters. Each column is proportional to a given singular matrix. At last, realize that masses and mixing get completely decoupled when singularly aligning the Yukawa matrices. Recall in this context that any set of singular vectors corresponding to a set of non-degenerate singular values is always orthonormal. Hierarchical fermion masses A shared feature among all the charged fermions is that their masses are hierarchical, To theoretically understand this in a N HDM with singular alignment, see eq. (2.14), one must understand under what conditions this property gets always realized. We are not interested in any fine-tuned scenario where through adequate values for the set of parameters {η, Ω, Λ} we generate hierarchical masses, we are assuming that Furthermore, we are actually interested in the minimal number of scalar doublets necessary to explain all the observed patterns in the fermion masses. 
For the moment, notice that one possibility is to couple a single Higgs to each different flavour with the same electric charge. In this case we have It is obvious then that the only way to achieve hierarchical masses with This fact is connected to the mass-vacuum relation. The maximal setup, if neutrinos are assumed as Dirac particles, would require 12 Higgs doublets. However, this large number of scalars can be significantly reduced if one notices that among the different masses there are majorly 4 (5) mass scales, where the (5) corresponds to Dirac neutrino masses. This is what we will deal with in section 3. In case of Majorana neutrinos there are four possibilities depending on from which Higgs doublet the Dirac mass matrix of the type-I seesaw mechanism stems. We will come back to this point later. Of course, neutrino mass could also be independent of the Higgs doublets. Beyond singular alignment If a small amount of flavour violation via neutral mediators is permitted, then a less restrictive venue can be obtained through the following conditions: (i) the third Yukawa matrix for all fermion species is the only rank 1 matrix and proportional to the third singular matrix, (2.28) (ii) the first and second Yukawa matrices are no longer proportional to the singular matrices, so they may in general produce FCNC; (iii) however, to produce a hierarchy between the first and second generation, the second Yukawa matrix should be at most rank 2 and have no contributions to the first family masses; (iv) the first Yukawa matrix can be rank 3, 2 or 1. In other words, the three Yukawa matrices should imply the sequential symmetry breaking chain JHEP07(2019)036 where F might either be baryon or lepton number. The introduction of flavour violation as allowed by the two lightest families means no risk as this set of flavour transitions will be sequentially suppressed by the approximately conserved symmetries at each step. Radiative stability In the absence of a specific symmetry protection, one-loop quantum corrections may induce misalignment in the different singularly aligned Yukawa matrices and bring about FCNC's at the loop level. It is important to know if this effect is small and compatible with current experimental constraints. The study of this issue can be directly related to the work of ref. [13] wherein the issue of radiative stability was investigated for the most generalized Yukawa aligned-like form given by where Ξ i is a complex 3 × 3 matrix subject to the condition This generalized Yukawa-alignment means breaking flavour universality. Notice that the normal Yukawa-alignment, eq. (2.9), can be recovered when all diagonal elements in the r.h.s. of eq. (2.31) are equal (flavour universal). In ref. [13], it was shown that the induced misalignment is a quite small effect, as the initial alignment in the multi-Higgs Lagrangian has some residual flavour symmetries, which tightly limit the type of FCNC operators that can be generated at higher orders. This can be easily understood as the Yukawa alignment is a linear realization of the minimal flavour violation hypothesis [15] and could be derived from it [16]. This hypothesis states that the only source of flavour breaking should come from the Yukawa matrices, even in the presence of new particles and interactions [17][18][19][20]. The previous discussion also applies to the Singular Alignment as it is possible to show that it is equivalent to the generalized Yukawa-alignment via substitution in eq. 
(2.30) of the relations Therefore, the ansatz of singularly aligning Yukawa matrices in flavour space, in order to avoid FCNC's at tree level, has a sufficiently small misalignment, induced by one-loop quantum corrections, consistent with all known phenomenological tests. The minimal setup: a 4HDM Now we discuss a 4HDM which takes into account that among the measured fermion masses four different sets can be identified: for the masses in their respective sets. 4 The corresponding mass-vacuum-like relation in analogy to eq. (1.3) would take the form The model can be constructed by first imposing fields to transform under the symmetry Z 2 × Z 2 × Z 2 , as shown in table 1. The Yukawa Lagrangian implied by the charge assignment is The way in which we have employed the charge assignment to couple fermions with Higgs doublets has given us a model where all Yukawa matrices for the charged fermions are rank Similar expressions can be given for the other fermion species. Notice we are employing a conventional notation for the Yukawa couplings, y f i , in order to distinguish at this point generic Yukawa matrices from those which have been singularly aligned. Now, to singularly align these matrices, we demand that each column should be given by a single singular matrix (in order to have a hierarchy of masses with order one Yukawa couplings, cf. section 2.3), in our up-type example this means: The explicit form of these singular matrices was given in section 2.2, they correspond to one of the three columns in eq. (2.24). We can also write them as ∆ u,c,t = L † P 1,2,3 R, see the discussion around eq. (2.8). The model presented here arranges that a certain Higgs doublet will couple to a given set of fermions, even if they possess different electric charge. All corresponding Yukawa matrices will already be rank 1. Through the special requirement that Yukawa matrices should be singularly aligned in flavour space, as discussed in section 2, it is possible to avoid flavour violation at tree-level. JHEP07(2019)036 The model allows to reproduce fermion mixing, as shown in appendix B. Neutrino masses are generated via the type-I seesaw mechanism. We have associated the three right-handed neutrinos to the Higgs doublet Φ d . This implies that via the type-I seesaw mechanism the heavy neutrino mass scale M should be around PeV, where we have assumed that m D Φ 0 d 10 MeV and m ν m 2 D /M 0.1 eV. The contributions to some lepton-flavor-violation processes coming from the admixture of the heavy righthanded neutrinos with the left-handed ones can already be estimated via the standard formulae of type-I seesaw models [21]. This calculation is greatly simplified in the limit M M W , which is our case. 5 The following upper bound to various processes of interest may be obtained: B th r (µ → eγ) < 10 −14 , B th r (µ → 3e) < 10 −18 , B th r (τ → 3µ) < 10 −7 , and B th r (τ → µγ) < 10 −4 . Notice how, in general, these numbers will still get suppressions by small mixing-like angles of the order of m D /M ∼ 10 −8 times order one numbers (at most) arising from corresponding form factors. The present experimental upper limits on these decays at 90% C.L. are given by: B exp r (µ → eγ) < 4.2×10 −13 [22], B exp r (µ → 3e) < 10 −12 [23], B exp r (τ → 3µ) < 4.6×10 −8 [24] and B exp r (τ → µγ) < 3.3×10 −8 [25]. The smallness of the estimated branching ratios is of no surprise, as the high-scale type-I seesaw is known for giving very suppressed rates, see for example [26] and references therein. 
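As a rough cross-check of the quoted scale: with a Dirac mass of order the down-type scale, \(m_D \sim 10\,\mathrm{MeV}\), and a light-neutrino mass \(m_\nu \sim 0.1\,\mathrm{eV}\), the seesaw relation gives \(M \sim m_D^2/m_\nu \sim (10^{7}\,\mathrm{eV})^2/(10^{-1}\,\mathrm{eV}) = 10^{15}\,\mathrm{eV} = 1\,\mathrm{PeV}\), with an active-sterile admixture of order \(m_D/M \sim 10^{-8}\), which is the small mixing-like angle responsible for the strongly suppressed rates above.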
On the other hand, possible contributions coming from the scalar mediators at the loop level can also be expected to be sufficiently small and consistent with phenomenological tests as suggested by analyses of the minimal lepton-flavour violation hypothesis [27,28] and as discussed in section 2.5. Notice, however, that this set of flavour-violating processes have a strong dependence in the ratio between the two scales (Λ LN /Λ LFV ) 4 , where the first and second one correspond to the scale where lepton number (LN) is broken and lepton-flavour-violation (LFV) is produced. Therefore, if a large hierarchy exists between these two scales one may obtain observable effects. In our case, we may estimate this ratio as (v d M/v 2 τ ) 2 ∼ 10 8 which is still sufficiently small. For example, after substitution in B th r (µ → eγ) = 1.6 × 10 24 (Λ LN /Λ LFV ) 4 we obtain B th r (µ → eγ) ∼ 10 −16 , where the previous relation was taken from ref. [27]. Hence, we see that for the particular purposes of this work the rates for LFV processes are expected to be in agreement with the current upper bounds. The scalar potential The most general, renormalizable and gauge invariant scalar potential of the model is (3.5) Here a, b = t, b, µ, d, and for the sake of simplicity we are assuming all couplings to be real. JHEP07(2019)036 In order to generate a hierarchy among the vevs we choose the particular case where µ 2 t < 0 and µ 2 b,µ,d > 0 , (3.6) such that the only Higgs acquiring a vev is Φ t : We are following the convention Φ 0 . As Φ t has no charge under any of the three Abelian symmetries, see table 1, its vev preserves the symmetry. Equivalently, the symmetries are protecting the other scalars from acquiring a vev. Thereafter, through the following subset of soft-breaking terms, where µ 2 ab v 2 t , µ 2 a , we induce vevs for the other three Higgs doublets. To be more specific, the particular choice of soft-breaking terms is motivated by the fact that each of them will only break a particular piece of the whole symmetry. That is, only break Z 2 , Z 2 , and Z 2 , correspondingly. Therefore, once the EW symmetry is spontaneously broken, the first soft-breaking term will induce a vev to Φ b which in return will induce a vev to Φ µ until finally reaching Φ d . It is possible to show that within this limit the minimization conditions are satisfied if the vevs are given as together with eq. (3.7) and where (XY Z) ab = X ab + Y ab + Z ab and µ 2 ab < 0. By virtue of this choice, the vevs are naturally small and obey the desired hierarchy Fermionic couplings to the SM-like Higgs The introduction of the soft breaking terms (eq. (3.8)) in the Higgs potential will produce a small mixing among the four Higgs doublets. For the moment, let us focus on the neutral scalars. We assume all parameters in the scalar potential to be real. Through this choice we consider it to be CP -symmetric. Hence, no admixture between the real and imaginary components of the neutral fields is allowed as they have definite CP quantum numbers. To compute their couplings to all fermions we start from the Yukawa Lagrangian in the mass basis which is written as where we have changed our notation {η, Ω, Λ} to the conventional one, y f . We can bring the CP -even scalar sector to its mass basis via where R is an orthogonal matrix, R T R = RR T = 1 4×4 , and h 0 is the lightest state with a mass of m h 0 125 GeV. Now, in order to find out how fermions couple to the SM-like Higgs, h 0 , we substitute φ k = R 1k h 0 in eq. 
(3.11) to obtain We can define the following four classes of fermion-scalar couplings: sin α 1 sin α 2 sin α 3 . (3.14) The angles α i in these relations are We note an attractive and testable feature of the model, namely that the couplings between fermions and the SM-like Higgs are modified in the same way for each set. That is, the couplings of the sets {m t }, {m b , m τ , m c }, {m µ , m s }, and {m d , m u , m e } are changed with respect to the SM-case by the same amount for each set, see figure 2. The coupling to the top quark is always essentially SM-like, ξ t h = 1. This is understood because R 11 and cos α 1 are both very close to 1, which is caused by the vev hierarchy v t v b,µ,s . Notice that, even though the mixing R ik in all cases is proportional to the soft-breaking parameters, the implied smallness in |R ik | may be compensated by α i 1, and therefore, in general, ξ f h should not be expected to be small. In fact, within this scenario we can have four different possibilities: (i) hyper-couplings with ξ f h > 1, aligned couplings with JHEP07(2019)036 ξ f h = 1, hypo-couplings with ξ f h < 1, and a mixture of any of these (the ξ f h can even be negative). One has to confront the couplings in this model with present measurements of Higgs couplings. We adopt the following numbers from combined fits of data taken at √ s = 13 TeV [29]: No useful information about the couplings to first and second generation fermions exist, except for the muon, where the uncertainties are nevertheless very large. In our case κ Z,W can be reproduced as in any multi-Higgs doublet model. The values of κ t,τ,b need to be compared with our ξ f h , which is what the plots in figure 2 do for the four benchmark scenarios to be discussed next. 6 Numerical examples A thorough analysis of the Higgs potential is beyond the scope of this work, nevertheless, we will present four numerical benchmark scenarios. They obey the following conditions and constraints: • Vacuum stability: where (a = b, µ, d). This set of conditions was computed from the requirement that the squared mass matrices for the charged scalars and pseudo-scalars should be positive definite, for further details see appendix C. • Contributions to the ρ parameter: that is, it should be consistent with the maximum allowed deviation from the SMexpectation [34]. The first and second uncertainty originates whether the oblique parameter U is fixed to zero or not within the multi-parameter fit. For our calculations, we employ the one-loop contribution coming from a generic N -Higgs doublet model obtained in ref. [35], for further details see appendix D. Our analysis is consistent with ref. [36] where the interplay between the maximum number of N -Higgs doublets and their allowed masses in the oblique parameters is discussed. • Charged Higgs masses above the lower bound [37]: • Recently, a search for a Higgs-like particle, φ, decaying into a pair of bottom quarks with at least one additional bottom in proton-proton collision was reported [38]. The following mass range was excluded with 95% confidence level: 100 GeV < m φ < 300 GeV . While not directly comparable with our scenario, our benchmark points nevertheless obey this constraint. Table 3. Outputs for each of the four numerical benchmark scenarios. Scalar masses are given in GeV. Conclusions Within the SM the huge hierarchy of Yukawa couplings remains a puzzle. 
In this regard we used the fact that the observed fermion masses indicate that the following sets have similar Yukawa couplings: {m t }, {m b , m τ , m c }, {m µ , m s } and {m d , m u , m e }. We have shown that a 4HDM can be constructed that explains this feature. Each set of fermions has its own Higgs doublet. Their vevs are hierarchical which explains the mass hierarchy of the sets. In the model a flavour symmetry was introduced to generate rank 1 Yukawa matrices. Soft breaking was included in the potential, which makes it possible to induce the smaller vevs by the larger ones, where each smaller vev corresponds to a different broken symmetry, and is thus protected by it. All Yukawa couplings take on "natural" values of order 1. In the model neutrino masses are generated via a type-I seesaw mechanism with a Dirac neutrino mass matrix of order of the down-quark mass scale, hence the right-handed singlet Majorana neutrinos are of PeV-scale. We have demonstrated that fermions of a given set couple to the SM-like Higgs with the same modified factor. In this regard, the clearest signal for this kind of models is to investigate their coupling to the SM-like Higgs and determine if they are grouped. The top quark couples to the SM-like Higgs essentially with the same strength as in the SM. Benchmark scenarios with definite predictions for those couplings as well as for scalar masses were provided. Multi-Higgs doublet models face of course problems with FCNC. By singularly aligning the Yukawa matrices we have shown explicitly that those can be evaded. This alignment JHEP07(2019)036 assumes that the Yukawa matrices are related to the rank 1 matrices that appear in the singular value decomposition of the mass matrices. In this manner, it is in general, not only in our model, possible to avoid FCNC while simultaneously coupling several Higgs doublets to an individual given fermion. Moreover, its equivalence to the most general Yukawa alignment also allows us to state that its misalignment at the one-loop level is sufficiently small and is consistent with all known phenomenological tests. The model as well as aspects of singular alignment allow for several follow-up studies regarding both model building and phenomenology. A Proof of FCNC disappearance in the singular basis Consider a given fermion type coupled to N different scalar doublets. Its mass matrix would be given by where we have assumed that each scalar doublet acquires a vev. On the other hand, the SVD of the mass matrix is where L and R are unitary transformations acting independently on the left-and righthanded fields. Using Dirac notation, the SVD can be rewritten as Singular alignment requires assuming each Yukawa matrix to be related to the rank 1 matrices, | i r i |, of the SVD. In general, we can express the Yukawa matrices as a linear combination of the rank one singular matrices where the parameters {η, Ω, Λ} are real. In the mass basis, each Yukawa matrix would take the form, Therefore, through singularly aligning we have avoided the appearance of dangerous treelevel FCNC. For last, notice that after substitution of the previous relation in eq. (A.1) we obtain JHEP07(2019)036 B Numerical example for quark mixing The following singular matrices allow us to reproduce exactly the observed mixing in the quark sector as recently reported in the PDG 2018 [34]: By virtue of those conditions one may easily derive from the charged scalar matrix that: For the pseudo-scalar matrix: , (a = b, µ, d) . 
(C.8) We have not neglected here the off-diagonal contributions to the CP -even scalar matrix, as even though they are very small, they can still influence the Higgs-fermion couplings as already previously discussed. D Constraints from the ρ parameter The one-loop level contribution to the ρ parameter from a theory with N Higgs doublets has been calculated in ref. [35] and is expressed as where F (x, y) ≡ an interesting property, as it grows linearly with max(x, y), that is, quadratically with the heaviest-scalar mass, when that mass becomes very large. As long as the difference in the scalar masses is small, δ 200 GeV, the maximum value of this function lies within the 3σ deviation in ∆ρ, as shown in figure 3, even if the masses become very heavy, M > 300 GeV. This can be seen from Taylor expanding the function, δ x, Moreover, realize that these contributions will get further suppressed by the factors coming from the off-diagonal matrix elements in the product of the orthogonal matrices. In the limit in which we are working, mass matrices can be considered to a very good degree of accuracy to be diagonal, therefore, the maximum amount of contributions in this model will take the form
Improving Microvascular Anastomosis Efficiency by Combining Open-Loop and Airborne Suture Techniques
Introduction
This is the left femoral artery of an adult Lewis rat weighing 320 grams. The vessel diameter was measured at 0.55 mm (pre-dilatation) and 0.70 mm (post-dilatation). 16X magnification, 10-0 nylon suture, and a 1A vascular double clamp were used. This surgery was performed without assistance. The author acknowledges that none of the techniques illustrated in this video is new; however, these techniques, which may improve the success rate or ease the anastomosis without assistance, are not commonly used in other centers.
Open-Loop Technique
In this technique, one can easily use the tips of the forceps to elevate the edge of the vessel [1,2]. This elevation not only enables one to see the lumen clearly at all times instead of counting on "feel", but also helps to evert the vessel edges, avoiding inversion of the adventitia and muscle layer into the lumen. The same logic applies to skin suturing, where Adson forceps are used to elevate the skin edges to reduce the angle of needle entrance for better eversion. Please note that, thanks to the open-loop technique, the surgeon is able to use only one tip of the forceps, even in very narrow suture intervals during anterior wall repair, in order to elevate the right vessel edge for a more secure anastomosis with better eversion. Another advantage of this technique is that it speeds up the anastomosis by 25% [1]. This anastomosis was performed in 8 minutes. Traction techniques using suture ends generally require assistance to lift the vessel edge and are based on "feel". One can feel the muscle layer without seeing it; however, a small piece of adventitia or a foreign body that might get into the anastomosis site can be missed. Moreover, if there is a tendency of intimal separation, traction techniques without forceps support beneath the intima may cause more separation. The author does not use other techniques unless the position of the vessels makes it exceptionally difficult to perform an anastomosis. It is notable that the needle holder is rotated clockwise while pulling the needle out of the vessel, followed by an anti-clockwise rotation to hold and reposition the needle to continue with the next loop. The author prefers creating smaller loops and leaving the long thread in the first and last loops (as demonstrated in the video) if the vessel size is relatively small, the magnification is high, and the space is not wide enough to accommodate the redundant suture that tends to get stuck. However, in free flap surgeries with larger vessels and spaces, the author prefers larger loops and leaving a short thread end in the first loop, which makes the anastomosis faster and easier without having to pull the suture through for each loop.
Airborne Suture Technique
This technique does not in itself improve the patency rate; however, it speeds up the anastomosis by preventing the suture from sticking to the surrounding tissue [3]. This technique is much easier to apply at lower magnification with larger sutures. The author suggests practicing under these conditions first. The long thread is generally preferred to be twice as long as the short thread [3]. If the lengths of both threads are similar, more vertical use of the left forceps with more rotation is needed. The right forceps approximate the thread to the relatively stable left forceps, and the left forceps make a rotation movement to grab the suture again. The airborne suture technique is especially useful when there is no one available for assistance. The tip of the needle can be used to elevate the edge of the vessel to see the lumen clearly and then place the forceps into the lumen without the need for assistance or irrigation. Moreover, this technique can be used to rotate the vessel for corner sutures without the need for assistance. This technique should be applied in an atraumatic manner: one should only use the adventitia layer to lift up the vessel and avoid unnecessary trauma. The conventional technique of microvascular anastomosis relies on interrupted sutures that require assistance for traction. Because of the traction applied and the limited space and intervals between sutures, the surgeon tends to take an inverting bite from the vessel without seeing the lumen clearly at all times during the anastomosis, which, in the author's view, is the most critical factor for anastomosis patency. The combination of these techniques minimizes the need for an assistant as well as making the anastomosis safer and faster. The main limitation of combining these techniques is the learning curve. The airborne suture technique may not be applicable in every anastomosis environment. It is highly affected by the vessel orientation, limited space, a deep anastomosis site, very high magnification, and too much irrigation. With experience, one can overcome these problems. The open-loop technique can be easily applied in any difficult anastomosis environment. The most commonly encountered problem is that the open loops tend to twist and get stuck in the early period of the learning curve. One should cover the dry and irregular surface area around the anastomosis site with wet gauze in order to prevent the loops from twisting.
PCA Technique
The PCA technique (Posterior wall first-Continuous interrupted-Airborne) [4] is another fast and reliable method. The author prefers the PCA technique for end-to-side anastomosis and in situations where it is very difficult to turn over the double vascular clamp for the posterior wall, especially in the setting of a short vascular stump or limited space; for example, hepatic artery anastomosis, superficial temporal vessel anastomosis, and the rib-sparing technique for internal mammary vessel anastomosis. However, the disadvantage of this technique is that the number of loops created in the anterior wall is sometimes too many, which complicates the repair. Moreover, the author prefers to have better control during the posterior wall repair, with bites adjusted in thickness in the setting of size or thickness discrepancy, and hence the direct repair of the posterior wall is usually performed after the anterior wall repair.
Alternative Techniques
There are other techniques described in the literature for eversion, such as eversion with four sutures by Turan et al [5]; however, the author has no experience with this suture technique. In the author's experience, taking smaller bites from the thinner vessel and larger bites from the thicker vessel helps to achieve good eversion in the setting of size or thickness discrepancy. Non-suture microvascular anastomosis using an anastomotic coupling device is also a fast and reliable method. The main drawback of this technique is that the device is not readily available in some centers; a coupling device is not currently available in our center. The cost of the device is another issue. Furthermore, a very recent study shows that small-diameter anastomotic coupling devices for smaller vessels have higher rates of venous thrombosis compared with large-diameter anastomotic coupling devices for large vessels and hand-sewn anastomosis [6].
Protocol of a systematic review and network meta-analysis for the prevention and treatment of perinatal depression
Introduction Perinatal depression is common and can often lead to adverse health outcomes for mother and child. Multiple pharmacological and non-pharmacological treatments for preventing and treating perinatal depression have been evaluated against usual care or placebo controls in meta-analyses. It is not yet established which of these candidate treatments might be the optimal approach for prevention or treatment. Methods and analysis A systematic review and Bayesian network meta-analyses will be conducted. Eight electronic databases shall be searched for randomised controlled trials that have evaluated the effectiveness of treatments for prevention and/or treatment of perinatal depression. Screening of articles shall be conducted by two reviewers independently. One network meta-analysis shall evaluate the effectiveness of interventions in preventing depression during the perinatal period. A second network meta-analysis shall compare the effectiveness of treatments for depression symptoms in women with perinatal depression. Bayesian 95% credible intervals shall be used to estimate the pooled mean effect size of each treatment, and the surface under the cumulative ranking curve will be used to rank the treatments' effectiveness. Ethics and dissemination We shall report our findings, so that healthcare providers can make informed decisions on what might be the optimal approach for addressing perinatal depression to prevent cases and improve outcomes in those suffering from depression, through knowledge exchange workshops, international conference presentations and journal article publications. PROSPERO registration number CRD42020200081.
Strengths and limitations of this study
► This planned systematic review and network meta-analysis shall evaluate all available evidence from randomised controlled trials to evaluate the comparative effectiveness of each intervention.
► This study shall be conducted following the latest guidelines of the Cochrane Handbook for systematic reviews of interventions.
► Heterogeneity shall be assessed in the network meta-analysis model within the direct-comparisons model and by comparing consistency between the direct and indirect models.
► A limitation of this approach could be the different contexts of managing perinatal depression across studies in different regions and cultures.
► To minimise the impact of this, subgroup analyses shall be conducted grouping studies by region, allowing for comparisons of interventions within different regions.
INTRODUCTION
Background Depression is the leading cause of disability worldwide and a major contributor to the global burden of disease, affecting a variety of populations. 1 Depression experienced during pregnancy and after birth, also known as perinatal depression, is common and can affect up to 20% of mothers. 2 Previous systematic reviews have shown that the prevalence of perinatal depression is generally higher in low-to-middle-income countries than in high-income countries in both the antenatal and postnatal stages, 3 4 with perinatal mental disorders, including depression, being more prevalent in mothers who are the most socioeconomically disadvantaged. 4 Cultural factors have also been indicated as sources of inequality in perinatal mental illness; these include cultural gender bias and gender-based violence, both physical and mental. 4 Perinatal depression can cause a range of adverse health outcomes for women and the development of their children. Depression during pregnancy can lead to multiple problems, including premature delivery, gastrointestinal pain, and poorer self-reported health and functioning; it can also lead to an increased risk of smoking or alcohol abuse. 5 6 Longer-term depression, beyond 1 year post partum, can lead to later problems during parenting, including lower interaction and sensitivity between mother and infant.
► Heterogeneity shall be assessed in the network meta-analysis model within the direct-comparisons model and comparing consistency between the direct and indirect model. ► A limitation of this approach could be the different contexts of managing perinatal depression across studies in different regions and cultures. ► To minimise the impact of this subgroup analysis shall be conducted grouping studies by region, allowing for comparisons of interventions within different regions. Open access parenting, including lower interaction and sensitivity between mother and infant. Long lasting depression has been shown to lead to further difficulties in later years for the offspring, including emotional and behavioural difficulties. [7][8][9] Successful prevention of postnatal depression occurring can be achieved. Identifying those with depression early; either during the antenatal period (during pregnancy) or in the postnatal period (up to 1-year post partum) provides a critical opportunity for earlier treatments and prevents poorer outcomes from occurring. 10 11 Despite its significant burden on maternal and child health, less than half of pregnant women suffering from depression are identified within healthcare. 12 Attitudes towards identifying cases of perinatal depression among clinicians are positive. Still, there is a need for support strategies that can identify and treat those at risk of perinatal depression within routine practice. 13 A systematic review suggested that the Whooley questions, a set of twoitem yes/no answered questions were a valid and feasible approach for identifying possible positive cases of perinatal depression. 14 Once identified, healthcare services can provide interventions for preventing those at risk of depression occurring in the future or offering treatments for those with depression. There is a wide variety of interventions that are shown to be effective in treating depression symptoms in perinatal women compared with non-active controls: psychological interventions, pharmacological interventions or combinations of both 15 ; psychoeducation or parenting education 16 ; psychosocial interventions for treatment and prevention [17][18][19][20] ; systemically oriented psychotherapies 21 ; mindfulness 22 ; family therapy 23 ; physical activity 24 and yoga-based interventions. 25 In later stage postnatal women, meta-analyses have suggested cognitive behavioural therapy, interpersonal therapy, counselling and other psychological interventions are effective in treating depression symptoms when compared with usual care. 26 Another meta-analysis on antidepressants for postnatal depression in a small number of studies show that selective serotonin reuptake inhibitors are effective for depression compared with placebo. 27 Clinical guidelines recommend screening and treatment for perinatal depression, these guidelines do not provide recommendations on which treatments are most effective. 28 Treatment options for depression during pregnancy may vary depending on different severities of depression. 10 Many treatments previously evaluated were identified as effective on depression symptoms compared with usual care or placebo, but the relative comparability of these treatments has not previously been investigated. 26 29 Relative comparisons of different treatments would allow healthcare providers to make informed decisions on how different active treatments can be compared. 
The relative comparison of treatments could also provide evidence for the optimal approach to treating perinatal depression based on all available evidence.

Rationale A Bayesian network meta-analysis allows all interventions to be compared equally with one another by combining direct evidence (within-study comparisons of treatments) and indirect evidence (comparisons of treatments across different studies), which previous systematic reviews and clinical guidelines in perinatal depression have not yet explored. This approach can provide evidence for the relative comparative effectiveness of each treatment and potentially identify the optimal approach for preventing cases and treating symptoms of perinatal depression. We will conduct a comprehensive systematic review of available peer-reviewed published trial studies for all pharmacological and non-pharmacological interventions addressing perinatal depression, and evaluate them in a network meta-analysis. Based on this we will be able to compare each treatment's effectiveness with one another and recommend types of interventions that may optimally address the prevention and treatment of perinatal depression. For example, interventions that require fewer resources for health providers to implement, and that are therefore scalable, could be compared with more resource-intensive interventions that require trained specialists or equipment, making explicit the trade-off in clinical effectiveness between them. Another advantage of Bayesian network meta-analysis is that statistical certainty can be estimated; this allows potentially promising interventions with low levels of statistical certainty, which may require further investigation to establish effectiveness, to be identified.

Objectives This study will assess the clinical benefits of different interventions for addressing the prevention and treatment of perinatal depression. Two objectives have been developed for this study. 1. To identify the optimal approach for preventing perinatal depression in women. 2. To identify the optimal approach for the treatment of perinatal depression in women.

METHODS The protocol of this study has been developed following the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols guidelines (online supplemental appendix 1) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension statement on the reporting of systematic reviews that incorporate network meta-analysis of healthcare interventions. 30 31

Eligibility criteria Study selection and eligibility criteria were based on the Population, Intervention, Comparison, Outcome, Study design (PICOS) framework related to objectives 1 and 2 (table 1). Briefly, studies on participants who are perinatal women between 20 weeks' gestation and 1 year after birth will be included. We selected 1 year after birth to reflect the time period in which there is a risk of postpartum depression occurring, between day 1 and 1 year post partum. 32 We did not specify any limitations on interventions, as this review aims to identify and evaluate all intervention types that address depression within the target population. Given the advantages of the network meta-analysis approach, no limitations will be placed on the comparison group for studies. Outcomes for objective 1 will be confirmed cases of depression, and for objective 2, measurements of depression severity or symptoms.
Study design will only include randomised controlled trials, to minimise the risk of bias when comparing the effectiveness of interventions. Eligibility is displayed in table 2. Studies that include participants with substance abuse, psychotic or developmental disorders or medical conditions, or participants in long-term care, residential facilities or institutions (psychiatric inpatients) will be excluded, as the treatment needs for depression symptoms in these populations differ from those of people with depression alone. 33 Included articles will be limited to those written in English; there will be no limitation on publication year for included articles. We shall conduct searches of reference lists and forward citations of identified and included studies using the Web of Science database for additional papers. We shall exclude published systematic reviews or literature reviews identified during our electronic searches, but shall examine their reference lists for additional candidate studies.

Search strategy Searches of online databases will commence in August 2021. The search strategy has been developed based on the two sets of PICOS, one for each review question. We identified all search terms, related to the two sets of PICOS, from previously published meta-analyses on the prevention or treatment of perinatal depression [15][16][17][18][19][20][21][22][23][24][25] to maximise the sensitivity of our search strategy in identifying all definitions of perinatal depression and to minimise the risk of missing relevant studies. An example full list of search terms for the MEDLINE database can be found in online supplemental appendix 2.

Data management Studies retrieved by our search strategy shall be downloaded and stored in EndNote (X9), where duplicates across different sources will be removed.

Study selection process Screening of studies shall be conducted using the inclusion and exclusion criteria (table 2). Two reviewers (RDS, JG, SCH or HLI) shall conduct study selection, with both reviewers reviewing each retrieved study independently. Differences in assessment or any disagreements over the eligibility of studies will be resolved by discussion, and in cases of continued disagreement, the third reviewer (KY-WL) will be consulted. Screening of studies shall be conducted in two stages: (1) title and abstract screening, where studies will only be excluded if there is a clear disparity with the eligibility criteria; if eligibility is unclear, articles will be carried forward to the next stage; (2) full article screening, where eligibility can be decided based on all reported article information, including supplementary materials. The rationale for the exclusion of studies will be recorded. Piloting of the study selection process shall take place prior to the full study selection, with 100 articles randomly selected from those retrieved by the search. Adjustments to the inclusion and exclusion criteria and study selection process may be made following piloting.

Data extraction Data extraction shall be conducted by two of the reviewers (RDS, JG, SCH or HLI) independently using a standardised extraction form. Differences in assessment or any disagreements in data extraction will be resolved by discussion, and in cases of continued disagreement, the third reviewer (KY-WL) will be consulted. The data extraction forms were taken from the Cochrane Consumers and Communication Review Group's Data Extraction Template for Cochrane Reviews and were modified to fit this systematic review.
The extracted information will include study setting, study participant demographics and baseline characteristics, details of the intervention and control conditions, study methodology, recruitment and study completion rates, outcomes and times of measurement, indicators of acceptability to users, suggested mechanisms of intervention action and information for the assessment of the risk of bias. Missing data will be requested from study authors.

Risk of bias assessment Risk of bias will be assessed using version 2 of the Cochrane Risk of Bias tool (RoB2). 34 This tool includes five domains covering the risk of bias arising from the randomisation process, deviations from the intended intervention, missing outcome data, measurement of the outcome and selective reporting of results. The risk of bias assessment shall be completed following the guidance released with the RoB2. Possible risk of bias judgments shall be (1) low risk, (2) some concerns and (3) high risk of bias. Results of the risk of bias assessment shall be presented for each domain, as well as an overall judgment for each study. Overall risk of bias shall be judged based on the suggested criteria of the RoB2 guidance document. Piloting and adaptation of the wording in the risk of bias assessment, if necessary, shall be conducted prior to the full review. Risk of bias will be evaluated by two of the reviewers (RDS, JG, SCH or HLI) independently. Differences in assessment or any disagreements will be resolved by discussion, and in cases of continued disagreement, the third reviewer (KY-WL) will be consulted.

Data analysis Effect measurements Data from each study shall be extracted and effect sizes calculated. For studies addressing review question 1, follow-up data on the number of positive depression cases in each arm shall be extracted, allowing a relative risk (RR) to be calculated for each comparison of arms, with an RR of less than 1 representing a reduced risk of depression. For review question 2, the treatment effect on depression severity shall be calculated using the mean difference (MD) if possible, or the standardised MD (SMD). To calculate the SMD, the difference in change scores (from baseline to follow-up) between intervention arms shall be divided by the pooled SD of change. For studies with three or more arms, a reference group shall be chosen to calculate the SMD. Negative SMD effect sizes represent improvements in reducing depression severity. Score changes will be used to control for possible baseline differences between study arms. For studies with multiple follow-up time points, we shall use the longest duration, up to a maximum of 1 year from the end of the intervention. In studies reporting medians and IQRs, we shall impute means and SDs following the Cochrane Handbook. 35 For studies that do not report the SD of change from baseline, it will be imputed following the Cochrane Handbook. A correlation coefficient for imputation of the SD of change shall be estimated from the mean correlation in studies that do report all relevant data. If no studies report baseline, follow-up and change values, we shall take the conservative value of r=0.5 to estimate the SD of change. Where possible, we shall use the intention-to-treat sample for analyses. Interventions will be grouped for the network analysis using the categories used by previous individual meta-analyses [16][17][18][19][20][21][22][23][24][25] as a framework; new and emerging interventions not previously evaluated in meta-analyses shall be organised and grouped by agreement within the reviewing team.
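As a rough illustration of the effect-size calculations described above, the following minimal Python sketch computes an SMD from change scores with an imputed SD of change (using the Cochrane-style formula with a correlation coefficient r, defaulting to the conservative r=0.5) and an RR for the prevention outcome. All arm-level numbers are hypothetical and purely illustrative; the actual analyses will follow the Cochrane Handbook and the pre-specified protocol rather than this sketch.

import math

def sd_change(sd_baseline, sd_followup, r=0.5):
    # SD of change scores imputed from baseline and follow-up SDs and a correlation r
    return math.sqrt(sd_baseline**2 + sd_followup**2
                     - 2 * r * sd_baseline * sd_followup)

def pooled_sd(sd1, n1, sd2, n2):
    # Pooled SD of change across two arms
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def smd(mean_change_tx, mean_change_ctrl, sd_tx, n_tx, sd_ctrl, n_ctrl):
    # Standardised mean difference; negative values favour the intervention
    return (mean_change_tx - mean_change_ctrl) / pooled_sd(sd_tx, n_tx, sd_ctrl, n_ctrl)

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl):
    # Relative risk of depression onset; RR < 1 favours the intervention
    return (events_tx / n_tx) / (events_ctrl / n_ctrl)

# Hypothetical arm-level numbers, purely for illustration
sd_i = sd_change(sd_baseline=5.0, sd_followup=5.5)    # intervention arm, ~5.27
sd_c = sd_change(sd_baseline=5.2, sd_followup=5.4)    # control arm, ~5.30
print(round(smd(-6.0, -3.5, sd_i, 60, sd_c, 58), 2))  # ~ -0.47
print(round(relative_risk(9, 120, 18, 118), 2))       # ~ 0.49

With the illustrative inputs shown, the imputation gives SDs of change of roughly 5.3 in each arm, an SMD of about -0.47 favouring the intervention, and an RR of about 0.49 for depression onset.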
Network meta-analysis implementation Two network meta-analyses shall be conducted: one for depression prevention, using RR, and one for treatment of depression severity, using SMD. We shall estimate model consistency by comparing the RR or SMD of the direct (within-study) and indirect (between-study) comparisons, where direct comparisons are possible. The network meta-analysis will be conducted with a Bayesian Markov chain Monte Carlo (MCMC) method fitted using the Just Another Gibbs Sampler (JAGS) software within the R Statistical Software, via the BUGSnet package (R Core Team, 2020). We shall run four MCMC chains simultaneously in our model and construct two separate MCMC simulations to compare convergence. The Bayesian model shall run 5000 burn-in iterations and 100 000 simulation iterations. Convergence shall be assessed using the potential scale reduction factor, which we expect to fall below 1.05. Heterogeneity (direct evidence) and model consistency (direct vs indirect) will be assessed using the node-split function of the BUGSnet package; sources of heterogeneity will be explored between studies. Results of each possible comparison of interventions shall be reported as RR or SMD with 95% credible intervals, which can be considered the Bayesian equivalent of CIs. Rank probabilities will represent the probability of each ranking position for each intervention type. The surface under the cumulative ranking curve (SUCRA) will also be used to estimate the likelihood of each intervention being the most effective (a worked numerical sketch of the consistency and SUCRA calculations is given below). 36 37 A limitation of the network meta-analysis approach is that treatments that have not previously been combined in trials cannot be combined in the meta-analysis. This precludes an investigation of whether combining two or more treatments provides any extra benefit over one treatment alone. Treatments for women with depression during or after pregnancy can vary in different populations and across disease severity. 10 To address this, network meta-regression and subgroup analyses shall be conducted to evaluate study characteristics that may influence the effect sizes of interventions within the network. Factors to explore in the meta-regressions shall include the year of publication, the geographical region in which the study was conducted, study sample age, whether the study participants were at the antenatal or early postnatal stage of motherhood, the study sample's baseline depression severity (if possible), risk of bias in all five domains and overall risk of bias. Publication bias shall be assessed using two funnel plots of all included studies, one for each objective; trim-and-fill analysis shall also be conducted.

Patient involvement Patients and the public were not involved in the design of the protocol or analysis of this study.

Ethics and dissemination Ethics approval is not required for this study, given that this is a protocol for a systematic review, which uses published data. The results of the review will be widely disseminated locally, nationally and internationally. A paper will be submitted to a leading peer-reviewed journal in this field, and reporting of the study will adhere to the PRISMA extension statement on the reporting of systematic reviews that incorporate network meta-analysis of healthcare interventions. 31 When presenting our findings from this study, the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) scoring for the evidence and strength of recommendations shall be made following the criteria in the GRADE handbook. 38 The findings shall also be presented at a relevant international conference.
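To make the consistency check and SUCRA ranking described in the implementation section above concrete, here is a minimal, illustrative Python sketch. It is not the BUGSnet/JAGS analysis specified in this protocol: the indirect estimate is formed by a simple Bucher-style contrast through a common comparator, and the SUCRA is computed directly from assumed rank probabilities; all numbers are hypothetical.

import math

def indirect_estimate(d_ac, se_ac, d_bc, se_bc):
    # Indirect effect of A vs B formed through a common comparator C
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    return d_ab, se_ab

def inconsistency_z(d_direct, se_direct, d_indirect, se_indirect):
    # z statistic for the difference between direct and indirect estimates;
    # values near zero indicate consistency
    return (d_direct - d_indirect) / math.sqrt(se_direct**2 + se_indirect**2)

def sucra(rank_probs):
    # SUCRA from one treatment's rank probabilities (rank 1 = best):
    # the average cumulative probability of being among the k best,
    # for k = 1 .. (number of treatments - 1)
    running, total = 0.0, 0.0
    for p in rank_probs[:-1]:
        running += p
        total += running
    return total / (len(rank_probs) - 1)

# Illustrative numbers only (SMD scale)
d_ind, se_ind = indirect_estimate(d_ac=-0.50, se_ac=0.15, d_bc=-0.20, se_bc=0.12)
print(round(d_ind, 2), round(se_ind, 2))                      # -0.3 0.19
print(round(inconsistency_z(-0.25, 0.10, d_ind, se_ind), 2))  # ~0.23 -> consistent
print(round(sucra([0.55, 0.30, 0.10, 0.05]), 3))              # 0.783

In the full analysis these quantities are produced by the fitted Bayesian model; the sketch only shows how direct and indirect estimates are contrasted and how rank probabilities are summarised into a single ranking score.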
Twitter Claire Anna Wilson @drclairewilson
Contributors RDS drafted the manuscript. KY-WL is the principal investigator of the study and is responsible for conducting the study overall. The study's research question and design were conceived by RDS and KY-WL. Search terms for all databases were constructed and pilot tested by RDS, JG, HLI and SCH. Screening forms, data extraction forms and risk of bias forms were developed and piloted by RDS, KY-WL, JG, SCH and HLI. Assistance with preparation of the manuscript and clinical expertise into the design of the study's PICOS and inclusion/exclusion criteria was given by CAW. Development and plan of analyses were prepared by RDS, DYTF and SA. All authors contributed to the preparation of this manuscript.
Funding This research is funded by the Seed Funding from the University of Hong Kong.
Competing interests None declared.
Patient consent for publication Not applicable.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Genetic Polymorphisms of the UDP-Glucuronosyltransferase 1A7 Gene and Irinotecan Toxicity in Japanese Cancer Patients

Irinotecan often causes unpredictably severe, occasionally fatal, toxicity involving leukopenia or diarrhea. It is converted by carboxylesterase to an active metabolite, SN-38, which is further conjugated and detoxified to SN-38-glucuronide by UDP-glucuronosyltransferase (UGT). We genotyped the UGT1A7 gene by direct sequencing analysis and polymerase chain reaction-restriction fragment length polymorphism in 118 cancer patients and 108 healthy subjects. All the patients had received irinotecan-containing chemotherapy and were evaluated to see whether the variant UGT1A7 genotype would increase the likelihood of severe toxicity of irinotecan, defined as grade 4 leukopenia and/or grade 3 or worse diarrhea. Among the 26 patients with severe toxicity, the allele frequencies were 61.5% for UGT1A7*1, 15.4% for UGT1A7*2, and 23.1% for UGT1A7*3. On the other hand, the frequencies were 63.6% for UGT1A7*1, 15.8% for UGT1A7*2, and 20.7% for UGT1A7*3 among the 92 patients without severe toxicity. None of the 118 patients had UGT1A7*4. Neither univariate analysis (odds ratio, 1.13; 95% confidence interval, 0.46-2.75) nor multivariate logistic regression analysis (odds ratio, 0.74; 95% confidence interval, 0.26-2.07) found any significant association between carrying at least one of the variant alleles and the occurrence of severe toxicity. The distribution of UGT1A7 genotypes in the 108 healthy subjects was not significantly different from that in the patients (P=0.99 and 0.86 for those with and without severe toxicity, respectively), but the frequency of variant alleles was significantly lower than that reported previously in Caucasians (P<0.001). The results suggested that determination of UGT1A7 genotypes would not be useful for predicting severe toxicity of irinotecan.

Irinotecan (7-ethyl-10-[4-(1-piperidino)-1-piperidino]carbonyloxycamptothecin, CPT-11) is a camptothecin analogue with strong antitumor activity that inhibits topoisomerase I, and it is now commonly used in the treatment of patients with colorectal or lung cancers. 1,2) Irinotecan is hydrolyzed by carboxylesterase to form SN-38 (7-ethyl-10-hydroxycamptothecin), which has a 100- to 1000-fold higher antitumor activity than the parent drug. 3) SN-38 is conjugated by UDP-glucuronosyltransferase (UGT) 1A1 in the liver to yield SN-38 glucuronide, which has 1/100 the antitumor activity of SN-38. 4) The SN-38 glucuronide is excreted into the small intestine via bile, where bacterial glucuronidase resolves the glucuronide into SN-38 and glucuronic acid. 5) A part of the SN-38 is reabsorbed from the intestine into the body, resulting in enterohepatic circulation of SN-38. 6) Irinotecan often causes unpredictably severe, occasionally fatal, toxicity involving leukopenia or diarrhea. Inter-individual differences in drug sensitivity may be caused by differences in drug disposition after irinotecan administration. 6,7) We have recently suggested that genetic polymorphisms of UGT1A1 could explain some of the inter-individual differences in the pharmacokinetics and pharmacodynamics of irinotecan. 8,9) The metabolic ratios (SN-38/SN-38 glucuronide) in a patient homozygous for UGT1A1*28, having a 2-base pair (bp) insertion (TA) in the TATA box in the promoter region, were uncharacteristically higher than those in other patients. 8)
Furthermore, a case-control study of 118 Japanese patients who had received irinotecan for cancer revealed that genotypes either heterozygous or homozygous for UGT1A1*28 would be a significant risk factor for severe irinotecan toxicity. 9) Thus, we considered that patients with variant genotypes of the UGT1A1 gene would be at higher risk for severe toxicity due to a relatively increased bioavailability of active unconjugated SN-38. Besides UGT1A1, another UGT isoform, UGT1A7, has recently been reported to glucuronidate SN-38 at a more than 9-fold higher level than UGT1A1 in vitro, using human UGT1 enzymes expressed transiently in COS-1 cells. 10) UGT1A7 is expressed in the gastrointestinal tract and lung, but not in the liver. 11,12) Since it has been reported that SN-38 concentrations in intestinal tissues, as well as in liver, were high after irinotecan administration, 13) it can be speculated that SN-38 is conjugated to SN-38 glucuronide again by UGT1A7 after reabsorption into the intestinal tissues. Furthermore, an inverse relationship between SN-38 glucuronidation and diarrhea was reported in cancer patients treated with irinotecan. 14) In mouse models, a dose-dependent relationship between diarrhea and accumulation of SN-38 in the intestine has been noted after intraperitoneal administration of irinotecan. 15) Thus, we hypothesized that UGT1A7 polymorphisms would also affect the occurrence of severe toxicity from irinotecan, especially with respect to diarrhea, by modifying the enzyme activity in gastrointestinal tissues. Single nucleotide polymorphisms (SNPs) of the UGT1A7 gene are known to be linked to the in vitro enzymatic activity: UGT1A7*1 (N129/R131/W208, the reference sequence), UGT1A7*2 (K129/K131/W208), UGT1A7*3 (K129/K131/R208), and UGT1A7*4 (N129/R131/R208). 16) UGT1A7*2 comprises two transversions and one transition (T387G, C391A and G392A), which produce the amino acid substitutions Asn129Lys and Arg131Lys, respectively. UGT1A7*3 comprises two transversions and two transitions (T387G, C391A, G392A and T622C), producing Asn129Lys, Arg131Lys and Trp208Arg, respectively. UGT1A7*4 has a T622C transition producing a Trp208Arg substitution. When these four UGT1A7 variants were expressed in HEK cells and their in vitro enzymatic activities toward benzo(a)pyrene metabolites were examined, the membrane from the UGT1A7*3-expressing cells exhibited a 5.8-fold lower relative Vmax compared with that of UGT1A7*1, whereas UGT1A7*2 and UGT1A7*4 had 2.6- and 2.8-fold lower relative Vmax than UGT1A7*1, respectively. While a previous population study revealed that more than 85% of Caucasians had at least one of the three variant alleles, 16) the frequencies of UGT1A7 alleles in Asians have not yet been reported. We hypothesized that UGT1A7 polymorphisms would also affect inter-patient or inter-ethnic variations in sensitivity to drugs or carcinogens. The purpose of this study was to evaluate the influence of genetic polymorphisms of the UGT1A7 gene on the risk of severe irinotecan toxicity in cancer patients treated with irinotecan, as well as the frequencies of UGT1A7 alleles in the healthy Japanese population.

Subjects We genotyped the UGT1A7 gene in 118 cancer patients and 108 healthy subjects (31 females and 77 males; median age 49 years).
We retrospectively reviewed the clinical records, including patient characteristics (age, gender, primary disease, previous treatments, evidence of distant metastasis, Eastern Cooperative Oncology Group performance status, and major complications), dosage and schedule of irinotecan administration, concurrent use of other drugs or radiotherapy, and observed toxicity following irinotecan infusion (Tables I and II). The chemotherapy regimens were categorized into 3 groups: irinotecan alone, irinotecan plus platinum (cisplatin or carboplatin), and irinotecan plus other agents (paclitaxel, docetaxel, etoposide, mitomycin C or 5-fluorouracil). We counted the number of days when patients received granulocyte colony stimulating factors or loperamide hydrochloride, which is commonly prescribed for irinotecan-induced diarrhea in Japan. Prophylactic uses of granulocyte colony stimulating factor could not be clearly distinguished from those for neutropenia. Since the dose-limiting toxicities of irinotecan are leukopenia and diarrhea, 2) we defined "severe toxicity" in this research as leukopenia of grade 4 (<0.9×10⁹/liter) and/or diarrhea of grade 3 or worse (grade 3, watery for 5 days or more; grade 4, hemorrhagic or dehydration), classified in accordance with the Japan Society for Cancer Therapy criteria. 17) We then identified 26 patients who had experienced severe toxicity and 92 patients who had not. All the patients gave informed consent in writing for their peripheral blood to be used for the research. All healthy subjects, who were unrelated and were not found to have any malignant diseases by medical checkup at Nagoya University Hospital, were consecutively enrolled after having given their informed consent in writing for their blood to be used in the genetic research. The study was approved by the Ethical Committees of Nagoya University School of Medicine and the participating institutes.

Genotyping Genomic DNA was prepared from whole blood (100-200 µl) using a QIAamp Blood Kit (QIAGEN GmbH, Hilden, Germany). We distinguished the variant UGT1A7 alleles (UGT1A7*2, UGT1A7*3 and UGT1A7*4) from the reference allele UGT1A7*1 by direct sequencing analyses. A hemi-nested PCR-restriction fragment length polymorphism (RFLP) assay was performed to determine the haplotype of individuals heterozygous at all three variant codons (129, 131 and 208), whose diplotype could be either UGT1A7*1/UGT1A7*3 or UGT1A7*2/UGT1A7*4. We used a forward primer 5′-CAAATTGCAGGAGTTTGTTTAATGACCG-3′ (nucleotides 365 to 392), which matches UGT1A7*1 and UGT1A7*4 (reference sequence at codon 131) but not UGT1A7*2 and UGT1A7*3, with the same reverse primer used in the first PCR. One microliter of the 1000-fold-diluted product of the first PCR was subjected to the hemi-nested PCR. The PCR conditions were 95°C for 5 min followed by 25 cycles of 94°C for 30 s, 55°C for 30 s, and 72°C for 40 s. Only DNA from UGT1A7*1 or UGT1A7*4 gave a 454-bp fragment, which was subsequently digested with RsaI (Toyobo Co., Tokyo) for 1 h at 37°C. Restriction fragments were analyzed by 4% agarose gel electrophoresis and ethidium bromide staining. DNA from UGT1A7*1 gave an undigested fragment (genotyped as UGT1A7*1/UGT1A7*3), whereas that from UGT1A7*4 was digested into 256- and 198-bp fragments (genotyped as UGT1A7*2/UGT1A7*4).

Statistical analysis The correlation or association between potential variables was assessed using the χ² test or Fisher's exact test for categorical variables, or the Mann-Whitney U test for continuous ones.
A crude odds ratio with a 95% confidence interval (CI) was calculated to analyze the association between carrying at least one of the variant UGT1A7 alleles and severe irinotecan toxicity (a worked numerical sketch of this calculation is given at the end of this article). Multivariate logistic regression analysis was used to assess the association while controlling for other variables. Possible variables that appeared to be associated with severe toxicity (P<0.1) were evaluated by stepwise procedures for inclusion in the final model: UGT1A1*28, female gender, and chemotherapy regimen, which were recognized as important covariates explaining severe toxicity in the previous study. 9) We performed these analyses using SAS ver. 6.12 software (SAS Institute Inc., Cary, NC). A difference was considered statistically significant when the two-tailed P value was under 0.05.

RESULTS Among the 26 patients with severe toxicity, the allele frequencies were 61.5% for UGT1A7*1, 15.4% for UGT1A7*2, and 23.1% for UGT1A7*3. On the other hand, frequencies of 63.6% for UGT1A7*1, 15.8% for UGT1A7*2, and 20.7% for UGT1A7*3 were found among the 92 patients without severe toxicity. None of the total of 118 patients had UGT1A7*4. Distributions of UGT1A7 genotypes among the patients who experienced severe toxicity and those who did not were apparently comparable (Table III). Univariate analysis showed no significant association between carrying at least one of the variant alleles and the occurrence of severe toxicity (crude odds ratio, 1.13; 95% CI, 0.46-2.75). The relationship remained non-significant after controlling for the other factors (Table IV). Among the 26 patients with severe toxicity, 3 and 19 patients experienced grade 4 and grade 3 diarrhea, respectively, and there was no significant association between the variant alleles and the incidence of such diarrhea (crude odds ratio, 1.27; 95% CI, 0.50-3.46). Genotypes of the 3 individuals who experienced grade 4 diarrhea were UGT1A7*1/UGT1A7*3, UGT1A7*2/UGT1A7*3 and UGT1A7*3/UGT1A7*3. The allele frequencies in the 108 healthy subjects were 59.2% for UGT1A7*1, 15.3% for UGT1A7*2 and 25.5% for UGT1A7*3. The UGT1A7*4 allele was not found in this healthy population. The individuals heterozygous for the variant sequences at all three codons 129, 131 and 208 were genotyped as UGT1A7*1/UGT1A7*3 in both the patient and the healthy populations. The distribution of the UGT1A7 genotypes in the healthy subjects was not significantly different from that in the patients (P=0.99 and 0.86 for those with and without severe toxicity, respectively), although the frequency of variant alleles was significantly lower than that reported previously in Caucasians (P<0.001) (Table V). 16)

DISCUSSION In the present study, we analyzed the association between UGT1A7 polymorphisms and severe toxicity from irinotecan treatment in Japanese cancer patients, and established the frequencies of UGT1A7 alleles in the Japanese population. We did not find a significant association between the UGT1A7 genotypes and the severe leukopenia and/or diarrhea caused by irinotecan, even though UGT1A7 has been reported to glucuronidate SN-38 more efficiently than UGT1A1 in vitro. 10) The reason for this discrepancy is unclear, but it might be because UGT1A1 is expressed in the liver, the primary organ for detoxifying intravenous irinotecan, while UGT1A7 is not. This finding suggests that UGT1A7 plays only a minor role in SN-38 glucuronidation in vivo. In addition, we found that the variant UGT1A7 alleles are rare in the Japanese population compared with those reported in Caucasians. 16)
Additionally, the UGT1A7*4 allele, whose frequency in Caucasians has been reported as 0.017, was not found in Japanese. This is the first report investigating inter-ethnic differences in the frequencies of the variant UGT1A7 alleles. Such UGT1A7 polymorphisms are potentially a cause of inter-patient or inter-ethnic variations in sensitivity to drugs or carcinogens that are metabolized by UGT1A7. Genetic polymorphisms of drug-metabolizing enzymes might also be an important factor in cancer susceptibility. So far, several phase I drug-metabolizing enzymes, such as cytochrome P450 (CYP1A1, CYP2D6 or CYP2E1), and phase II enzymes (N-acetyltransferase or glutathione S-transferase) have been reported to be associated with an increased risk of cancers. [19][20][21] Glucuronidation by UGTs is regarded as an important pathway to detoxify toxic and/or carcinogenic compounds. 22) Differences in the activities of these enzymes might cause variations in the toxic or carcinogenic effects of drugs or environmental pollutants. In in vitro expression systems, the UGT1A7 polymorphisms exhibited significant variations in glucuronidation activity towards 3-, 7-, and 9-hydroxybenzo[a]pyrene, metabolites of benzo[a]pyrene, a strong carcinogen in cigarette smoke. 16) On the other hand, the UGT1A-deficient Gunn rat was more susceptible to DNA adduct formation after exposure to benzo[a]pyrene than the UGT1A-intact rat. 23) Significant associations of UGT1A7 genotypes with risks of hepatocellular 24) or orolaryngeal cancers 25) have recently been reported. Although the distributions of the UGT1A7 polymorphisms did not differ between cancer patients and the healthy subjects in this study, further investigation into the effects of these polymorphisms on cancer susceptibility is necessary. Since UGT1A7 is also expressed in the lung, 16) we are planning to investigate the association with lung cancer. Inter-individual variations in response to cancer chemotherapy are often due to genetic alterations in drug-metabolizing enzymes, for example, thiopurine S-methyltransferase and dihydropyrimidine dehydrogenase. [26][27][28] This study, however, did not suggest any potential clinical utility of determining UGT1A7 genotypes for predicting severe toxicity of irinotecan, and we found no difference in the genotype distribution between Japanese cancer patients and healthy subjects.
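As a purely illustrative companion to the crude odds ratio analysis described in the Statistical analysis section, the following minimal Python sketch computes an odds ratio with a Wald-type 95% CI from a 2×2 table of variant-allele carriage against severe toxicity. The counts used are hypothetical placeholders, not the study's actual genotype-by-toxicity table (which is reported in Table III of the original article).

import math

def crude_odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    # Crude odds ratio with a Wald-type 95% confidence interval from a 2x2 table
    a, b, c, d = exposed_cases, unexposed_cases, exposed_controls, unexposed_controls
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts (NOT the study's data): rows are carriers / non-carriers of at
# least one variant allele, columns are severe toxicity / no severe toxicity
print(tuple(round(x, 2) for x in crude_odds_ratio(14, 12, 48, 44)))  # (1.07, 0.45, 2.56)

Run with these placeholder counts, the sketch returns an odds ratio of about 1.07 with a 95% CI of roughly 0.45 to 2.56, i.e., of the same order as the reported crude estimate of 1.13 (0.46-2.75) and likewise consistent with no significant association.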
Traveler Precautions Relating to Covid-19 Countries

Over the past two years, the issue of the rapid spread of coronavirus infection has been actively discussed. In this regard, special attention is paid to measures restricting international movement. There is a need to know what measures exist in different countries and what documents are required for unhindered entry into the territory of a foreign country. This article analyzes and compares passenger traffic before and during the outbreak of coronavirus infection. Air transportation is also analyzed and compared, and the economic losses of companies that carried passengers to various countries of the world are indicated.

Introduction Problems related to the spread of covid-19 and infection with it have remained relevant for several years. In all countries of the world, the main goal of the authorities is to protect citizens from outbreaks of disease and to fight the coronavirus infection. This topic is relevant today, since new strains of covid-19 are repeatedly reported and, as a result, new precautions emerge in different states. The aim of the study is to identify precautions for travelers, as well as prohibitions aimed at minimizing the incidence of covid-19 among citizens all over the world. The stated aim of the study requires the solution of a number of problems:
• Consider and analyze the statistics of the incidence among citizens in the period from 2020 to 2021 in the main states;
• Review and analyze the statistics of the number of trips in 2019 and 2020 in these countries;
• Identify precautions, and penalties for non-compliance, in each of these countries;
• Consider the airline companies that went out of business during the pandemic and describe the losses countries have suffered due to the closure of airlines.
On December 31, 2019, the first case of pneumonia caused by an unknown infection was detected in the Chinese city of Wuhan. Outside China, the first case of covid-19 infection was detected on January 13 in Thailand. Almost a month after the incident in China, a global health emergency was declared. By February 27, 2020, cases of infection had been recorded on every continent except Antarctica. As early as March 11, 2020, it was announced that the coronavirus outbreak was a pandemic.
More specifically, James Brett Case et al. (2021) note: "Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged into the human population in late 2019 and caused the global COVID-19 pandemic. SARS-CoV-2 has spread to more than 215 countries and infected many millions of people. Despite the introduction of numerous governmental and public health measures to control disease spread, infections continue at an unabated pace, suggesting that effective vaccines and antiviral drugs will be required to curtail disease, end the pandemic, and restore societal norms." This excerpt has been added to indicate the severity and magnitude of a new infection spreading worldwide.

Materials and methods Empirical and theoretical research methods, such as deduction and induction and comparative analysis, together with experimental research methods such as the comparative method, are appropriate for considering travel precautions arising from the worldwide spread of covid-19. The main materials for the study were scientific publications by James Brett Case et al., 2021; Guirong Fang and Qunli Song, 2021; Hamid Khataee et al., 2021; Takashi Akamatsu et al., 2021; Liang Tian, 2021; Frank Schlosser, 2020; Pablo Rodríguez et al., 2021; Cristiana J. Silva et al., 2021; Angelo Bonfanti et al., 2021; Nikolaos Askitas et al., 2021; Valentina Palmieri et al., 2021; and Yu Liu et al., 2021. On the basis of empirical and theoretical research methods, in particular comparative analysis, it is worth considering statistics on incidence and its impact on travel in some countries of the world. Table 1 presents statistics on covid-19 infections from 2020 to 2021. Table 1 shows that the United States of America had the highest incidence, with more than 30 million infections by the end of March. The UK and Russia were almost equal in the number of cases of this infection: in each of these countries there were about 4.5 million cases by the end of March. The countries with the most disciplined safety precautions in the table are China and Portugal, where the number of infected citizens did not exceed one hundred thousand people. Statistics that help track travel changes in 2019 and 2020 also need to be considered. Table 2 shows that the number of travelers to China in 2020 decreased by 99%. This suggests that the measures taken in this country were strict and effective, since the number of cases did not exceed one hundred thousand. In the US, the number of travelers in 2020 decreased by more than 86%. In the UK, as in China, the number of citizens who visited the country fell by almost 99%. The number of tourists visiting Turkey decreased by more than 70%; in percentage terms, it remained the most visited of these countries. In Russia, the number of people entering the country decreased by almost 83%. In Germany and Portugal, the number of tourists amounted to only 5-7% of the number of visitors in 2019.
All over the world, passenger air travel was cut back, which had an extremely damaging effect on the economies of all countries. According to IATA, air transportation decreased by 65.9% in 2020. The highest losses were observed in the Middle East, where traffic decreased by 72.2%. In Europe, the figure dropped by 69.9%, in Africa by 68.8%, and in North America by 65.2%. The smallest drop in air traffic was in the Asia-Pacific region, but even that figure was quite large, at 61.9%. The most significant drop was in international air transportation, which decreased by 85.3%. Domestic passenger transportation decreased by 48.8%. The smallest reduction was in Russia, at 23.5%, and in December the decrease was only 12% compared with 2019. On the basis of theoretical methods, in particular deduction, it is necessary to consider the measures that have been taken to suppress coronavirus infection in each of the previously named countries.

China At the end of December 2019, a covid-19 epidemic was discovered in China, which posed a great threat to public health. Analysis of the initial cluster of infections revealed a common link: the Wuhan seafood market in Hubei province, which actively traded in wildlife, leading some researchers to speculate that wild animals may have been the cause of the outbreak. Based on this, Guirong Fang and Qunli Song (2021) emphasize that "Since the outbreak of COVID-19 epidemic, China has taken many effective measures to prevent its spreading, including revision of the Wild Animal Conservation Law". This excerpt points to the importance of the new regulations adopted by the Chinese government to protect animals during the covid-19 pandemic. On January 23, 2020, the growth rate of officially confirmed infections became the basis for the introduction of quarantine in China. Referring to the scientific literature, Hamid Khataee et al. (2021) note that "The comparison of social distancing efforts in China, South Korea, Italy, France, Iran and USA revealed that the initial doubling time of identified cases was about 2 days which was prolonged substantially by the various restrictions introduced in these countries". This excerpt has been added to show that the early days of a covid-19 outbreak are of great importance, as timely measures can prevent the rapid spread of the new coronavirus infection. The Shanghai Ministry of Foreign Affairs established two channels for the smooth entry of business personnel into China: the regular channel and the fast track. On May 25, 2020, fast-track agreements were signed with the UK, France, Germany, Korea and Japan. The fast-track channel allows a person to start working within 48 hours after crossing the border, provided they have a negative PCR test for covid-19. People entering the country through the regular channel were required to comply with a two-week quarantine. On June 15, 2020, the Chinese government announced that tourists arriving from Moscow must carry a negative covid-19 test result obtained within 120 hours of boarding. When visiting Hong Kong, foreign nationals who have visited India, Kazakhstan, Pakistan or the United States within the previous two weeks will need to have the following documents with them:
• a document confirming a negative test result for covid-19, obtained 24 hours before the flight;
• a health declaration upon entry or exit;
• an accommodation reservation for at least two weeks.
These documents must be submitted prior to boarding the aircraft. Tourists arriving in other cities in China must comply with a three-week quarantine at a hotel or inn. Sometimes the self-isolation period can be increased to 28 days. Due to the high incidence of covid-19 in Russia, China has established the following special rules for Russian tourists visiting the country:
• 72 hours before visiting any city in China, a PCR test must be taken, which will be required at the border;
• an antibody test done at any medical facility must also be provided;
• negative test results must be attached to the electronic "Statement of health status" and sent to the Embassy's email;
• after arriving in China, another PCR test is done, followed by a mandatory two-week quarantine;
• a second PCR test is done a week later. In case of a negative result, the tourist completes a further week of quarantine; in case of a positive result, the person is taken to a medical facility.
As soon as a tourist receives a negative covid-19 test result and an antibody test, they need to provide passport details, the certificates received and a completed health application. All documents are then sent to the e-mail of the Chinese Embassy in Russia. After verification, the health statement with the assigned "color" QR code is sent back. Only tourists with a green code are allowed on the plane. The document needs to be printed and carried at all times. The Chinese government has decided to simplify entry into the country for tourists who have been vaccinated with the Chinese national vaccine. Such privileges are available to foreigners who enter China through the Special Administrative Region. This measure entered into force on March 15, 2021. As of March 25, 2021, 91,346,000 covid-19 vaccinations had been administered in China.

Russia The first cases of coronavirus infection in Russia were recorded in Transbaikalia and the Tyumen region on January 31, 2020; those infected were citizens of China. The third case of infection was recorded only on March 2, 2020, and mass infection began in the middle of that month. On January 30, the Russian authorities decided to temporarily suspend the entry of foreign citizens; the first restrictions applied to arrivals from China, followed by South Korea and Iran. On March 27, Mikhail Mishustin announced the closure of regular and charter flights with all countries. From August 1, 2020, the authorities decided to resume international flights with the UK, Turkey and Tanzania. These flights were carried out from Moscow, St. Petersburg and Rostov-on-Don. It was also again permitted to issue visas to visit Russia to citizens of Turkey and Great Britain. For Russians entering from foreign countries, it is necessary to provide a covid-19 test result obtained 72 hours before boarding the aircraft, and to fill out an application on the State Services website before check-in and boarding. For foreign citizens to cross the border into Russia, it is necessary to have a covid-19 test result taken 72 hours before boarding the aircraft. Due to the increase in the number of people infected with new strains of covid-19, the Government of the Russian Federation decided to suspend flights to Turkey for the period from April 15 to June 1, 2021. Exceptions include flights carried out to return Russian citizens home and flights organized to support the Titan-2 concern's work on the construction of the Akkuyu nuclear power plant.
Two flights a week connecting the capitals of Russia and Turkey will also be organized.

USA The first case of coronavirus infection was a positive test in a man who had returned from China on January 21, 2020. This was the beginning of the spread of covid-19 in the country, as well as of the measures taken to prevent infection. However, in our opinion, Takashi Akamatsu et al. (2021) correctly note: "A crude estimate on the basis of these numbers is that the resulting mortality during the progress toward herd immunity can reach 400 per 100,000 populations, that is, about 500,000 deaths in Japan and 1.3 million deaths in the United States. These figures would not be socially acceptable, given that the estimated deaths related to seasonal influenza in Japan and the United States were around 10,000 and 34,000 in the 2018-2019 season, respectively". The presented fragment suggests that covid-19 is a more dangerous virus than influenza or other common diseases, since death statistics around the world have increased several times over. The first measures were to restrict the entry of Americans and other citizens who had visited China in the previous two weeks. At the end of February, similar measures followed for citizens who had visited Iran over the previous 14 days, and from March 13 for citizens arriving from European countries. On the same day, March 13, the American president announced the introduction of a state of emergency in the country. It must be said that restrictions differed between individual states. In some states it was allowed to exercise outdoors, go to public laundries and visit close friends, while other US states imposed a full curfew. The results of these measures allow Liang Tian et al. (2021) to conclude that "Other than government-led interventions to break the transmission chain, individual-led efforts, including social-distancing, mask-wearing, frequent handwashing, etc., can slow down or even stop the outbreak. Among them, radical shifts have taken place in people's attitudes towards population-wide mask wearing. It was practiced in most Asian countries since the initial phase of the outbreak, yet not adopted by the EU and USA until June 2020. As of August 2020, community mask use was recommended or required by most major public health bodies". This excerpt has been added to show the importance and necessity of every resident and tourist observing all personal protective measures in order to avoid new outbreaks of the disease. As for entering the country, a negative PCR test taken no earlier than 72 hours before departure must be provided. Children under 2 years of age do not need this test. A visa must be obtained and a hotel room booked, with a printed reservation provided upon arrival in the state. Another measure is filling out a declaration, which is issued on board the aircraft. Some states ask arrivals to undergo a two-week quarantine.

United Kingdom The first cases were confirmed in York, UK on 31 January 2020. From June 8, 2020, foreign nationals visiting the UK had to fill out a special form providing contact information and their place of residence, and to observe a two-week self-isolation, the mask regime and social distancing. Violation of these rules results in a fine or expulsion from the UK. Foreign representatives of EU countries and diplomats, employees of research centers and employees specializing in the nuclear industry, oil and gas are not required to comply with the two-week quarantine.
From mid-July 2020, tourists from countries belonging to the "green zone" began to be exempted from the mandatory two-week quarantine. This list contains about 30 states. The UK provides an opportunity for its citizens and persons with a residence permit in England to return home from any country by charter flight. On August 1, 2020, the Russian Federation resumed flights with Great Britain. Today, Russian tourists can freely visit the UK, observing certain rules:
• flights to the UK are made only from three airports in Russia (Moscow, St. Petersburg, Rostov-on-Don);
• two days before boarding the aircraft, an electronic document containing information about the trip and personal data must be filled out;
• upon arrival in the UK, a two-week quarantine must be observed;
• the self-isolation site must meet UK requirements, otherwise the authorities will offer their own accommodation.
In September 2020, British scientists discovered a new mutation of the virus. The new variant quickly proved to be more deadly than the original covid-19 strain. By December 2020, levels of the new covid-19 strain had risen sharply in England, causing mass hospitalizations across the country. By the end of February, 95% of covid-19 cases in the UK were of the new strain, and it had spread to 59 countries around the world. Effective March 29, 2021, rules came into force stating that UK citizens and tourists cannot travel abroad without a valid excuse, due to the spread of the new covid-19 strain; otherwise they face a fine of £5,000. The UK government is discussing the possibility of using "covid passports" for those people who decide to get vaccinated against the new type of covid-19. This innovation would confer the following privileges: free movement around the country and business trips to other countries of the world. Effective January 5, 2021, the UK authorities decided to use two vaccines (Pfizer-BioNTech covid-19 and Oxford-AstraZeneca). On February 21, 2021, Prime Minister Boris Johnson announced that all adults in the UK would be vaccinated by the end of July 2021.

Germany On January 29, 2020, Germany recorded its first official case of covid-19 infection, in Munich, Bavaria. Already on March 13, 2020, the borders with Austria, Denmark and France were closed. Frank Schlosser et al. (2020) elaborate on this issue: "In Germany, these policies included border closures, travel bans, and restrictions of public activity (school and business closures), paired with appeals by the government to avoid trips voluntarily whenever possible". This excerpt shows the promptness of the measures taken by the German authorities. On April 15, 2020, German Chancellor Angela Merkel announced that Germany would allow travel abroad in cooperation with other European countries. On December 19, 2020, measures were intensified to curb the rising incidence among German citizens caused by the spread of a new strain of covid-19 that had come from the UK. Border controls in Germany ended on June 15, 2020; during this time, almost 250,000 people were not allowed into the country. Tourists arriving from countries with a high incidence of covid-19 are required to provide a negative test result obtained no more than 48 hours before arrival in Germany, a vaccination certificate, or a positive PCR test result that is more than 21 days but not more than 6 months old.
Further, foreign citizens are required to comply with a 10-day quarantine; sometimes the quarantine can be reduced to 5 days in case of a negative test result for covid-19. The exception is passengers who fly for the purpose of a business trip. In the state of Hesse, a 10-day quarantine is not required if a tourist has spent 72 to 120 hours in Germany with a high incidence of covid-19. As of March 25, 2021, a total of 11,746,915 BioNTech-Pfizer covid-19 vaccinations had been administered in Germany. German citizens wishing to get vaccinated against covid-19 must wait for their turn; otherwise they face a fine of up to 20 thousand euros. Turning to the scientific literature, one notices an innovation about which Pablo Rodríguez et al. (2021) write: "Digital contact tracing (DCT), i.e. using mobile phone apps to make contact tracing and notification between individuals, has recently been proposed to be a plausible complement of manual contact tracing within the Test, Trace and Isolate (TTI) containment strategy in the context of the COVID-19 pandemic. While several countries, initially including Singapore or South Korea and more recently Switzerland, Italy, France or Germany to cite a few have started to deploy different implementations of such technology, to date there is however a lack of empirical evidence of the effectiveness of such DCT". The presented fragment notes the importance of applying new technologies to control the situation with covid-19. This innovation is still in development, but it already shows some results.

Portugal

In Portugal, the first 2 confirmed cases were reported on March 2, 2020. A state of emergency was declared from March 12 to April 9. Already on March 24, it became clear that the country could not control the spread of covid-19, and the obvious solution was the extension of the state of emergency until April 17, 2020. Thus, Cristiana J. Silva et al. (2021) draw attention to the fact that "The air traffic to and from Portugal was banned for all flights to and from countries that do not belong to the European Union, with certain exceptions". This passage speaks of the prompt closure of air borders by the Portuguese authorities, with the exception of EU countries. At the same time, the authors Angelo Bonfanti et al. (2021) suggest not forgetting the "Seven main measures being used by hotel managers to design safe CXs: 1) hygiene and protection measures, 2) internal work reorganization, 3) servicescape reorganization, 4) investments in technology and digital innovation, 5) customer wait time reorganization, 6) staff training, and 7) updated communication". This excerpt reveals the main measures used in hotels and inns in Portugal; a specific list of conditions allows the disciplined organization of the work of the whole network. Flights to or from Portugal are allowed to EU countries, Schengen countries, Norway, Australia, China, Korea, Thailand and Switzerland. Passengers must carry a negative PCR test taken within 72 hours of boarding. Tourists arriving from countries with a high incidence of disease (Czech Republic, Slovakia, Estonia, Hungary and Sweden) can visit Portugal only if they have a negative covid-19 test result taken within 72 hours before landing, after which they must comply with a two-week quarantine in a location designated by the Portuguese Ministry of Health. All routes of communication with the UK and Brazil have been suspended due to the high incidence of disease in these countries.
Travel on urgent matters is allowed for citizens of the Schengen area or EU countries, foreign citizens who have a residence permit in any EU country, and citizens who came for study or family business. Foreign tourists wishing to take a covid-19 test can contact any private medical institution participating in the Portuguese Health Passport program. The results of the analysis will be ready within 72 hours from the date of delivery. From January 2021, only persons with a residence permit are allowed to cross the water and land borders of Portugal. Tourists wishing to visit Madeira and the Porto Santo Islands are required to present a negative PCR test for covid-19 taken within 72 hours before crossing the border; there is also the opportunity to take a free test at the airport. Tourists in need of medical care in Portugal are required to take out health insurance that covers cases of infection with the new coronavirus infection. Taking into account the morbidity statistics in Portugal, the issuance of Schengen visas during covid-19 has been suspended until further notice. As of March 25, 2021, a total of 1,434,044 covid-19 vaccinations had been made in Portugal using the BioNTech-Pfizer drug.

Turkey

The first case of coronavirus infection in Turkey was reported on March 10, 2020. In order to minimize the number of cases of infection in Turkey, the authorities made it mandatory for all residents and tourists to wear masks and to observe distancing. The country introduced a curfew, meaning that all residents and tourists are required to stay in their homes from 21:00 Friday to 05:00 Monday. There are also age restrictions: people over the age of 65, as well as children under the age of 20, can be on the street only from 10:00 to 16:00 on weekdays, and these categories of the population are prohibited from using public transport. As for the conditions for the entry of tourists into the country, the authorities introduced the following requirements:
• a PCR test for coronavirus must be provided and is checked at the airport of departure; the test must be taken within 72 hours before arrival in Turkey and is not required for children under 6 years old;
• since March 15, 2021, it has become mandatory to fill out a declaration on the register.health.gov.tr website; this declaration must be completed within 72 hours prior to the departure of the aircraft, and only transit passengers and children under 6 years old are exempted from this measure. A printout of the completed document or a screenshot of it must be carried.
In order to enforce all measures, the country's authorities introduced fines. Fines for the local population and tourists are rarely issued, and the amount of the fine is 900 liras according to the Istanbul standard. In the province of Mugla, as well as in Antalya, the authorities try not to fine and even distribute free masks to citizens; the size of the fine there varies, ranging from 900 to 3000 Turkish liras. Since mid-March 2021, a sharp increase in the number of cases of covid-19 has been recorded in the country. On April 11, 2021, the Turkish government announced that it was not going to introduce new restrictions on tourists visiting the country. In this regard, Russia decided to suspend flights with Turkey for the period from April 15 to June 1, 2021. Already on April 13, 2021, Turkish Minister of Health Fahrettin Koca issued a statement that this spring is the hardest period for Turkey, the newspaper Sözcü reports.
He claims that the third wave of covid-19 has overtaken Turkey. According to him, 85% of the population of the entire country is infected with the new coronavirus infection; approximately 70% of this population have been hospitalized with the British strain of covid-19, 300 people with the South African strain and 200 people with the Brazilian strain, and cases of infection with the California and New York strains have also been confirmed. The Minister of Health also assured that he is constantly in touch with the Russian authorities to adjust the measures taken to prevent the spread of covid-19.

Results and discussion

On the basis of a comparative research method, it is worth considering what precautions were taken in the aforementioned countries. It can be seen that most countries took the same precautions, which were recommended by the WHO for the implementation of sanitary and epidemiological rules. Based on empirical and theoretical research methods, in particular induction, it is worth analyzing the airlines that ceased operating in these countries due to the pandemic. Table 4 shows that many airlines ceased their activities, which affected national economies as well as tourist flows. It is worth noting that RBC experts have calculated that, for Russia alone, the cessation of international flights will affect the country's economy for the worse: Russia has lost about 360 billion rubles compared to the previous year. In the United States, airline losses amounted to $35 billion, in Germany to more than $15 billion, and in the UK to about $84 billion. Travel controls have failed to prevent the pandemic, despite some early and short-lived effects. However, country-specific precautions have helped contain the rate of increase in the incidence of covid-19. According to the United Nations World Tourism Organization (UNWTO), the number of travels around the world in the first 10 months of 2020 decreased by 70%, which corresponds to about 900 million fewer tourists. This indicates a fall in the economy and, as a consequence, in the level of health care in many countries of the world. However, as Nikolaos Askitas et al. (2021) correctly note, "International travel controls become effective at reducing incidence about 10 days after their introduction, for a duration of about two and a half weeks, after which they cease to be effective". This fragment allows us to conclude that the number of asymptomatic patients with covid-19 is much higher than that of patients with pronounced symptoms. Therefore, the introduction of systematic testing at scale will be a high priority, as not all countries in the world can provide timely testing for the presence of covid-19. A similar point of view is shared by the team of authors Valentina Palmieri et al. (2021): "As SARS-CoV-2 continues its global spread, universal mask wearing is protecting the world population. The most evident reason is of course the prevention of viral particles shedding from noses and mouths of infected and asymptomatic people as supported by model simulations and data collected during the first 100 days of 2020". This excerpt helps to understand that by observing the regime of self-isolation and distancing, using personal protective equipment (gloves, masks and antiseptics) and monitoring one's symptoms, one can significantly reduce the risk of coronavirus infection during international travel.
In addition, we must not forget, as Yu and colleagues note, that "Therefore, investigating the public's awareness of COVID-19 and its psychological state could help in identifying the public knowledge gap and potential psychological disorders so that more targeted efforts could be made to increase public awareness and decrease the risk of psychological disorders". The presented snippet has been added to draw attention to the importance of the psychological state of people during the covid-19 pandemic.

Conclusion

An analysis of the precautions for travelers in connection with the spread of covid-19 in the countries of the world allows us to draw the following conclusions:
1) Of the countries considered, the United States of America had the highest incidence, with more than 30 million cases of infection at the end of March. The UK and Russia had an almost equal number of cases of this infection: in each of these countries, there were about 4.5 million cases at the end of March. According to the table, the most disciplined in terms of safety precautions were China and Portugal; in these countries, the number of infected citizens did not exceed one hundred thousand people.
2) Travel in 2020 dropped sharply compared to 2019. The number of travelers in China in 2020 decreased by 99%, which allows us to say that the measures taken in this country were strict and effective, since the number of cases did not exceed one hundred thousand. In the US, the number of travelers in 2020 decreased by more than 86%. In the UK, as in China, the number of citizens who visited the country decreased by almost 99%. In Turkey, the number of tourists decreased by more than 70%; in percentage terms, this country became the most visited. In Russia, the number of people entering the country decreased by almost 83%. And in Germany and Portugal, the number of tourists amounted to 5-7% of the total number of those who visited these countries in 2019.
3) A large number of airlines ceased operations due to the pandemic, which affected national economies as well as tourist flows. It is worth noting that Russia lost about 360 billion rubles compared to the previous year. In the United States, airline losses amounted to $35 billion, in Germany to more than $15 billion, and in the UK to about $84 billion.
2022-02-04T14:10:04.115Z
2022-02-03T00:00:00.000
{ "year": 2022, "sha1": "38caa9ec430a531c2ccd35798a8f9a448078e179", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.trpro.2022.01.025", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "38caa9ec430a531c2ccd35798a8f9a448078e179", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Medicine" ] }
228954096
pes2o/s2orc
v3-fos-license
Exosomal prostate-specific G-protein-coupled receptor induces osteoblast activity to promote the osteoblastic metastasis of prostate cancer

Background: Prostate cancer (PCa) is the second leading cause of cancer-related deaths worldwide. Prostate-specific G-protein-coupled receptor (PSGR) has been identified as a new potential biomarker and therapeutic target for PCa. However, the influence of exosomal PSGR on PCa metastasis remains unknown. This study aimed to identify the regulatory role of exosomal PSGR in the bone microenvironment, prior to metastasis of PCa, and the underlying mechanism.
Methods: hFOB1.19 cells were co-cultured with PC-3 exosomes exhibiting PSGR overexpression. Alkaline phosphatase (ALP) and von Kossa staining methods were used to measure the osteogenesis of hFOB1.19 cells. RNA sequencing was used to screen the downstream target genes of PSGR and the signaling pathways involved. The expression of the candidate genes was verified using quantitative real-time polymerase chain reaction (qRT-PCR).
Results: ALP and von Kossa staining results showed that PC-3 exosomes with overexpressed PSGR enhanced osteogenesis of hFOB1.19 cells. A total of 853 mRNAs were differentially expressed in hFOB1.19 cells of the PSGR-overexpressing PC3 cell (PC3PSGR+ exosome) group compared to the negative exosome control (NC) group, among which 182 mRNAs were significantly upregulated and 671 were downregulated. The functional enrichment and pathway analysis showed that differentially expressed mRNAs were mainly involved in cellular responses to interleukin-1 (IL1), chemotaxis, inflammation, transcriptional misregulation in cancer, and the MAPK and NF-κB signaling pathways. qRT-PCR showed that levels of intercellular adhesion molecule-1 (ICAM1), RELB proto-oncogene, NF-κB subunit (RELB), and IL1 beta (IL1B) were significantly decreased in hFOB1.19 cells of the PSGR-overexpression group.
Conclusions: This study suggests that PSGR may regulate the MAPK and NF-κB signaling pathways involved in the process of bony metastases by targeting ICAM1, RELB, and IL1B.

Introduction

Prostate cancer (PCa), a heterogeneous disease, is the most common malignancy amongst males (1). PCa accounts for 9% of all cancer deaths, and the incidence and mortality rate of PCa have been shown to increase with age (2). Approximately 190,000 PCa diagnoses and 27,000 deaths occur annually in the USA (3). Most PCa are activated by TMPRSS2-ERG gene fusion, which drives the expression of the silenced E26 transformation-specific transcription factor ERG in prostate cells (4,5). Furthermore, c-Src tyrosine kinase, insulin-like growth factor 1 receptor (IGF-1R), and focal adhesion kinase (FAK) also play important roles in prostate tumor progression (6). Currently, plasma prostate specific antigen (PSA) assays are used globally as the primary screening method for PCa (7,8). The main treatments for metastatic PCa are medical castration, androgen receptor (AR) blockers, and chemotherapy (9). A fair proportion of patients still present with high-risk disease (10). In many of these cases, the primary tumor's evolutionary process culminates in the formation of metastases, which are the cause of 90% of cancer-related deaths (11). Considerable research efforts have identified markers associated with the initiation and progression of PCa (12). However, the relationship between PCa osseous metastasis and exosomes remains unclear.
Exosomes are double-lipid membrane extracellular vesicles, 30-150 nm in size (13), which are formed by inward budding of the multivesicular bodies secreted from cells, and play a key role in intercellular communication (14). Recently, exosomes have become important factors in our understanding of tumorigenesis (15). Tumor-derived exosomes shuttle cellular proteins and RNA to cells within the tumor environment, generating the immunosuppressive properties of tumor cells which promote tumor growth (16,17). In addition, Prostate-specific G-protein-coupled receptor (PSGR) has been identified as a novel specific gene of prostate tissue, with homology to the G protein-coupled odorant receptor gene family (18). Highly-significant, cell-specific overexpression of this receptor has been identified in 67.2% of tumor specimens, when compared to normal tissue (19). It is has been well-documented that exosomes secreted by cancer cells contain a tumor-specific signature (20), and generate antitumor immune responses in several murine tumor models (21,22). However, it is currently unknown whether exosomes containing PSGR from PC-3 cells can regulate the progression of PCa. In this study, in order to clarify the effect of exosomal PSGR derived from PCa cells on the mechanism and function of PCa cells, we tried to construct stable PSGRoverexpressing PC3 cell lines. Following hFOB1.19 coculture with PSGR-overexpressing PC-3 cells, alkaline phosphatase (ALP) and von Kossa staining methods were used to detect the effect of exosomal PSGR derived from PSGR-overexpressing PC3 cells on the osteogenesis capacity of osteoblast cells. Transcriptome sequencing was used to detect differentially expressed mRNAs (DEmRNAs) in osteoblast cells incubated among exosomes with and without PSGR. Transmission electron microscopy (TEM) Exosomes isolated from hFOB1.19 cells were adsorbed onto glow discharged 150 mesh formvar/carbon-coated TEM grids (Ted Pella, Redding California, USA) for 5 min. The samples were negatively stained with 2% aqueous uranyl acetate for 5 min and examined at 80 kV (Hitachi H-7600, Tokyo, Japan). Images were captured with a side-mounted 1K AMT Advantage digital TEM camera system (Advanced Microscopy Techniques, Corp. Woburn, MA, USA). RNA-sequencing Total RNA was extracted from hFOB1.19 cells incubated with exosomes using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). To guarantee a high-quality RNA-sequencing analysis, the integrity of the total RNA was determined by agarose electrophoresis, and Nanodrop (Thermo Scientific Nanodrop 2000 Microvolume Spectrophotometer, RRID:SCR_018042) was used for quality control and quantification. Superfluous RNA was stored at −80 ℃. Following this, RNase R was used (37 ℃, 30 min, twice). After quality control, a sequencing library was constructed using an RNA library construction kit (NEB, USA). The operation steps are as follows: 3'-and 5'-adapters were attached to the RNA, and the first strand complementary DNA (cDNA) libraries were constructed. Their sequences were analyzed via the Illumina HiSeq™ 2000 (Wistar Genomics Facility, RRID:SCR_010205; Illumina Inc, San Diego CA). Fast-QC (http://www.bioinformatics.babraham. ac.uk/projects/fastqc/) software were used to evaluate the overall quality of sequencing data. The low-quality reads and reads containing adapters in raw reads were filtered out. Finally, the constructed library was checked with the Agilent Bioanalyzer 2100 (2100 Bioanalyzer Instrument, RRID:SCR_018043). 
ALP staining

An appropriate cell lysate was used to lyse the cells, which were then centrifuged to obtain the supernatant for the detection of alkaline and acid phosphatase (ALP) activity. ALP assay buffer, p-nitrophenyl phosphate (pNPP) substrate, and supernatant from the lysed cells (non-sterile) were added to 96-well microtiter plates according to the instructions provided with the alkaline phosphatase activity detection kit. Solutions were mixed by pipetting and incubated at 37 ℃ for 30 min. Then 100 μL of stop solution was added to each well to stop the reaction, and absorbance was measured at 405 nm. Three replicate wells were used for each sample. Finally, ALP activity was calculated in the samples according to the definition of the unit of enzyme activity. Heat-insensitive ALP activity was determined at 54 ℃. In the experiment, a p-nitrophenol standard and a blank control were included. Finally, samples were washed with PBS, sealed with neutral resin, and imaged using an inverted phase contrast microscope (XSP-37XB, Shanghai No. 6 Optical Factory, China).

Von Kossa staining

When the cells reached 90% confluence, hFOB1.19 cells were washed three times with PBS preheated to 37 ℃. A von Kossa kit (G3282, Solarbio, Beijing, China) was used at each time point. For von Kossa staining, hFOB1.19 cells were fixed and dehydrated with 4% paraformaldehyde for 30 min. Fixed cells were incubated in 1% silver nitrate solution for 30 min in sunlight, and immersed in 5% sodium thiosulfate for 2 min. This was followed by counterstaining with alkaline fuchsin for 10 s (red staining). Finally, samples were washed with PBS, sealed with neutral resin, and imaged using an inverted phase contrast microscope (XSP-37XB, Shanghai No. 6 Optical Factory, China). Cells stained with von Kossa stain after 14-21 days were quantified using ImageJ (ImageJ, RRID:SCR_003070; Wayne Rasband, National Institutes of Health).

Quantitative real-time PCR

The total RNA was taken from −80 ℃ storage and thawed in an ice box. Total RNA samples were examined via agarose gel electrophoresis and Nanodrop quality control and quantification. cDNA was synthesized from 1 μg total RNA. Quantitative real-time polymerase chain reaction (qRT-PCR) was performed on a CFX Connect Real-Time System (Bio-Rad CFX96 Real-Time PCR Detection System, RRID:SCR_018064; Bio-Rad, Hercules, CA, USA). Gene expression data were normalized to GAPDH. The relative gene expressions were calculated according to the 2^(−ΔΔCt) method. All experiments were performed in triplicate. All primers used for RT-PCR analysis were designed and synthesized by Yingbio Technology, Co., Ltd. (Shanghai, China). The primer information is shown in Table 1.

Bioinformatics analyses

Fragments per kilobase of transcript sequence per million base pairs sequenced (FPKM) were used to evaluate the levels of gene expression. Differential expression between the two groups (three biological replicates per group) was evaluated using DESeq (DESeq, RRID:SCR_000154). The log2-fold change (log2FC) and false discovery rate (FDR) were calculated. |log2FC| >0.5, FDR <0.05, and P<0.05 were used as thresholds. The Gene Ontology (GO; GOEAST-Gene Ontology Enrichment Analysis Software Toolkit, RRID:SCR_006580) enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG; Kyoto Encyclopedia of Genes and Genomes Expression Database, RRID:SCR_001120) pathway enrichment analyses were done using Fisher's test.
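To make the thresholding and enrichment steps above concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' actual pipeline: it flags genes passing the stated cut-offs (|log2FC| > 0.5, FDR < 0.05, P < 0.05) and applies a Fisher's exact test for over-representation of one gene set among the flagged genes. The column names, the toy expression values and the example "NF-κB" gene set are assumptions introduced only for illustration.

```python
# Minimal sketch of DE thresholding and a Fisher's-exact enrichment test.
# Column names, toy values and the example gene set are hypothetical.
import pandas as pd
from scipy.stats import fisher_exact

def call_differential(df, lfc_cut=0.5, fdr_cut=0.05, p_cut=0.05):
    """Flag genes passing |log2FC| > 0.5, FDR < 0.05 and P < 0.05."""
    mask = (df["log2FC"].abs() > lfc_cut) & (df["FDR"] < fdr_cut) & (df["pvalue"] < p_cut)
    de = df[mask].copy()
    de["direction"] = de["log2FC"].apply(lambda x: "up" if x > 0 else "down")
    return de

def enrichment_test(de_genes, term_genes, background_genes):
    """2x2 Fisher's exact test: is a gene set over-represented among DE genes?"""
    de, term, bg = set(de_genes), set(term_genes), set(background_genes)
    table = [
        [len(de & term), len(de - term)],                 # DE genes in/out of the term
        [len((bg - de) & term), len(bg - de - term)],     # non-DE genes in/out of the term
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

if __name__ == "__main__":
    # Toy expression table (hypothetical values, not study data).
    df = pd.DataFrame({
        "gene":   ["ICAM1", "RELB", "IL1B", "FDSL1", "GAPDH"],
        "log2FC": [-1.2,    -0.9,   -2.1,   -0.3,    0.05],
        "pvalue": [0.001,   0.004,  0.0002, 0.20,    0.90],
        "FDR":    [0.01,    0.03,   0.002,  0.35,    0.95],
    })
    de = call_differential(df)
    print(de[["gene", "log2FC", "direction"]])

    # Hypothetical "NF-kB signalling" gene set, for illustration only.
    nfkb_set = {"RELB", "IL1B", "NFKB1"}
    odds, p = enrichment_test(de["gene"], nfkb_set, df["gene"])
    print(f"odds ratio = {odds:.2f}, Fisher P = {p:.3f}")
```

In practice the same 2×2 contingency logic is typically repeated per GO term or KEGG pathway, followed by multiple-testing correction across all tested terms.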
Statistical analysis Statistical analyses were calculated using Graphpad Prism 8.0 (GraphPad Prism, RRID:SCR_002798; Graphpad Software Inc, USA) and SPSS statistics 22.0 (SPSS, RRID:SCR_002865; SPSS Corporation, USA). The results are presented as mean ± standard deviation (mean ± SD). The two-tailed paired Student's t-test was used to analyze differences, and comparisons among multiple groups were assessed by one-way variance (ANOVA) using the Post Hoc Tukey test. Fisher's exact test was used to evaluate the results of Von Kossa stain. Logarithmic transformation and analysis were performed on the values of logarithmic distribution skew. The criterion for significant differential expression were set to a 2-fold change, and statistical significance was considered as P value <0.05. Characterization of the isolated cell-derived exosomes by TEM To identify the collected exosomes derived from PSGRoverexpressing PC3 cells, TEM analysis was used, which revealed that we had obtained particles with a complete membrane structure (Figure 1). In short, we successfully obtained exosomes derived from PSGR-overexpressing PC3 cells. Effect of exosomal PSGR derived from PSGRoverexpressing PC3 cells on osteogenesis In order to detect the osteogenic capacity of hFOB1.19 cells treated with the exosomes from PSGR-overexpressing PC3 cells (PC3 PSGR+ exosome) group and negative-exosome control cells (NC) group, we performed Von Kossa and ALP staining assays. Von Kossa staining showed that a small amount of mineralized cartilage was present in the NC group. In comparison, the number of mineralized particles present in the PC3 PSGR+ exosome group was significantly higher, and were stained black (Figure 2A). Moreover, ALP staining results showed an increased number of hFOB1.19 cells that were stained blue/purple in the PC3 PSGR+ exosome group compared to the NC group ( Figure 2B). The above results indicate that exosomal PSGR derived from PSGR-overexpressing PC3 cells enhances osteogenesis differentiation of hFOB1.19 cells. To identify mRNAs that are differentially regulated after PSGR overexpression, we performed a differential expression analysis between 3 hFOB1.19 cells in the PC3 PSGR+ exosome group and 3 hFOB1.19 cells in the NC group. Volcano plot analysis was used to visualize variation of mRNAs expression between the two groups and results were plotted ( Figure 3A). A total of 853 DEmRNAs between the PC3 PSGR+ exosome group and the NC group were identified by RNA sequencing. Among the 853 DEmRNAs, 182 were upregulated and 671 were downregulated in the PC3 PSGR+ exosome compared to the NC group. Hierarchical clustering analysis of randomly selected 853 DEmRNAs clearly separated them into two groups ( Figure 3B). Together, our results showed different mRNA expression between the PC3 PSGR+ exosome group and the NC group, suggesting that PSGR overexpression can regulate unique mRNAs that may be associated with PCa. GO and KEGG analysis of the DEmRNAs To understand the main functions of the DEmRNAs, GO analyses was performed. The DEmRNAs were enriched in 907 GO terms, including 90 molecular functions (MF), 236 cell compositions (CC) and 466 biological processes (BP) the in PC3 PSGR+ exosome vs. the NC group. We revealed that DEmRNAs were mainly implicated in cellular responses to interleukin-1 (IL1), chemotaxis, inflammation, and positive regulation of angiogenesis. Importantly, we also found that these DEmRNAs are involved in the positive regulation of prostaglandin secretion ( Figure 4A). 
Kyoto Encyclopedia for Genes and Genomes (KEGG) enrichment analysis revealed that the target gene was involved in the regulation of 148 signaling pathways, including 24 significantly enriched pathways. In particular, DEmRNAs were significantly enriched in rheumatoid arthritis, transcriptional misregulation in cancer, the TNF signaling pathway, cytokine-cytokine receptor interaction, and the MAKP and NF-κB signaling pathways ( Figure 4B). These results suggest that abnormal expression of mRNAs in the PC3 PSGR + exosome group may activate inflammationrelated pathways. Validation of the key mRNAs To validate the RNA-Seq data, qRT-PCR was performed to determine gene expression levels. We selected four candidate mRNAs (FDSL1, ICAM1, RELB and IL1B) with high differential expression multiples and high abundance in hFOB1.19 cells of the PC3 PSGR+ exosome group. The qRT-PCR results are shown in Figure 5. The expression of intercellular adhesion molecule-1 (ICAM1) (P<0.05), RELB proto-oncogene, NF-κB subunit (RELB) (P<0.05), and IL1 beta (IL1B) (P<0.01) were significantly lower in the PSGR-overexpression group compared to the NC group. In particular, IL1B demonstrated a higher-fold change, compared with other mRNAs. Furthermore, FDSL1 showed no significance between the PC3 PSGR+ exosome group and the NC group ( Figure 5). Discussion PCa is a common malignancy in men in the United States and the second leading cause of cancer mortality (23). At present, serum PSA and HK3 tests are mainly used for the early diagnosis of PCa (24). The occurrence of bone metastases is an important clinical feature and cause of death in PCa (11). Gaining insight into the mechanisms of PCa and the factors surrounding this process of bone metastasis of PCa could provide an opportunity for early diagnosis and therapeutic targeting of PCa. In this study, we obtained exosomes derived from PCa cells with PSGRoverexpression by exosomal paracrine. Our results showed that a large number of mRNAs were differentially expressed in hFOB1.19 cells of the PC3 PSGR+ exosome group, and that some of these are involved in the MAKP and NF-κB pathways. PSGR is a novel prostate-specific gene of the G-protein coupled OR family that maps to chromosome 11p15 (18). It is expressed as different transcripts using at least three different polyadenylation signals (25). Xu et al. (19) found that PSGR is overexpressed in PCa cells and suggested the involvement of PSGR in the progression of PCa. Exosomes are endosome-derived vesicles secreted by many cell types, which participate in cellular communication by transporting mRNAs, miRNAs and proteins to target cells where they can elicit biological responses (26). Tumor exosomes are more abundant in cancer patients, have a significant effect on the increasing of tumor growth and angiogenesis, and have the ability to evade immune-surveillance (27)(28)(29). Ye et al. (30) demonstrated that miR-141-3p levels were significantly higher in MDA PCa 2b cell exosomes, and that exosomal miR-141-3p promoted osteoblast activity and increased osteoprotegerin expression. These results suggest that exosome-mediated PSGR transport may play (34) demonstrated that the interference of LINC00152 expression can inhibit the MAPK signaling pathway and inhibit the lymphatic metastasis of gastric cancer cells. 
It is suspected that DElncRNAs regulate the expression of ICAM1, RELB and IL1B in PCa cells through the MAPK pathway, and promote the adherence of PCa cells to bone via the expression of intercellular adhesion molecule-1 (ICAM1) in osteoblasts.

Conclusions

In conclusion, PSGR overexpression significantly promotes the process of bony metastasis through the paracrine action of exosomes. Bioinformatics analysis showed that the identified mRNAs were significantly enriched in the MAPK and NF-κB pathways. We believe that PSGR may be secreted via exosomes to regulate the MAPK and NF-κB signaling pathways and thereby participate in the process of bony metastasis by targeting ICAM1, RELB and IL1B.

Acknowledgments

Funding: This work was supported by the National Natural Science Foundation of China (81702859).
2020-11-05T09:08:56.656Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "0f2d55e874f119eee0cdc10dfad2950b9561604a", "oa_license": "CCBYNCND", "oa_url": "https://tcr.amegroups.com/article/viewFile/45456/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "856d3d35df07f879c1e8523152274ff1e96ce042", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
256369218
pes2o/s2orc
v3-fos-license
Which neural mechanisms mediate the effects of a parenting intervention program on parenting behavior: design of a randomized controlled trial The Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) has proven effective in increasing parental sensitivity. However, the mechanisms involved are largely unknown. In a randomized controlled trial we examine parental neurocognitive factors that may mediate the intervention effects on parenting behavior. Our aims are to (1) examine whether the intervention influences parents’ neural processing of children’s emotional expressions and the neural precursors of response inhibition and to (2) test whether neural changes mediate intervention effects on parenting behavior. We will test 100 mothers of 4–6 year old same-sex twins. A random half of the mothers will receive the VIPP-SD Twins (i.e. VIPP-SD adapted for twin families), consisting of 5 home visits in a 3-months period; the other half will receive a dummy intervention. Neurocognitive measures are acquired approximately 2 weeks before and 2 weeks after the intervention. Mothers’ electroencephalographic (EEG) activity is measured while performing a stop signal task and in response to children’s facial expressions. To obtain a complementary behavioral measure, mothers also perform an emotion recognition task. Parenting behavior will be assessed during parent–child interactions at pre and post intervention lab visits. Our results will shed light on the neurocognitive factors underlying changes in parenting behavior after a parenting support program, which may benefit the development of such programs. Dutch Trial Register: NTR5312; Date registered: January 3, 2017. Background Parents play a pivotal role in children's social, emotional and cognitive development (e.g., [1,2]). Parental sensitivity, defined as the ability to recognize, accurately interpret and promptly respond to children's cues [3], is a core construct indicating quality of parenting. Parental sensitivity has been found to be an important predictor of children's internalizing and externalizing problem behavior [4][5][6][7], social competence [8,9] and emotion regulation [10,11]. The Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) [12] has been proven to enhance parental sensitivity and sensitive discipline in several randomized controlled trials in various countries [13]. However, the underlying mechanisms accounting for the observed change in parenting behavior remain largely unknown. The current protocol presents a randomized controlled trial in which we aim to examine the neurocognitive mechanisms through which intervention effects on parenting behavior might be established. The focus will be on assessing the underlying neural activity of two constructs that may be important in parenting behavior: emotion recognition and inhibitory control. Parental sensitivity To promote survival, infants are biologically predisposed to develop an attachment relationship with their caregiver [14]. A secure attachment relationship is established through early caregiving experiences and is related to positive outcomes in early and later childhood and adolescence [15][16][17][18], highlighting the importance of developing a secure attachment relationship. More specifically, metaanalytic studies confirm that insecure and disorganized attachment is related to later externalizing problem behavior [19], internalizing symptoms [20] and poorer social competence [21]. 
An important determinant for developing a secure attachment relationship is parental sensitivity [22,23], as changes in parental sensitivity have been shown to lead to changes in attachment security in children [23]. Enhancing parental sensitivity thus benefits the quality of the attachment relationship which in turn is supposed to lead to positive child outcomes [4-7, 24, 25]. Although early caregiving experiences during infancy and early childhood are central to developing a secure attachment relationship, parents' responses to their children's communications regarding feelings of anxiety and stress remain of great importance during childhood. Neuropsychological research into parenting provides insight into parents' processing of and responding to children's attachment cues. For example, EEG research can provide insight into which specific early, automatic processes (e.g. face perception) and/or later, more controlled ('reflective') processes (e.g. resource allocation [26]) contribute to (successful and sensitive) parental behavior. Outcomes may have implications for the malleability of parental responses as well as the kind of interventions needed to optimize parental sensitivity. We will investigate neural processing of emotional facial expressions and the neural correlates of inhibitory control as it is plausible that these processes are important for parental sensitivity. More specifically, the two neurocognitive processes of interest may be affected by the intervention since key elements of the intervention involve parental coping with children's displays of (negative) emotionality. Processing facial expressions An important aspect of parenting is recognition and accurate interpretation of emotional child cues, for example emotional facial expressions. An extensive body of EEG research on faces reports the N170 to be a neurophysiological marker of face processing. The N170 is a negative-going event-related potential (ERP) component that peaks at approximately 170 ms post stimulus onset at occipito-temporal electrode sites and is usually largest over the right hemisphere. The N170 is thought to reflect the relatively early stage of processing and encoding face configuration (e.g., [27,28]) (for a recent review see [29]). Although there is some debate regarding effects of emotional valence on N170 amplitude and latency (with contradictory findings; [30][31][32][33][34][35]), N170 amplitudes are generally larger for emotional compared to neutral faces (see [36], for a meta-analysis) and there is evidence that N170 amplitude is sensitive to the intensity of emotional expressions [34]. In addition, individual differences in socio-emotional characteristics (e.g., [37][38][39]) as well as negative childhood parenting experiences [40,41] have been found to affect N170 and VPP amplitudes (thought to reflect activity of the same set of generator dipoles; [42]). Importantly, a recent study has provided initial evidence that the neural processing of children's emotional facial expressions may be responsive to behavioral intervention: Neural activity in response to emotional facial expressions was found to be different in Child Protective Services (CPS)-referred mothers who received an attachment-based intervention compared to a randomized control group [43]. In the current study we aim to test whether the intervention will affect the N170 in response to children' emotional faces in a large non-clinical sample of mothers of young same-sex twins. 
To complement neural data on processing facial expressions, mothers will perform an Emotion Recognition Task (ERT; [44]) to measure facial emotional processing at the behavioral level. The ERT measures perception of facial emotional expressions presented at different intensities. The ERT contains neutral child faces (0% emotional expression) that gradually (i.e. in 10% steps) change into an emotional expression (100% emotional expression). By pressing a button, mothers indicate that they recognize the emotion they think is expressed on the face and subsequently select the corresponding emotion they recognized. Inhibitory control Inhibitory control plays a crucial role in emotion regulation [45] and both processes impact parenting behavior, especially in stressful situations [46]. Challenging child behavior may evoke negative parenting, including the use of harsh discipline, and lack of support and structure [47]. Low cognitive control in general has been related to a variety of negative parenting behaviors, such as ineffective and controlling parenting styles, negative reactions toward children's emotions, maternal rejection and risk for maltreatment [48]. Thus, parents' efficient control as reflected in the ability to inhibit negative parenting responses to child attachment signals may facilitate parental sensitivity and sensitive discipline when parents are faced with challenging child behavior. In addition, the association between low inhibitory cognitive control and increased negative parenting was found to be stable in parents with children in early childhood through adolescence [48], highlighting the importance of supporting inhibitory capacities in the early stages of parenting. The amplitude of the N2 component elicited in stop signal tasks (which requires inhibition of a prepotent response at the presentation of a specific stimulus; see [49]) is implicated in inhibitory control over responses (for a review, see [50]). The N2 is a negative-going ERP component that peaks at around 200 ms after stimulus onset at fronto-central electrode sites. The N2 has been found to be involved in response inhibition, and may be affected by a combination of stop signal processing, conflict detection and suppression of motor responses [50][51][52]. Smaller (less negative) N2 amplitudes have been related to less efficient response inhibition [53] as well as impulsive-violent behavior [54]. As inhibitory control plays an important role in emotion regulation and thereby modulates parental reactions to children's behavior [46], we aim to test whether the intervention enhances N2 amplitudes as well as the efficiency of response inhibition in a stop signal paradigm. Parental stress Parenting behavior can be negatively influenced by parental stress [55,56]. For example, parents who experience more daily stressors show more lax and harsh parenting behavior, and may lack warmth and responsiveness [57,58] and daily hassles influence both parenting behavior and parent-child interactions [59]. Parenting interventions may be effective in enhancing parental feelings of efficacy, and in reducing reported parental stress [60]. Stressful life events are robustly related to heightened cortisol levels, and in a previous study a parenting intervention was found to be effective in reducing cortisol levels in children carrying the DRD4 7-repeat allele [61]. 
For the current study we aim to investigate whether the intervention lowers stress in parents, as reflected in self-reported stress and in lower cortisol levels, which in turn may facilitate parental sensitivity. Intervention The VIPP-SD aims to enhance parental sensitivity and sensitive discipline [12] and has been proven to be effective in twelve randomized controlled trials in various populations (combined effect size of d = 0.47 [13]). For the current study, the VIPP-SD protocol was adapted for families with young same-sex twin children, the VIPP-SD Twins [62]. Compared to parents of singletons, parents of twins are exposed to more parenting challenges that may put them at risk for developing mental health issues [63]. In addition, parents of twins experience more parenting stress and depression, experience parenting as more difficult and obtain less pleasure from their children [64], highlighting the importance of parenting support for twin families. Aims and hypotheses 1) Our primary aim is to investigate intervention effects on the neural correlates of inhibitory control and the neural processing of emotional facial expressions. First, we will examine whether the intervention affects the neural processing of children's emotional faces as reflected in the N170 component. We expect that N170 amplitudes in response to emotional faces will be enhanced in parents in the intervention condition compared to parents in the control condition. In addition, we will explore potential latency and differential emotion effects as well. Second, we will examine whether the intervention affects the N2 during a response inhibition (stop signal) task. Compared to parents in the control condition, we expect N2 amplitudes in response to stop signals to be enhanced in parents in the intervention condition. In addition, we will explore whether the intervention affects latency of the N2. 2) Our secondary aim is to investigate the neurobiological mechanisms through which intervention effects on parenting behavior are established. More specifically, we will investigate whether the intervention results in changes in these neurocognitive processes which in turn contribute to observable effects on parenting behavior. We will examine whether intervention effects on parenting behavior are mediated by intervention effects on the N170 and N2. The expectation is that the intervention positively affects the neural processing of children's emotional faces and inhibitory control mechanisms, as indicated by enhances amplitudes of the N170 and the N2, which in turn will promote parental sensitivity and sensitive discipline during parentchild interactions. In addition, we will examine whether intervention effects on sensitive parenting behavior are mediated by the stress hormone cortisol. It is expected that the intervention reduces stress levels in parents which in turn promotes parental sensitivity and sensitive discipline. 3) Our tertiary aim is to explore whether intervention effects on parenting behavior and on N170 and N2 amplitudes are moderated by patterns of asymmetric frontal cortical activity (see Fig. 1). Asymmetric frontal cortical activity is thought to reflect an individual's motivational tendency toward approach or withdrawal [65]. Individual differences in motivational tendencies may affect their sensitivity to interventions targeting social behavior. 
In a recent study, for example, we found that effects of administered oxytocin and experiences of love withdrawal on donations to charity were moderated by individual differences in asymmetric frontal cortical activity. Oxytocin and love withdrawal affected donations only for individuals showing greater activity of the right than the left frontal cortex [66]. We expect frontal cortical asymmetry to play a similar moderating role in intervention effects on the N170 and N2, and, ultimately, parenting behavior (Fig. 1). Study design The current study is part of the Leiden Consortium on Individual Development (L-CID) which is a 5-years randomized controlled trial including a parenting intervention in which families with young same-sex twins living in the western region of the Netherlands participate (for a more detailed description on the full L-CID study design, see [62]). The current study focuses on factors involved in the intervention, with the primary caregiver of the twins as participants. The intervention is delivered to a random 50% of the primary caregivers. The study consists of two assessments in which only the primary caregiver will take part. The first assessment (i.e. pretest) will take place 2 weeks before and the second assessment (i.e. posttest) 2 weeks after the intervention. Both assessments will take place in the laboratory and focus on the neural mechanisms through which intervention effects on parenting behavior are brought about. To measure parenting behavior, parental sensitivity and sensitive discipline will be assessed during the first posttest of the L-CID study in which both the primary caregiver and children take part [62]. This protocol paper adheres to the SPIRIT guidelines (See Additional file 1). Participants Recruitment As the current study is part of the larger L-CID study, recruitment has been completed. Families with twins living in the western region of the Netherlands were selected from municipality records. Families were eligible for participation when twins were same gender, when the parents were fluent in Dutch and when the grandparents were born in Europe (for more detailed information on recruitment, see [62]). For the current study, parents will be excluded in case of a history of or current neurological disorders and/or damage, psychiatric disorders and/or use of psychoactive medication. Parents will be invited for the first assessment by phone after which they will receive a detailed information letter. Parents will receive a financial reimbursement of €20 for participating in each assessment and their traveland babysitting expenses will be covered. Randomization Randomization to intervention condition is done every month at the family level in a ratio of 2:3, using a computer-generated blocked randomization sequence, with a block size of 19 families based on timing of the intervention and stratified by gender of the primary parent and twin. For the current study, we will use a condition ratio of 1:1, leading to a group of 50 intervention and 50 control parents. To select this subsample, a similar number of families from the intervention and control condition will be invited for the study, using the same blocked computer-generated randomization sequence and stratified by twin gender, but excluding male primary parents. The remaining families in both the intervention and control condition will be assigned to the intervention or control "shadow sample". 
The shadow samples will be used when parents who are assigned to the parent study refuse to participate in this part of the project. An independent researcher who is not involved in data collection or coding will perform assignment of participants. Right before the start of the intervention, allocation will be performed in order to prevent selective attrition. Because of the open-label design, researchers, interveners and participants are blinded to assignment before, but not after, randomization. Importantly, only after the first (pretest) parent assessment has taken place will parents be informed about the condition they are assigned to (see Fig. 2). Coders and research assistants who carry out the post-intervention home visits and laboratory sessions are blind to treatment allocation, to reduce to a minimum any bias generated by knowledge about the allocation of participants.
(Fig. 1: Overview of central study parameters and aims. The numbers in the figure correspond to the order of the aims of the study.)

Sample size and power

For our primary aim, testing the effect of the intervention on the N170 and N2, with a repeated measures analysis with α = .05 and a sample size of 100 parents, the power to detect at least a medium-sized effect is > .9 (repeated measures ANOVA within-between interaction, G*Power 3.1.9.2). For our secondary aim, testing mediating mechanisms, the power to detect medium to large effects is at least .9, as the power to detect mediating effects is generally larger than it is for main effects [67]. For our third aim, testing moderation effects, the power to detect medium to large effects is .5-.9.

VIPP-SD Twins

The original version of the intervention (VIPP-SD) has been adapted for use with twin families, the VIPP-SD Twins (see [62]). Instead of only including one target child in the intervention sessions, both twins are included. Parenting a twin may lead to different kinds of challenges for parents, such as dividing attention and sharing or competition between twins, which are less relevant for parents of singletons (for a detailed description of the adaptations, see [62]). The experimental group (50% of the parent sample, randomly selected) will receive the VIPP-SD Twins between the pretest and posttest (see [62,68] for a detailed description). The VIPP-SD Twins consists of five home visits in which families are visited at home by a female intervener. All interveners were extensively trained and used the manual VIPP-SD version 3.0 [68] that was adapted for twin families [62]. The manual describes the structure, themes, tips, and exercises for parent and children for each session. Every session starts with videotaping approximately 15 min of standardized parent-child interactions, such as playing or reading a book together [69]. Between sessions, the intervener prepares comments on the child's or parent's behavior based on the theme of the next session and selects illustrating video fragments. In the next session, after new video material is collected, the intervener reviews the video of the previous session with the parent and gives video feedback on the selected video fragments. The focus of this feedback period is on positive and successful interaction moments, and the intervener indicates when positive parenting is effective. The parent is explicitly acknowledged as the expert on her own child. The first four intervention sessions each have their own themes with respect to sensitivity and sensitive discipline [12].
Subsequently, the four themes focus on exploration versus attachment behavior, perception of the child's signals, the importance of prompt and adequate responding to child's signals and sharing emotions. The final session is a booster session, in which the previous themes are repeated and integrated. The parents' partner is invited to participate in the final session (for details, see [62,68]). Control condition To ensure the same number of contact for all participating families, a control condition is implemented. During the same period as the intervention sessions, a research assistant will make six phone calls to families in the control condition. The subject of phone calls will be general development of the twins in a semi-structured interview format. However, families do not receive any specific information or advice about parenting or child development (e.g., [69]). Primary aim Our primary aim is to investigate intervention effects on two neurocognitive processes. First, we will examine whether the intervention affects the neural processing of children's emotional faces as reflected in the N170, an ERP component reflecting face processing [27]. The N170 will be quantified from participants' electroencephalographic (EEG) activity recorded during a face processing paradigm. Participants' EEG will be acquired using 129-channel hydrocel geodesic sensor nets with the NetAmps300 amplifier and NetStation software (Electrical Geodesics, Inc.; EGI). While their EEG is recorded, participants will view pictures of children's faces with a happy, angry or neutral expression. Pictures were selected from the Child Affective Facial Expression (CAFE) set [70], a validated set of 2-to 8-year-old children's faces. To make sure that child identity would not vary across emotional categories, we included only pictures of children who had validated pictures for all 3 emotions of interest (n = 16 children). During the face processing paradigm, each of the 48 selected faces (i.e. 16 happy, 16 angry and 16 neutral) is presented 3 times in quasi-random order (with the restriction that the same condition cannot occur more than four times in a row), resulting in a total of 144 trials (i.e. 48 happy, 48 angry and 48 neutral). Every trial starts with a fixation cross (duration: 800-1200 ms, varying randomly) followed by the stimulus, that is presented for 1000 ms. Every 24 trials, 10-second blink-breaks were inserted so participants could rest their eyes. In every set of 24 trials (varying randomly between the fifth and twentieth trial), participants are asked about the gender of the child in the previously presented face, to keep participants engaged in the task. Second, we will examine whether the intervention affects neural activity underlying inhibitory control as reflected in the N2, an ERP component implicated in response inhibition [50]. The N2 will be quantified from participants' EEG activity recorded (see above) during a stop signal task. During the stop-signal task, participants are presented with a "go"-signal, a green arrow pointing left or right (presented on an black background) that requires a response (pressing the corresponding button on a response pad). On some trials, a "stop"-signal, a red arrow (pointing in the same direction as the preceding green arrow) is presented after the go-signal, and participants should withhold (i.e. inhibit) the response. Every trial starts with a white fixation cross (duration: 800-1200 ms, varying randomly) presented on a black screen followed by a green arrow. 
In a random 25% of the trials, the go-stimulus is followed by the red arrow. Presentation duration of the green arrow is 15000 ms on go trials (i.e., no stop-signal is presented) and varies on stop trials depending on the participant's performance. The duration equals 250 ms at the start of the task and is increased by 50 ms after every successful inhibition and shortened by 50 ms after every unsuccessful inhibition. The task thus becomes more difficult when participants successfully inhibit their responses and less difficult when inhibition is unsuccessful. The stop signal task consists of 400 trials in total, of which 100 are stop trials.

Secondary aim

Our secondary aim is to test whether the intervention effects on parenting behavior are mediated by changes in the N170, the N2, and the stress hormone cortisol. To measure cortisol, hair samples (i.e. approximately 100 strands) will be collected during both parent assessments, thus before and after the intervention. Hair strands are collected at the posterior vertex, as close to the scalp as possible (e.g. [71,72]). Samples are taped to a paper on which the scalp end is marked. The samples are packed in tinfoil and stored at room temperature until analysis. Hair is a valid and non-invasive tool to measure total cortisol release over a longer period of time [72-74] and has been used to determine cortisol levels in both adults and children [71,75,76]. Parenting behavior is operationalized as parental sensitivity and sensitive discipline. Parental sensitivity is assessed during free play and structured play situations, and discipline is assessed during a compliance task. During the compliance task the parent is asked to instruct the child to do something he or she does not like (e.g., cleaning up, or refraining from touching attractive toys [77,78]). All parent-child interaction tasks are videotaped, and trained coders will code the videos for parental sensitivity and sensitive discipline. For coding purposes, the Erickson 7-point rating scale for Supportive Presence and the 7-point rating scale for Intrusiveness will be used [79]. To prevent coder drift, regular meetings will be organized to discuss videos in order to obtain intercoder reliability of ICC > .65 and Pearson's r > .70. Aggregated measures across ratings and settings will be constructed for each parenting construct.

Tertiary aim

For our third aim, we will examine whether intervention effects are moderated by patterns of asymmetric frontal cortical activity. Participants' EEG activity will be recorded during four periods of 'rest': sitting in a comfortable chair facing a computer screen in a dimly lit room, participants will be asked to "just relax" and keep their eyes focused on a fixation cross (as much as possible) presented on the computer screen. After 2 min, participants are asked to close their eyes for 2 min. This sequence of resting measures will be conducted before the start and after the end of the face processing and stop signal tasks, resulting in 8 min of resting EEG recordings. Differences in power in the EEG alpha band (8-12 Hz) over the left and right frontal cortex (right minus left) will be computed to quantify asymmetric frontal cortical activity (e.g., [66]).
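As a rough illustration of how such an asymmetry index can be derived from resting EEG, the sketch below estimates alpha-band (8-12 Hz) power for one left and one right frontal channel with Welch's method and takes the difference of log powers (right minus left). This is a simplified outline rather than the study's actual processing pipeline: the channel pair, sampling rate, use of log power and the synthetic test signal are all assumptions made for the example.

```python
# Minimal sketch of a frontal alpha-asymmetry computation (right minus left).
# Channel selection, sampling rate and preprocessing are simplified assumptions;
# the study's 129-channel EGI pipeline is not reproduced here.
import numpy as np
from scipy.signal import welch

def alpha_power(signal, sfreq, band=(8.0, 12.0)):
    """Mean power spectral density within the alpha band for one channel."""
    freqs, psd = welch(signal, fs=sfreq, nperseg=int(2 * sfreq))  # 2-second windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_asymmetry(left_chan, right_chan, sfreq):
    """Asymmetry index: ln(right alpha power) - ln(left alpha power).
    Positive values indicate relatively greater right-frontal alpha power."""
    return np.log(alpha_power(right_chan, sfreq)) - np.log(alpha_power(left_chan, sfreq))

if __name__ == "__main__":
    sfreq = 250.0                          # assumed sampling rate (Hz)
    t = np.arange(0, 120.0, 1.0 / sfreq)   # one 2-minute resting segment
    rng = np.random.default_rng(0)
    # Synthetic data: a 10-Hz alpha rhythm plus noise, stronger on the right channel.
    left = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
    right = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
    print(f"asymmetry index (right - left): {frontal_asymmetry(left, right, sfreq):.3f}")
```

Log-transforming the band power before subtracting is a common convention in frontal asymmetry research; the raw right-minus-left power difference described above could equally be used by dropping the logarithm.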
Secondary aim Our secondary aim is to test whether the intervention effects on parenting behavior are mediated by changes in the N170, the N2, and the stress hormone cortisol. To measure cortisol, hair samples (i.e., approximately 100 strands) will be collected during both parent assessments, thus before and after the intervention. Hair strands are collected at the posterior vertex, as close to the scalp as possible (e.g., [71,72]). Samples are taped to a paper on which the scalp end is marked. The samples are packed in tinfoil and stored at room temperature until analysis. Hair is a valid and non-invasive tool to measure total cortisol release over a longer period of time [72][73][74] and has been used to determine cortisol levels in both adults and children [71,75,76]. Parenting behavior is operationalized as parental sensitivity and sensitive discipline. Parental sensitivity is assessed during free play and structured play situations, and discipline is assessed during a compliance task. During the compliance task the parent is asked to instruct the child to do something he or she does not like (e.g., cleaning up or refraining from touching attractive toys [77,78]). All parent-child interaction tasks are videotaped, and trained coders will code the videos for parental sensitivity and sensitive discipline. For coding purposes, the Erickson 7-point rating scale for Supportive Presence and the 7-point rating scale for Intrusiveness will be used [79]. To prevent coder drift, regular meetings will be organized to discuss videos and to maintain intercoder reliability (ICC > .65, Pearson's r > .70). Aggregated measures across ratings and settings will be constructed for each parenting construct. Tertiary aim For our third aim, we will examine whether intervention effects are moderated by patterns of asymmetric frontal cortical activity. Participants' EEG activity will be recorded during four periods of 'rest': sitting in a comfortable chair facing a computer screen in a dimly lit room, participants will be asked to "just relax" and keep their eyes focused (as much as possible) on a fixation cross presented on the computer screen. After 2 min, participants are asked to close their eyes for 2 min. This sequence of resting measures will be conducted before the start and after the end of the face processing and stop signal tasks, resulting in 8 min of resting EEG recordings. Differences in power in the EEG alpha band (8-12 Hz) over the left and right frontal cortex (right minus left) will be computed to quantify asymmetric frontal cortical activity (e.g., [66]). Statistical analyses Initial data analysis, with data inspection steps, will be carried out after the research plan and data collection have been completed but before formal statistical analyses are conducted [80]. We will apply range checks for data values to check data quality. It will be tested whether missing data are missing completely at random, at random, or not at random [81], and multiple imputation procedures will be applied to impute missing data. Data transformation will be applied when necessary to approach a normal distribution of data points [82]. To avoid any inflation of statistical tests, we are not planning to examine any interim data-sets. For all aims, the effect of the intervention compared to the control condition will be analyzed using intent-to-treat analyses. For the primary aim, we propose a repeated measures model to estimate the intervention effect on the N170 and N2, with experimental condition as between-subjects factor and assessment time-point as within-subjects factor. The regression coefficient of the interaction between condition and time-point estimates differential changes in neural activity between the intervention and control groups over time. For our secondary aim, exploring mechanisms of intervention effects, we will use the Montoya & Hayes approach [83] in a multilevel or repeated measures design to test for intervention effects on neurocognitive variables and examine whether these neurocognitive changes mediate the observed changes in parenting behavior. For our third aim, examining the moderation of the intervention effect, we will include a moderator term in the model. Data management and ethics Data will be handled strictly confidentially. Data will be stored in the storage environment of the university's Computing Centre in Leiden. Information security is treated in accordance with the International Security Code. Based on European legislation, personal information and data are processed in conformity with the Dutch Personal Information Protection Act and the Dutch Personal Data Protection Act. Data and biological specimens are linked to the subject by using a separate subject identification code. Subjects are not personally identifiable in scientific communications. Currently, we do not have ethical permission to share data. Only the formal research team, which includes the principal investigators, post-docs, and PhD students, will have access to the final trial dataset. All research team members signed an agreement of confidentiality. The L-CID trial is embedded in the larger national Consortium on Individual Development (CID), which unites developmental researchers from seven different universities. CID composed an international scientific advisory board for advice on and supervision of the research program, and a supervisory board to whom our research team reports at least annually. The research protocol received ethical approval from the Central Committee on Research Involving Human Subjects in the Netherlands (CCMO; NL49069.000.14). Additional informed consent for the current two assessments was obtained from all participants before the first assessment. Participants were reminded that participation in the trial is voluntary, that their data are stored anonymously and securely, and that they can withdraw from the study at any time, without consequences. All consent forms and related documentation given to the participants were approved by the CCMO and can be requested from the authors. The name and contact information of an independent expert (an MD and professor in child and adolescent psychiatry), who will be available during the trial for questions from participants, is included in the information for the participants. The VIPP has been used in twelve previous RCTs, including more vulnerable populations [13,84].
As there are no reported risks associated with the intervention, there are no criteria for discontinuing the intervention, except on the basis of participants' own requests (see also [62]). Concomitant care during the trial is not prohibited, but we will administer an inventory of previous or concurrent experiences with video-feedback or other types of preventive care, such as parent training or well-baby clinics. Trial results will be communicated to participants using newsletters about the trial, and to professionals in the form of (popular) journal articles and professional or scientific conferences. Authorship of journal articles will be determined based on the APA guidelines and recommendations from the International Committee of Medical Journal Editors. The trial is registered in the Netherlands Trial Registry (NTR; Trial ID: NTR5312, Date registered: January 3, 2017). Any protocol modifications or plans for ancillary studies will be reported to the NTR, the CCMO, and this journal, and additional informed consent will be obtained from participants. Discussion The current protocol presents the design of a randomized controlled trial in which we aim to investigate neural and hormonal mechanisms that may be involved in the effects of the VIPP on parenting behavior. More specifically, we hope to gain insight into the mediating mechanisms through which intervention effects on parenting behavior are brought about. So far, research shows that the VIPP is effective in enhancing parental sensitivity; however, the neurocognitive mechanisms involved in enhanced parental sensitivity remain largely unknown. The results will provide fundamental insight into parenting behavior and intervention efficacy. Strengths and limitations The study has several strengths, such as random assignment to condition, the gold standard for testing intervention effects, and the neurobiological and behavioral assessments of mediating, moderating, and outcome variables. The VIPP-SD program is firmly rooted in well-validated attachment theory and social learning theory [12], and has been proven effective in enhancing parental sensitivity in a series of randomized controlled trials in several countries [13,84]. The pretest-posttest control group design provides maximum power to trace intervention effects and their mediators. The study has some limitations as well, such as the use of multiple interveners carrying out the intervention across families. This may introduce variability in intervention efficacy. However, by using a standardized manual, extensive training prior to the intervention, and supervision during the intervention, we expect to limit possible intervention divergences. Another possible limitation is that we test parents of twins, and therefore the results may be limited in their generalizability. Availability of data and materials We currently do not have ethical approval for sharing the data. Authors' contributions LK drafted the manuscript and contributed to the study design. SE contributed to the writing of the manuscript and to the study design. BGvdB and RH contributed to the development of the tasks and to the study design. MHvIJ and MJBK conceived of the study and contributed to the study design. All authors contributed to revising and writing of the manuscript and read and approved the manuscript. Competing interests The authors declare that they have no competing interests. Consent for publication Not applicable.
Ethics approval and consent to participate The Central Committee on Research Involving Human Subjects in the Netherlands (NL49069.000.14) approved the research protocol. During the first assessment, written informed consent for all aspects of the study was obtained from the participants. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2023-01-30T15:13:50.458Z
2017-03-21T00:00:00.000
{ "year": 2017, "sha1": "a3b87364aa68b371ca9831d333b934402fbc3713", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s40359-017-0177-0", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "a3b87364aa68b371ca9831d333b934402fbc3713", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
55880702
pes2o/s2orc
v3-fos-license
Potential spread and economic impact of invasive Drosophila suzukii in Brazil The objective of this work was to outline the potential distribution and economic impact of Drosophila suzukii (Diptera: Drosophilidae), a recent invasive pest, in Brazil. Two maps of the potential establishment of the species were drawn based on the ecoclimatic index (EI), which uses the following thermal requirements for the species: with thermal stress, the most restrictive scenario for spread; and without thermal stress. The EI was classified into four ranges: unfavorable, ≤25%; less favorable, >25 to ≤50%; favorable, >50 to ≤75%; and highly favorable, >75%. Economic losses were estimated based on the most restrictive map. The highly favorable areas were overlapped with those of the maps of production data for each possible host (apple, grape, peach, persimmon, fig, and pear). Considering these six hosts, the overlap between the highly favorable and the production areas varied from 45.5% (grape) to 98.3% (apple). However, the monetary estimation of the potential losses in the worst-case scenario (no control measures) was possible only for figs and peaches. Southern Brazil is the most climatically favorable area for D. suzukii development and where potential economic losses are expected to be the greatest. Maximum average temperatures (>30°C) are the main ecological factor limiting D. suzukii spread in Brazil. Introduction The dipterans of the Drosophilidae family encompass more than 4,000 species (Yassin, 2013). These flies breed on rotten fruit and other organic matter; however, the majority of them are not of economic importance. Only Zaprionus indianus (Gupta, 1970), the African fig fly, and Drosophila suzukii (Matsumura, 1931) (Diptera: Drosophilidae) are considered pests of economic relevance, because they are able to attack fruit suitable for human consumption prior to harvest (Van Timmeren & Isaacs, 2014). Drosophila suzukii became invasive in the second half of the 20th century. It was first described in 1931 by Matsumura in Japan (Hauser, 2011). In the United States, it was collected in Hawaii in 1980, then in California in 2008 (Hauser, 2011), spreading across the western and eastern coasts of the country and of Canada. The species was also discovered in Spain in 2008, and in France in 2009, then spread to Central Europe and other countries on the Mediterranean coast, such as Slovenia and Croatia (Cini et al., 2014). In Brazil, D. suzukii was reported for the first time in the subtropical forests of the southern region (Deprá et al., 2014), where it damaged strawberries in the municipality of Vacaria, in the state of Rio Grande do Sul (Santos, 2014). In the state of São Paulo, in the southeastern region, D. suzukii was found in fruits traded at a fruit and vegetable wholesale center (Vilela & Mori, 2014). In addition, specimens of D. suzukii were collected in the Brazilian Cerrado (savannah-like vegetation), in Brasília, DF (Paula et al., 2014). This finding confirms that D. suzukii is able to spread up to 1,400 km per year (Calabria et al., 2012).
Mathematical models for species distribution have been used to estimate the probability of an invasive species establishing itself in areas where it has not yet been found. Climex is one of the most widely used models to predict the potential geographic distribution of invasive species, such as Helicoverpa armigera (Lepidoptera: Noctuidae) in the United States (Kriticos et al., 2015) and Ceutorhynchus obstrictus (Coleoptera: Curculionidae), Meligethes viridescens (Coleoptera: Nitidulidae), and Oulema melanopus (Coleoptera: Chrysomelidae) in Canada (Olfert & Weiss, 2006). The potential spread of D. suzukii in Brazil is very significant, because more than 80% of the production areas of most of its hosts are located in climatically highly favorable areas. Restricting fruit trade in the internal market is a measure adopted to avoid the spread of this pest; however, it is not very effective due to the species' high natural spreading capability. This restriction could be more effective in unfavorable areas, such as important grape-production areas in the Northeast region of the country. It should be noted that economic evaluations are still necessary for any quarantine restriction. The economic impact of D. suzukii has been described in the United States (Bolda et al., 2010; Goodhue et al., 2011) and in Italy (De Ros et al., 2013). The damaged fruits are considered unmarketable, and chemical control can result in the rejection of fruits for export and consumption due to insecticide residues (Haviland & Beers, 2012). However, there is a lack of estimates of the economic losses caused by this pest around the world, which limits the analysis of the costs and benefits of phytosanitary measures (Kehlenbeck et al., 2012). The objective of this work was to outline the potential distribution and economic impact of D. suzukii, a recent invasive pest, in Brazil. Materials and Methods The simulated map of D. suzukii establishment was drawn using the Climex model (Hearne Software, Melbourne, Australia), which is based on information of the ecoclimatic index (EI) and on temperature thresholds for the development of a species. The index was determined by EI = GI × SI, in which GI is the growth index and SI is the stress index (Sutherst, 1991; Sutherst et al., 1999). Considering that insects are poikilothermic and that their development is mainly determined by temperature variation, it was assumed that GI = TI, in which TI is the temperature index, defined by the temperature limits (DV parameters) of the Climex model. For D. suzukii, the DV parameters considered were: limiting low temperature (DV0) of 7.2ºC; lower optimal temperature (DV1) of 13.4ºC; upper optimal temperature (DV2) of 28.1ºC; and limiting high temperature (DV3) of 30ºC. All temperature limits were based on data from Tochen et al. (2014). The number of degree-days for complete development is 208 (Table 1), i.e., the thermal constant k in positive degree-days (PDD) (Tochen et al., 2014). In the model, when the average temperature is between DV1 and DV2, the value of the base temperature is subtracted from the average temperature in order to obtain degree-days. When the accumulated positive degree-days (PDD) reach the thermal constant k, the model considers that one development cycle has been completed. Calculations of the number of generations per year were performed based on the thermal constant k (PDD).
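As an illustration of this bookkeeping, the degree-day accumulation and generation count can be sketched as follows. This is a minimal reading of the description above, not the Climex implementation; in particular, we assume that the "base temperature" subtracted from the daily mean is DV0, and all names are our own.

# Minimal sketch of the degree-day bookkeeping described above.
DV0, DV1, DV2 = 7.2, 13.4, 28.1   # limiting low / lower optimal / upper optimal (deg C)
K = 208                            # thermal constant k, in positive degree-days (PDD)

def accumulated_pdd(daily_mean_temps):
    """Sum positive degree-days over days whose mean lies between DV1 and DV2."""
    return sum(t - DV0 for t in daily_mean_temps if DV1 <= t <= DV2)

def generations_per_year(daily_mean_temps):
    """Completed development cycles: accumulated PDD divided by the constant k."""
    return int(accumulated_pdd(daily_mean_temps) // K)

# Example: a year at a constant 21 deg C mean accumulates 365 * (21 - 7.2),
# roughly 5037 PDD, which corresponds to about 24 completed generations.
print(generations_per_year([21.0] * 365))  # 24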
Two environmental stress factors were considered for the SI: cold stress (THCS) and heat stress (THHS). For THCS, a temperature below 5°C is the limit for adult activity (Kanzawa, 1936); however, adults can survive the winter at negative temperatures. For THHS, 30°C is the limit for larval eclosion and development (Kinjo et al., 2014). These two stress parameters were adjusted according to the Mediterranean template of Climex (Sutherst et al., 1999). When the minimum temperature is lower than the THCS value or the maximum temperature is higher than the THHS value, the rhythm of organism development is decelerated at a rate provided by the model. The Climex model uses a matrix of 61,076 terrestrial points, with distances of 0.5 degree among them. For each one of them, there are data on minimum, average, and maximum temperatures, based on climatological normals between 1961 and 1990 (New et al., 2002). The EI was calculated combining TI and SI, and the maps were generated by the Dymex simulator software, using the "Compare Locations" function for one species of Climex, version 3.00.009 (Hearne Software, Melbourne, Australia). For the species establishment maps, two scenarios were considered for Brazil: one with heat and cold stress, and the other without heat and cold stress. The EI was classified into four classes of favorability: unfavorable, ≤25%, less than 3 months climatically favorable per year; less favorable, >25 to ≤50%, between 3 and 6 months climatically favorable per year; favorable, >50 to ≤75%, between 6 and 9 months climatically favorable per year; and highly favorable, >75%, more than 9 months climatically favorable per year. Another map was drawn, showing the distribution of the production per municipality of the following D. suzukii hosts: apple, grape, peach, and persimmon. The data about these hosts were obtained from the statistics on production per municipality, "Produção Agrícola Municipal" (Instituto Brasileiro de Geografia e Estatística, 2013), and all the information was classified according to the fruit species and to production value. The two simulated maps, one of D. suzukii potential distribution and one of host production, were overlapped in order to observe regions with high phytosanitary risk (EI for D. suzukii >75%). A partial budgeting method was used to estimate the potential economic impact of D. suzukii, employing a scenario without control measures (Soliman et al., 2010). The potential economic losses in monetary terms (values in US$) were obtained through the equation L($) = PV(hfa) × L(r), in which L($) is the potential economic loss in monetary terms; PV(hfa) is the crop production value of the highly favorable area (EI >75%); and L(r) is the maximum production loss reported without control measures (%).
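The favorability classes and the partial-budget loss estimate translate directly into two small helper functions. The sketch below only restates the thresholds and the equation given above; the example numbers are illustrative, not data from this study.

def favorability_class(ei_percent: float) -> str:
    """Map an ecoclimatic index (EI, %) onto the four classes defined above."""
    if ei_percent <= 25:
        return "unfavorable"       # < 3 climatically favorable months per year
    if ei_percent <= 50:
        return "less favorable"    # 3-6 favorable months
    if ei_percent <= 75:
        return "favorable"         # 6-9 favorable months
    return "highly favorable"      # > 9 favorable months

def potential_loss(pv_hfa: float, loss_rate: float) -> float:
    """L($) = PV(hfa) x L(r): production value at risk in highly favorable areas."""
    return pv_hfa * loss_rate

# Illustrative numbers only: US$ 50 million of production value located in
# highly favorable areas (EI > 75%) and a reported 20% maximum loss.
print(favorability_class(80.0))    # 'highly favorable'
print(potential_loss(50e6, 0.20))  # 10000000.0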
Results and Discussion The Climex parameters resulted in simulated distribution maps that are very similar to the real distribution of D. suzukii around the world, especially in North America, Europe, and Japan (Figure 1). In the United States, both the east and west coasts are suitable for the establishment of D. suzukii. This is similar to the information reported by Walsh et al. (2011). The obtained world map is also in alignment with the current D. suzukii distribution across Europe (Cini et al., 2012) and Japan (Mitsui et al., 2010). Most of Brazil is not favorable for the establishment of D. suzukii (EI <25%) (Figure 1 and Table 2). However, states in the southern region are located in climatic areas that are considered favorable to highly favorable for D. suzukii establishment. The southeastern region, including the states of São Paulo and Minas Gerais, has more than 50% of its area classified as favorable or highly favorable. The average temperature of these areas is about 25°C, which is favorable for the establishment of invasive species (Shi et al., 2010). The two simulated maps of predicted favorability in Brazil, with temperature stress and without temperature stress, show some similarities and differences. In the scenario without stress, the spread of D. suzukii throughout Brazil would not be more extensive than that of the stress-restrictive scenario (Figures 2 and 3). The main difference between the scenarios is that the areas with intermediate favorability (EI >25% to <75%) are predicted to be more extensive than those in the restrictive scenario. Therefore, the potential distribution of D. suzukii can occur somewhere between the two simulated scenarios. The distribution limits of D. suzukii will be governed by the occurrence of average maximum temperatures above 30°C. However, despite this temperature limit, the adaptation of D. suzukii to new environments and ecological niches might occur, although this process is not fast (Broennimann et al., 2007). Overall, the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo, and Minas Gerais are located in regions that show a high probability of economic losses due to D. suzukii. These states are the main producers of grapes, apples, peaches, and persimmons (Figure 4). Other D. suzukii hosts, such as strawberry (Fragaria vesca L.), raspberry (Rubus idaeus L.), and mulberry (Morus sp.), were not considered in the economic impact study because there are no data available concerning production per municipality (Instituto Brasileiro de Geografia e Estatística, 2013). However, these hosts are still important. In 2013, for example, 9,708 Mg of strawberry were marketed by the fruit and vegetable wholesale center "Companhia de Entrepostos e Armazéns Gerais de São Paulo" (Ceagesp) at the price of R$ 10.00 per kilogram (Agrianual..., 2015); with losses of 30% in production (Santos, 2014), this would mean R$ 29 million or US$ 11.5 million. The potential economic losses for peaches and figs were 20 and 30%, according to data from Lies (2009) and from Berry (2012), respectively, representing a figure of about US$ 29.2 million or R$ 75.9 million as the maximum potential economic losses caused by D. suzukii in Brazil (Table 3). It should be highlighted that yield losses estimated at 20% must be considered only an average benchmark (Bolda et al., 2010), which is a critical simplifying assumption (Goodhue et al., 2011). An analysis based on price response reduces estimated losses, but the inclusion of managing costs in market prices is not possible due to the lack of identification of suitable control approaches (Goodhue et al., 2011). In this scenario, a powerful tool for economic analysis, such as partial equilibrium modeling (Soliman et al., 2010), cannot be used to estimate the losses of D. suzukii in Brazil. Despite showing limitations for a precise estimation of losses, predicted monetary values for economic impact assessment of invasive pests are very useful for cost-benefit analysis of pest eradication programs (Miranda et al., 2015).
In spite of the available statistics for grape, apple, pear, and persimmon production in Brazil, there are no data about losses caused by D. suzukii around the world. It is probable that potential losses are lower than those reported for small fruits. The production value of these four fruits is R$ 3.87 billion or US$ 1.49 billion (Instituto Brasileiro de Geografia e Estatística, 2014). Brazilian apple production is almost entirely located in highly favorable areas for D. suzukii establishment, whereas grape production is located in less than half of the highly favorable areas for D. suzukii (Table 2). However, highly favorable areas for D. suzukii account for 90% of the grape production area in South Brazil, mainly in the states of Rio Grande do Sul, Santa Catarina, and Paraná, and for 45% of the grape production area in the state of São Paulo. The São Francisco Valley, the most important area of grape production in the states of Bahia and Pernambuco, in Northeast Brazil, is a climatically unsuitable area for D. suzukii, even in simulated scenarios without temperature stress. Considering a hypothetical "worst case scenario", in which D. suzukii spreads to all climatically suitable areas in Brazil and no control measures are adopted, fruit production in the state of Rio Grande do Sul would experience the greatest economic impact. Southeastern and northeastern Rio Grande do Sul are areas that concentrate the majority of preferential hosts for D. suzukii. This impact could be expected based on the fact that, in the United States, D. suzukii adds a cost of between US$ 250 and US$ 350 per acre (Werts & Green, 2014). Producers of small fruits have been the most affected in economic terms. Before the D. suzukii invasion, small fruit growers from Oregon spent US$ 1 million on pest management; after the invasion, this aggregated cost increased to US$ 15 million in 2013 (Werts & Green, 2014). One relevant aspect for quantifying the impact of D. suzukii is that the species is broadening its host range in new environments. This adaptation to new hosts is inferred based on the first records of D. suzukii in Brazil in forest reserves (Santos, 2014). The population dynamics of D. suzukii may be affected by the fructification patterns of wild hosts. Moreover, the abundance of D. suzukii in orchards will depend on how well the food resources provided by the wild hosts can sustain populations in natural areas. Expected adaptation to new hosts (cultivated and native) in the short term may allow D. suzukii to spread to all areas favorable for its development, in which its population might form a continuum (Hauser, 2011; Maier, 2012). As a parallel example, the invasive drosophilid Z. indianus is undergoing a niche shift after colonization of new areas around the world (Mata et al., 2010). In this case, potential distribution areas can be inferred based on the new locations in which the organism establishes itself.
Considering that the population fluctuation of D. suzukii depends on food resources and temperatures, an inference may be made using population data from other drosophilids. Some vinegar flies, such as Drosophila melanogaster, Drosophila nigricuria, and Drosophila cardinoides, are found in the Pampas region of the state of Rio Grande do Sul. These species are more abundant between June and September, when minimum and maximum average temperatures are between 10 and 25°C (Poppe et al., 2013). These temperatures are close to the temperature limits of D. suzukii, and these four months will probably be the most favorable. Therefore, this short season of most favorable conditions may decrease the potential economic impact of D. suzukii. In general, it will be necessary to develop management options in order to control D. suzukii, with constant surveillance and monitoring for some years. Control measures can be applied based on solid scientific information in the case of phytosanitary emergencies caused by D. suzukii. Conclusions 1. South Brazil is the region that shows the most climatically favorable areas for Drosophila suzukii establishment. 2. Maximum average temperatures (>30°C) are the main ecological factor limiting D. suzukii spread in Brazil. Figure 1. Potential distribution of Drosophila suzukii around the world. Areas in circles are regions where D. suzukii occurred before it was recorded in Brazil. USA, according to Walsh et al. (2011); Europe, Cini et al. (2012); and Japan, Mitsui et al. (2010). Figure 2. Map of predicted climatic favorability for Drosophila suzukii establishment in Brazil in a scenario without temperature stress. Probability ranges are temperature intervals of the ecoclimatic index from Climex (Hearne Software, Melbourne, Australia). The darkest areas are highly favorable for D. suzukii establishment. Figure 3. Map of predicted climatic favorability for Drosophila suzukii establishment in Brazil in a scenario with temperature stress. Probability ranges are the temperature intervals of the ecoclimatic index from Climex (Hearne Software, Melbourne, Australia). The darkest areas are highly favorable for D. suzukii establishment. Table 1. Parameter values used in the Climex software (Hearne Software, Melbourne, Australia) for the simulation of the potential distribution of Drosophila suzukii in Brazil. Table 2. Percentage distribution of occupied area according to the classes of climatic favorability for Drosophila suzukii establishment in Brazil and in five states. Table 3. Estimation of production value and maximum potential losses for Drosophila suzukii in six host plants in Brazil(1).
2018-12-15T02:19:42.143Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "407a604957d0ac3ecbf6ab8b7ace7e0b70bd3c47", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/pab/v51n5/1678-3921-pab-51-05-00571.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "407a604957d0ac3ecbf6ab8b7ace7e0b70bd3c47", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
266225756
pes2o/s2orc
v3-fos-license
Clinical phenotype and outcome of persistent SARS-CoV-2 replication in immunocompromised hosts: a retrospective observational study in the Omicron era Purpose This study aims to describe the clinical, virological and radiological characteristics as well as the treatment strategies and outcomes of immunocompromised patients with persistent SARS-CoV-2 replication. Methods We performed a retrospective cohort study of immunocompromised patients at the University Medical Center Freiburg between 01/2022 and 05/2023. Patients with substantial immunosuppression and persistent SARS-CoV-2 detection (Ct-value < 30 after 14 days) were included. Results 36 patients in our cohort reported mainly fever, dyspnoea or continuous cough. Viral load was significantly higher in concurrent samples taken from the lower respiratory tract (Ct-value = 26) than from the upper respiratory tract (Ct-value = 34). The time of detectable viral RNA after the start of antiviral treatment was shorter in patients receiving two antivirals (median 15 days vs. 31 days with one antiviral agent). Short-course antiviral therapy (≤ 5 days) was less efficient in reducing symptoms and viral load than prolonged therapy of > 10 days. In 30% (8/27) of patients with repeated CT scans, we found the emergence of chronic pulmonary changes, which were more frequent in patients with B cell depletion (37%, 7/19) compared to patients with organ transplantation (12%, 2/17). Conclusion Ongoing SARS-CoV-2 replication in the lower respiratory tract is a relevant differential diagnosis in patients with severe immunosuppression and continuous cough, fever or dyspnoea, even if nasopharyngeal swabs test negative for SARS-CoV-2. Especially in B cell-depleted patients, this may lead to inflammatory or fibrotic-like pulmonary changes, which are partially reversible after inhibition of viral replication. Antiviral therapy seems to be most effective in combination and over a prolonged period of time of > 10 days. Trial registration number DRKS 00027299. Supplementary Information The online version contains supplementary material available at 10.1007/s15010-023-02138-0. Veronika Götz and Philipp Mathé contributed equally to the study as first authors. Introduction In the early phase of the SARS-CoV-2 pandemic, with predominance of the Alpha, Beta and Delta variants, immunocompromised patients were at risk of suffering from fulminant and often fatal courses of SARS-CoV-2 infection. With the global emergence of the Omicron variant and the broad availability of vaccinations, hospitalisation and mortality rates were declining [1]. Recent evidence, mainly derived from case reports or case series, points towards a change in the clinical manifestation of SARS-CoV-2 infection in immunocompromised patients from fulminant courses to long-term viral replication [2][3][4][5][6][7][8][9][10][11]. Yet, little is known about the clinical course of immunocompromised patients as well as long-term outcomes, including chronic pulmonary changes. As most of these patients, especially after B cell depletion, are unable to mount a robust humoral immunity after vaccination or infection, effective antiviral treatment seems essential. However, the efficacy of antiviral substances
In the Omicron era, a vast majority of patients treated for SARS-CoV-2 infection in our tertiary care centre suffer from underlying disease and most are immunocompromised.We present data from 36 patients with substantial immunosuppression and persisting SARS-CoV-2 detection.We aim to describe the clinical, virological and radiological characteristics as well as the management and outcome of immunocompromised patients with persistent SARS-CoV-2 replication in the Omicron era. Patients and setting We performed a retrospective cohort study of immunocompromised patients at the University Medical Center Freiburg (UMCF) between January 2022 and May 2023.This time period is characterised by the dominance of the Omicron variant and its sub-strains in our region [13].The recruitment of patients was done by the Infectious Diseases (ID) consultation service (see Supplementary Fig. 1).Patients were included if a) substantial immunosuppression was present (see definitions) and b) SARS-CoV-2 with a Ctvalue of < 30 (reflecting a concentration of around > 30.000 copies/ml in our setting) was detected in a respiratory sample > 14 days after the first positive test for SARS-CoV-2 [14][15][16]. Definitions Substantial immunosuppression was defined as patients receiving either immunosuppressive medication for solid organ transplants or having received B cell-depleting agents less than one year ago (like rituximab, obinutuzumab). The start of the SARS-CoV-2 infection was defined as the first date with a positive SARS-CoV-2 PCR.In cases where only a time period for a positive SARS-CoV-2 PCR could be assumed based on clinical documentation, the date was chosen which led to the shortest possible time of persistent infection (3/36 cases). Mode of acquisition was classified as hospital-acquired if the first detection of SARS-CoV-2 and its related symptoms occurred > 48 h after admission [17].Recurrent fever was defined as body temperature > 38 °C more than 14 days after start of infection with an initial improvement of symptoms. Treatment failure was defined by a persistent positive SARS-CoV-2 PCR (Ct-value < 40) > 14 days after end of performed therapy.Clinical treatment success was defined accordingly, namely as the absence of symptoms > 14 days after end of therapy. Data acquisition Clinical data were retrieved from electronic health care records, including radiological and virological data.Due to official regulations, a random subgroup of patient samples were sequenced for variant detection as previously described [18].SARS-CoV-2 PCR was performed according to standard protocols from naso-oropharyngeal swabs or broncho-alveolar lavage using CE in vitro-certified diagnostic assays. Radiological analysis All of the acquired CT scans were retrospectively analysed in a dedicated reading session by an experienced thoracic radiologist (P.A.).The images were reviewed on a Picture Archiving and Communication System (PACS), using 1 mm slice thickness in standard lung window. 
Data acquisition Clinical data were retrieved from electronic health care records, including radiological and virological data. Due to official regulations, a random subgroup of patient samples was sequenced for variant detection, as previously described [18]. SARS-CoV-2 PCR was performed according to standard protocols on naso-oropharyngeal swabs or broncho-alveolar lavage, using CE-certified in vitro diagnostic assays. Radiological analysis All of the acquired CT scans were retrospectively analysed in a dedicated reading session by an experienced thoracic radiologist (P.A.). The images were reviewed on a Picture Archiving and Communication System (PACS), using 1 mm slice thickness in a standard lung window. For the initial CT scan, the pattern of lung involvement was classified as per the RSNA Expert Consensus Statement and the CO-RADS classification systems, which indicate the level of suspicion of SARS-CoV-2 pneumonia [19]. Furthermore, the extent of overall lung involvement was scored semi-quantitatively with a score from 0 to 15 (the volume of involvement of each lobe was scored as 1 when less than one third of the parenchyma was involved, 2 for involvement of one third to two thirds of the lobar volume, and 3 when more than two thirds of the lobar volume was affected) [20]. A note was made of the axial and craniocaudal distribution of lung involvement. The predominant pneumonia pattern (ground-glass opacities, consolidation, or mixed) was analysed. In the follow-up CT scans, the extent of findings was compared to each previous CT for progression or regression, and the residual findings were further characterised. Fibrotic-like changes referred to perilobular bands, bronchial dilatation and reticulations [21]. Statistical analysis For the comparison of viral load in broncho-alveolar lavage and naso-oropharyngeal swabs (the latter ± 2 days around the date of the lavage), a paired Wilcoxon signed-rank test was used. Fisher's exact test was used for group comparisons of symptoms. Figures were produced using RStudio (R version 4.2.2). Nearly half of the patients needed low-level oxygen via nasal cannula at initial presentation; yet, oxygen supplementation could be stopped before hospital discharge in all patients (excluding one patient with previous long-term oxygen therapy). Length and compartmentalisation of viral detection The median time between the date of the first and the last PCR-based detection of SARS-CoV-2 infection was 40.5 days (1.Q.-3.Q.: 26.5-66), and the time to the first negative sample was 59 days (1.Q.-3.Q.: 42-164.5). The time to the last positive test was shorter in the organ-transplanted subgroup compared to patients with B cell depletion (median 36 vs. 41 days, respectively) but varied greatly between patients in general (see Fig. 1). In patients with samples taken from the upper and the lower respiratory tract within a three-day time frame, we found a significant difference in viral load of around eight Ct-value steps between lower respiratory tract samples (median Ct-value 26) and upper respiratory tract samples (median Ct-value 34) (see Fig. 2, p = 0.007). Radiological findings In 75% of patients (27/36), a CT scan was performed, with the first scan at a median of 14 days (1.Q.-3.Q.: 2-23.5 days) after the first positive test and varying follow-up CTs (75% of CT scans within 114 days after the first positive test). In 7 patients, pre-existing pulmonary diseases, including pulmonary fibrosis (n = 3) and emphysema (n = 3), were present. The findings in the initial CT scan were grouped in 37% (10/27) as RSNA Category 1 and in 33% (9/27) as CO-RADS Category 4/5, indicating a typical pattern of lung involvement by SARS-CoV-2. In the follow-up CT scans, we found newly emerging patterns of fibrotic-like changes in 30% (8/27) of the patients after a median of 50 days (range: 28-193 days) after the first positive test (see Fig. 3 and Supplementary Fig. 2).
Patients with new signs of fibrotic-like changes or progression of previously known lung fibrosis reported initial and recurrent fever, dyspnoea, cough and fatigue in 44% (4/9) of cases, whilst only one patient reported no recurrent symptoms (11%, 1/9). Overall, recurrent fever was the most frequent symptom (78%, 7/9), followed by ongoing cough (67%, 6/9) and fatigue (56%, 5/9). The radiological picture during the overall course in patients with evolving signs of fibrotic-like changes showed in most cases an organising pneumonia pattern (89%, 8/9) after an initial inflammatory phase. In two out of three patients with pre-existing pulmonary fibrosis, we observed changes that could point towards an acceleration of the pre-existing lung fibrosis (see Supplementary Fig. 2). Moreover, we found in two patients the presence of a new SARS-CoV-2-like infiltrate after initially unremarkable radiology (one patient with organ transplantation, one patient with allogeneic stem cell transplantation), and a significant rebound after distinct initial improvement in one patient. New signs of fibrotic-like changes appeared more frequently in the group of patients with B cell deficiency (37%, 7/19) compared to patients with a solid organ transplant (12%, 2/17). Therapeutic approaches and outcome The majority of patients (81%, 29/36) received at least one antiviral agent (nirmatrelvir/ritonavir, remdesivir, or molnupiravir), 31% (11/36) received two antiviral agents, either in combination or successively, and 39% (14/36) were treated with a combination of an antiviral agent and monoclonal antibody preparations. Sotrovimab was the main monoclonal antibody therapy used (16/36). Antiviral treatment for a five-day course, as suggested for immunocompetent patients, failed to achieve sustained viral suppression in 76% (16/23) of cases. Repetition of the five-day antiviral course, including with an alternative antiviral substance, was successful in one of five cases (20%). Prolonged antiviral therapy for a median of 10 days was administered in certain cases (9/36), especially in patients with previous failure of short-course antiviral therapy. In patients treated with a prolonged course of antiviral therapy (> 5 days), a drop in viral load in the collected specimens was detected, with a sustained clinical response in 4/9 cases (2/9 clinical treatment success without virological data, 3/9 treatment failure; see Supplementary Fig. 3). Patients with clinical treatment success under prolonged therapy received medication for a longer time (6/9, median = 20.5 days, range: 10-61 days) than patients with treatment failure (3/9, median = 10 days, range: 7-10 days). However, in one patient with prolonged antiviral therapy, we recognised recurrent fever and reoccurrence/progression of pulmonary infiltrates even under therapy, with delayed reconvalescence. The time of detectable viral replication after the start of antiviral treatment was shorter in patients receiving two antiviral agents (median 15 days) than in patients receiving only one antiviral agent (median 31 days) (see Supplementary Fig. 4).
Overall, the all-cause in-hospital mortality rate was 3% (1/36), taking into account that 4 patients had to be excluded because of death before giving consent. Of all 5 intra-hospital deaths, only one was associated with COVID-19 and secondary bacterial superinfection with sepsis and multi-organ failure. Four additional patients died after hospital discharge between the onset of infection and May 2023, with all four deaths not being directly related to COVID-19. No additional long-term oxygen therapy had to be initiated. The median time between the onset of symptoms and the last known date alive for all patients was 167.5 days (min = 41 days, 1.Q.-3.Q. = 97.5-313 days). Discussion The main findings of the current study are: first, persistent viral replication over months occurs in immunocompromised patients, also in the Omicron era and especially after B cell depletion. Second, persistent infection may lead to slowly progressing pulmonary changes and should be suspected in immunocompromised patients suffering from recurrent fever, cough, dyspnoea and fatigue. Third, viral replication can be limited to, or predominantly occur in, the lower respiratory tract, and therefore diagnostic specimens taken only from the upper respiratory tract may miss it. Fourth, clinical symptoms and radiological changes are at least partially reversible under antiviral therapy, with prolonged and combined antiviral treatment being more often associated with sustained clinical improvement than short monotherapy. Whilst prolonged viral replication of SARS-CoV-2 has already been described for previous variants as well as in case reports for Omicron [2,22,23], we would like to highlight the potential of a prolonged COVID-19 course in patients with immunosuppression leading to subacute or chronic complications or sequelae. The relevant number of patients in this monocentric study suggests a higher prevalence of this probably underdiagnosed facet of the disease than case reports in the recent literature may indicate. In the Omicron era, in which mortality rates are lower than under previous variants, morbidity as the leading factor has to be taken more seriously, especially in this vulnerable patient collective. Two previous Italian and Japanese case series investigated the combination of (two) prolonged antiviral agents plus a monoclonal antibody in patients with ongoing viral replication and were able to interrupt SARS-CoV-2 replication effectively, even if long-term follow-up data on these patients are still missing [24,25]. We need further studies to derive the narrowest still-effective therapy regimen. Unlike previous variants, the Omicron variant of SARS-CoV-2 predominantly replicates in the upper airways [26,27]. In contrast to this, we found a shift to the lower respiratory tract in some patients, with loss of detectable viral RNA in the upper respiratory tract. Previous studies have highlighted the possibility of intra-host evolution with the emergence of variants, especially in immunocompromised hosts, that might be adapted to antiviral treatment [28][29][30]. Whether a change in the tropism of the virus in our cases was the reason for the observed shift in the replication site is not yet clear and has to be further investigated. Also, the emergence of antiviral resistance mechanisms should be taken into account [31].
In immunocompetent hosts, pulmonary inflammation is induced in the first two to three weeks of SARS-CoV-2 infection, which eventually leads to scarring and chronic changes thereafter [32]. In contrast, in immunocompromised hosts, our data suggest subacute progressive pulmonary changes caused by prolonged or ongoing SARS-CoV-2 replication. The mechanism behind the higher frequency of fibrotic-like lung changes in patients with B cell depletion compared to patients with organ transplantation is not yet clear. The more intense, T cell-involving immunosuppression in patients with organ transplantation may suppress the inflammatory processes that can lead to the development of fibrotic changes. Anti-fibrotic effects have, for instance, been discussed for mycophenolate mofetil [33]. This would suggest that at least a relevant part of the observed lung changes is linked to immunological effects, not to viral replication itself, as seen for COVID-19 in immunocompetent patients [34,35]. Another explanation might be that patients with B cell-depleting therapies have underlying diseases that predispose to interstitial lung diseases (e.g., ANCA-associated vasculitis, scleroderma) or have therapeutic regimens with drugs or radiation therapy that have pulmonary toxicities, rendering the lung more vulnerable to virus-induced changes. Further radiological characterisation is necessary to review this, especially in the light of changes brought by the emergence of the Omicron variant. Strengths Our study has several strengths. First, we present a relevant number of immunocompromised patients with ongoing viral replication. A significant proportion of the patients in our cohort had follow-up CT scans of the lung, making an assessment of the time course of the occurrence and recurrence of lung changes feasible. In addition, many follow-up SARS-CoV-2 PCR test results were available, especially paired samples from the lower and upper respiratory tract, enabling a more detailed analysis of the special risk group of immunocompromised patients. Furthermore, for many patients, long-term follow-up data were available, with a median time of 167 days. This is especially important as we observed relapses of SARS-CoV-2 replication, which could be missed if the follow-up time is too short. Limitations Limitations of our study (apart from those inherent to the retrospective observational design) include the absence of a control group. Therefore, the relative risk of ongoing replication or of the development of chronic lung conditions is not assessable. The same holds true for the impact of antiviral therapy on the replication pattern and the course of fibrotic lung changes. A selection bias probably results from the exclusion of patients that died before giving consent and from the recruitment via our ID consultation service.
Deaths of all patients who died after giving consent were considered not COVID-19-related. Whether the previous ongoing replication of SARS-CoV-2 played a role in a reduction of health in general, thereby leading to an acceleration of comorbid conditions, is difficult to extrapolate. Second, we tried to describe the time course of radiological changes in our patients based on the repeated CT scans. Still, the exact time point of the occurrence of changes is hard to identify because of the varying frequency of radiological imaging. Third, we only included patients with severe immunosuppression, such as solid organ transplant recipients or B cell-depleted patients, who make up the majority of patients with ongoing SARS-CoV-2 replication in our tertiary care setting. Patients with other types or levels of immunosuppression could also be affected by prolonged SARS-CoV-2 replication. Fourth, the reporting of symptoms is highly subjective (besides fever). The investigated population was at the same time highly comorbid, with multiple other possible causes of fever, cough or fatigue, making the identification of COVID-19-related symptoms challenging. Nevertheless, to reduce over-reporting of unrelated symptoms, a thorough individual case review was performed, and symptoms were only considered SARS-CoV-2-related if no other plausible condition was identified. Implications Our results have implications for daily practice. In patients with substantial immunosuppression (especially B cell depletion) reporting prolonged symptoms like fever, cough or dyspnoea, persistent SARS-CoV-2 infection should be considered. To exclude a suspected prolonged replication, lower respiratory tract samples are essential, as SARS-CoV-2 replication may be low or absent in the upper airways. A high delta in viral load between the lower and the upper respiratory tract can help to consolidate the diagnosis of ongoing replication of SARS-CoV-2 in the lower respiratory tract. Continuous detection of a high viral load (Ct-value < 25) in respiratory material, especially from the lower respiratory tract, should be taken seriously. Further radiological (re-)imaging should be initiated to detect or exclude chronic lung changes, even after initially unremarkable lung imaging, as these pulmonary changes may develop over time.
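The diagnostic reasoning of this paragraph can be condensed into a simple screening heuristic. The thresholds below (a Ct-value < 25 as a high load, and the roughly eight-step Ct difference between compartments reported above) are taken from the text, but the rule itself is our illustration and not a validated clinical algorithm.

from typing import Optional

def suspect_lrt_replication(ct_swab: Optional[float], ct_bal: float,
                            high_load_ct: float = 25.0,
                            compartment_delta: float = 8.0) -> bool:
    """Flag suspected ongoing replication in the lower respiratory tract (LRT).

    Triggers on a high viral load in broncho-alveolar lavage (low Ct), or on a
    large load difference versus the naso-oropharyngeal swab, including the
    case where the swab is already negative (ct_swab is None).
    """
    if ct_bal < high_load_ct:
        return True
    if ct_swab is None:              # upper airway sample negative,
        return ct_bal < 40.0         # but virus still detectable in BAL
    return (ct_swab - ct_bal) >= compartment_delta

# Example mirroring the median values reported above (swab Ct 34, BAL Ct 26):
print(suspect_lrt_replication(34.0, 26.0))  # True -> consider (re-)imaging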
As many patients, especially after B cell depletion, fail to mount an effective immunity against SARS-CoV-2, antiviral therapy is a major part of most therapeutic approaches towards persistent SARS-CoV-2 infection. Therefore, it should be considered at any point in time of the infection. Our data suggest that treatment is most effective in combination and over a prolonged period of time. As a rebound of infection is possible after discontinuing antiviral treatment, follow-up testing is essential. The effect of antiviral treatment on chronic pulmonary changes is not yet conclusive. However, we identified some cases with a pronounced regression of pulmonary changes in CT scans after initiation of antiviral therapy, which was accompanied by clinical improvement. Conclusion With the rise of the Omicron variant, we see a shift in the clinical course of infection in immunocompromised patients from fulminant towards subacute prolonged infection. Recurrent fever, cough, dyspnoea and fatigue may be unspecific indicators of ongoing SARS-CoV-2 replication. The diagnostic work-up should include pulmonary imaging and virological samples from the lower respiratory tract, as persistent replication may occur predominantly in this compartment and may lead to progressive fibrotic changes of the lung. Most patients under substantial immunosuppression fail to mount a prompt and robust immunity against SARS-CoV-2. Therefore, antiviral treatment is key to most therapeutic approaches. Our data may point towards a prolonged treatment duration and a possible benefit of a combination of antiviral agents, which may result in sustained termination of SARS-CoV-2 replication, clinical improvement, as well as regression of inflammatory and fibrotic pulmonary changes. Fig. 1 Length of SARS-CoV-2 positivity in patients. Black dots represent the time points when SARS-CoV-2 PCRs were performed. Median number of days between the first positive and the last positive sample according to group: B cell-depleted: 41 days; organ-transplanted: 36 days. Fig. 2 Differences in viral load detection according to sample type by patient. Differences in detected viral load according to sampling location in patients with a broncho-alveolar lavage positive for SARS-CoV-2. Naso-oropharyngeal swabs were paired according to sampling date ± 2 days before and after the date of the broncho-alveolar lavage. Significance testing was performed using the paired Wilcoxon test with exact distribution. The dashed line indicates the limit of detection. Abbreviation: Ct, cycle threshold. Fig. 3 Two exemplary cases of occurrence of radiological changes and the concomitant pattern of SARS-CoV-2 detection during the time course. A and B represent two different exemplary patients with their
2023-12-16T12:39:22.170Z
2023-12-14T00:00:00.000
{ "year": 2023, "sha1": "2c87ebfe45a0f2e801f2bdeaf2e64277366b3413", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s15010-023-02138-0.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "cf02a94023e908c11b63292d22ce285422fac3b6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238046500
pes2o/s2orc
v3-fos-license
Evaluation of aphrodisiac potential on the basis of semen motility of poultry by feeding the extract of Madhuca longifolia (flowers) The present study was conducted to investigate the aphrodisiac potential of an ethanolic extract of Madhuca longifolia (flowers) in male poultry birds. The birds treated with Madhuca (flower) extract showed a positive increase in the value of the semen parameters (general and progressive motility). General and progressive motility also increased more significantly in group 2 in comparison to group 1 and the control group. On the basis of these findings, we can say that Madhuca has positive aphrodisiac potential. The 1/10 LD50 dose of Madhuca was less effective in comparison to the 1/5 LD50 dose, and the 1/5 LD50 dose produced higher motility in comparison to the control group of male poultry birds. The 1/10 LD50 dose of Madhuca was less effective than the 1/5 LD50 dose on the 7th, 14th and 28th days, over which the aphrodisiac potential increased. Madhuca flowers provide energy that helps in increasing semen motility, so that semen quality becomes high, and murine, which is toxic, is absent from the flower. Introduction At present, the regular use of pesticides is increasing day by day, particularly in third-world countries, and this continues to decrease fertility [1]. India uses approximately 85,000 tonnes of pesticides annually, and an increase of 8% is expected every year. The residues of such environmental pollutants remain in soil, water, air, and feed and fodder items for a long period, contaminating them [2]. Chickens are especially vulnerable to pesticide toxicity because poultry houses are dusted with pesticides, which decrease all the semen parameters related to aphrodisiac potential. Exposure of poultry to chemical pesticides causes health consequences for poultry, contributing to great economic loss, while also posing a potential threat to public health due to the presence of pesticides in poultry meat; ample evidence exists to suggest that the use of pesticides on crops, in store houses, and in poultry houses, and their injudicious application in sprays or dipping solutions to prevent ectoparasites, leaves behind residues causing serious health effects [3,4]. Chronic exposure of chicks to small amounts of OPP leads to deleterious effects on the metabolism, immune system and reproductive system of birds [5]. In fact, dairy cattle reared on drinking water contaminated with sewage showed reduced reproductive performance [6]. The exposure of males to pesticides can adversely affect pregnancy outcome through a direct genetic or epigenetic effect of their residues on the male germ cells, either during spermatogenesis in the testis or sperm maturation in the epididymis, or by the direct exposure of the oocyte during fertilization to the pesticide residues in the seminal plasma [7,8]. There is growing evidence regarding the adverse impact of certain pesticide residues on the reproductive system, and such pesticide residues are known as "reproductive toxicants" or "endocrine disrupters". These toxicants modulate and/or disrupt the reproductive hormone milieu by acting at a variety of sites, including the hypothalamus, pituitary and reproductive organs [9]. During the course of foetal or early neonatal life, any disruption of the differentiation/multiplication of Sertoli cells in the fetal testis by environmental estrogens is detrimental, as the capacity of the adult to produce sperm is determined by the Sertoli cells [10,11,12]. Materials and Methods The flowers of Madhuca longifolia were collected from the campus of N.D.
University during the months of May and June. The plant material was identified and authenticated with the help of scientists of the College of Horticulture. After proper identification, the flowers were shade-dried, powdered, passed through a 40-mesh sieve, and stored in a closed vessel for further use. Madhuca longifolia flowers were used to prepare the ethanolic extract. For this purpose, absolute alcohol (95% ethanol) was used to prepare the extract. The percent yield (w/w) of the Madhuca longifolia flower ethanolic extract was calculated as 42.0%; a percent yield of 45% (w/w) with 95% ethanol has been reported for Madhuca longifolia flowers [13]. Experimental design The experimental design for this study is shown in Table 1. Twenty-four male birds of about 10-12 months of age were randomly divided into three groups, i.e., A, B, and C. Each test group comprised eight birds, along with the control, as mentioned in the table. Doses were given in drinking water at approximately 1/10th and 1/5th of the LD50 of the alcoholic extract of M. longifolia (flowers). Semen collection The cocks were kept at an ambient temperature of 30°C and a relative humidity of 65% during the study period. One month prior to the commencement of semen collection, all cocks were kept in individual cages (32×34×53 cm). All cocks were fed commercial poultry pellets consisting of 18% crude protein, and water was provided ad libitum. Semen samples were collected once a week (Monday). Semen collection was done once a week because the time required for semen to pass from the testes to the distal region of the ductus deferens varies from 1-4 days [14]. First, the cloacal area was cleaned. The back and tail feathers and the abdominal region were stroked gently and repeatedly, which resulted in erection of the phallus. Semen was ejaculated after slight pressure was applied to the inverted cloaca. The semen was carefully collected in a test tube and placed in a water bath maintained at 37°C prior to evaluation. Semen evaluation Sodium citrate and egg yolk were prepared and used as an extender in this study. The volume of the ejaculated semen from each cock was measured using a graduated test tube. The pH was determined using a pH meter. The evaluation of sperm motility in the diluted semen was conducted at 400X magnification on a warm stage. A drop of diluted semen was placed on a pre-heated slide and a cover slip was used to cover the slide; the cover slip helped to prevent overflow, allowed a uniform film to form, and prevented quick drying of the semen [16]. The colour and consistency of the semen were evaluated visually, including samples that were creamy, grainy, bloody, watery, or contaminated. The following parameters were recorded for the evaluation of the aphrodisiac effect of the alcoholic extract of Madhuca longifolia (flowers) in male poultry birds: general and progressive motility. Statistical study Statistically, analysis of variance was applied in this study. A completely randomized design was used, and significant differences were analysed at the 5% level of significance. The comparative study of group 1 and group 2 against the control group was done following Das [17].
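As an illustration of the analysis of variance described above, a one-way ANOVA across the three groups could be run as follows; the motility values are invented for demonstration and are not the study's data.

from scipy.stats import f_oneway

# Hypothetical percent-motility readings for the three groups (invented data)
control = [62, 58, 60, 61, 59, 63, 60, 62]
group_1 = [68, 70, 66, 69, 71, 67, 70, 68]   # 1/10 of LD50
group_2 = [78, 80, 76, 79, 81, 77, 80, 78]   # 1/5 of LD50

f_stat, p_value = f_oneway(control, group_1, group_2)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 indicates a significant difference among the groups,
# matching the 5% significance level used in the study.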
In group 2, where the 1/5 LD50 dose of Madhuca was given, the result was highly significant and positive in comparison to group 1. The effect of Madhuca after 28 days is presented in Table 4. At 28 days, both doses of Madhuca continued to increase the values of the various semen parameters (general and progressive motility). The animals of group 2 exhibited a more significant positive response than those of group 1. Comparing the 1/10th and 1/5th LD50 doses of Madhuca, the 1/5th dose showed the better result, i.e. the better aphrodisiac effect. These findings suggest that Madhuca has a positive aphrodisiac effect [20]. Similar findings were observed at 14 and 28 days, with a continuous improvement recorded in the various semen parameters. Conclusions On the basis of these findings we can say that Madhuca has positive aphrodisiac potential. There is scope for further investigation of the Madhuca flower extract.
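The statistical procedure described in the Statistical study section (a completely randomized design analysed by one-way ANOVA at the 5% significance level) can be illustrated with a minimal sketch. This is an assumed implementation, not the authors' code, and the motility values below are hypothetical placeholders rather than study data.

```python
# Minimal sketch of the completely randomized design ANOVA described above.
# The general-motility values (%) are hypothetical placeholders, not study data.
from scipy import stats

control = [62.1, 60.4, 64.0, 61.8, 63.2, 59.9, 62.7, 60.8]   # untreated birds
group_1 = [66.3, 65.1, 67.8, 64.9, 66.0, 65.5, 67.1, 66.8]   # 1/10 of LD50
group_2 = [71.2, 70.5, 72.9, 69.8, 71.6, 70.1, 72.3, 71.0]   # 1/5 of LD50

f_stat, p_value = stats.f_oneway(control, group_1, group_2)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# Differences are declared significant at the 5% level when p < 0.05.
```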
2021-09-28T23:14:19.428Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "844f84f1cff22756eaea34d370bac19f9e2d7528", "oa_license": null, "oa_url": "https://www.phytojournal.com/archives/2021/vol10issue4/PartA/10-3-103-948.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "3bc8e21b672d5c441b2798489a6a371541749f9d", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
252742457
pes2o/s2orc
v3-fos-license
Systematic Down-Selection of Repurposed Drug Candidates for COVID-19 SARS-CoV-2 is the cause of the COVID-19 pandemic which has claimed more than 6.5 million lives worldwide, devastating the economy and overwhelming healthcare systems globally. The development of new drug molecules and vaccines has played a critical role in managing the pandemic; however, new variants of concern still pose a significant threat as the current vaccines cannot prevent all infections. This situation calls for the collaboration of biomedical scientists and healthcare workers across the world. Repurposing approved drugs is an effective way of fast-tracking new treatments for recently emerged diseases. To this end, we have assembled and curated a database consisting of 7817 compounds from the Compounds Australia Open Drug collection. We developed a set of eight filters based on indicators of efficacy and safety that were applied sequentially to down-select drugs that showed promise for drug repurposing efforts against SARS-CoV-2. Considerable effort was made to evaluate approximately 14,000 SARS-CoV-2 assay data points for FDA/TGA-approved drugs and provide an average activity score for 3539 compounds. The filtering process identified 12 FDA-approved molecules with established safety profiles that have plausible mechanisms for treating COVID-19 disease. The methodology developed in our study provides a template for prioritising drug candidates that can be repurposed for the safe, efficacious, and cost-effective treatment of COVID-19, long COVID, or any other future disease. We present our database in an easy-to-use interactive interface (CoviRx), which was also developed to enable the scientific community to access the data of over 7000 potential drugs and to implement alternative prioritisation and down-selection strategies. Introduction COVID-19 is the most devastating natural calamity of the 21st century. It emerged in December 2019 and in March 2020 was declared a pandemic by the World Health Organisation (WHO) [1]. Since the beginning of the pandemic, SARS-CoV-2 has infected over 620 million people and has claimed more than 6.5 million lives across the world, and up to 26.7 million excess deaths were reported as of September 2022 [2,3]. Most of these deaths have resulted from the complications of the SARS-CoV-2 infection, such as systemic inflammatory response syndrome (SIRS); organ dysfunction and failure (particularly acute respiratory and renal failure); and thrombotic, neurological, and cardiovascular complications [4][5][6]. A significant proportion of patients also endure symptoms long after the acute viral infection phase has passed, so-called "long covid" [7]. The urgency of the situation has led to extraordinary efforts from health care workers and biomedical scientists across the globe to seek approaches to mitigate the severity and impact of the disease. A range of vaccines using different technologies, such as viral vector, RNA, and protein subunit vaccines, are now approved for use and have been rolled out worldwide [8][9][10]. These vaccines have successfully reduced the risk of transmission, the severity of the disease, and hospitalisations [9]. However, none of the vaccines have proven to be entirely effective at blocking infection by the virus, and breakthrough infections that lead to severe disease remain [11]. The available vaccines are also contraindicated or less effective in specific patient populations, such as the immunocompromised.
Therefore, we need to continue to seek additional solutions to reduce the health, and consequently societal, impacts of COVID- 19. Several antiviral drugs have been repurposed or developed to reduce SARS-CoV-2 replication, including small molecule antivirals remdesivir, molnupiravir, and nirmatrelvir (in combination with ritonavir to extend plasma residence time) [12]. Currently, remdesivir is the only drug molecule that has received approval from the FDA, although its intravenous route of administration has dramatically diminished its application. While molnupiravir was initially shown to be effective in reducing hospitalisation or death by almost 50%, the final analysis revealed that molnupiravir was only 30% effective in the MOVe-OUT study [13]. Moreover, there are some concerns over its potential mutagenicity to the host [14]. While ritonavir-boosted nirmatrelvir (paxlovid) shows great promise in reducing hospitalisation or death, it is prone to complex drug-drug interactions due to the ritonavir component of paxlovid [15]. Four monoclonal antibodies directed toward epitopes of the spike protein receptor-binding domain (RBD) of SARS-CoV-2 have received emergency use authorisation from the US Food and Drug Administration (FDA): bamlanivimab plus etesevimab, casirivimab plus imdevimab, sotrovimab, and tixagevimab plus cilgavimab. There is concern that these monoclonal antibodies will have reduced activity against future variants with different spike protein structures or none at all. Indeed, bamlanivimab plus etesevimab and casirivimab plus imdevimab have already been paused by the FDA due to reduced activity against the omicron variant [16,17]. The systemic inflammatory response to the virus is also being treated using immunomodulatory drugs that include corticosteroids, monoclonal antibodies, IL-1 receptor antagonists, and Janus kinase inhibitors [18,19]. Hence, there is a prescient need to identify further effective treatments to neutralise the virus, which may be used alone or in combination with the currently available therapies. A promising strategy to find potential COVID-19 treatments is to repurpose drugs that are already approved for other diseases by major regulators and have established and proven safety profiles. This can reduce the cost and time of drug development. Indeed, a range of studies have evaluated the potential antiviral effects of compounds in high throughput screening assays. Some of these drugs have progressed to clinical trials, with promising results for the common anti-depressant, fluvoxamine. Whilst fluvoxamine reduced hospitalisation by~30%, there remains room to find more effective drug treatments. From this starting point, our 'Drug Selection Committee (DSC)', comprising experts across a range of relevant disciplines, assembled a database of drugs for repurposing against SARS-CoV-2. Together, we decided on specific criteria to down-select the most promising drug candidates. The methodology we developed provides a template for prioritising drug candidates that are safe, efficacious, and cost-effective for COVID-19 and that may be useful in tackling future diseases. To share the findings of our efforts with the world, we also created an online database (CoviRx, https://www.covirx.org/ (accessed on 3 October 2022)) which is described in a matching publication [20]. Herein, we focus on the process used to down-select the top drug candidates, present our 'top 12 candidates, and analyse the potential mechanisms of these drugs. 
Assembly of the Data Due to the lack of a specific database for drug repurposing against SARS-CoV-2, we assembled a database that is publicly available as CoviRx (https://www.covirx.org/ (accessed on 3 October 2022)). We took the Compounds Australia Open Drug collection as our starting point to ensure the ready availability of selected compounds in assay-ready formats [21]. This collection comprises ~8000 compounds derived from several commercial collections (Figure 1). Many of these compounds have regulatory approval from the U.S. FDA or Australian Therapeutic Goods Administration (TGA), while others are under clinical investigation or are approved in other jurisdictions. Metadata covering compounds' physicochemical properties and biological data such as the target and pathways, mechanism of action, IC 50 values for the original clinical indication, pharmacokinetics, safety, etc., were extracted from a range of online sources, as detailed in Table 1. After data standardisation and the removal of duplicate entries (see Methods), we were left with a final dataset of 7817 compounds (Figure 1). From here, the results of nine in vitro antiviral assays were mapped to the database [22][23][24][25][26][27][28][29]. These assays are summarised in Section 2.3.
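As an illustration of the curation step just described (standardisation and duplicate removal before the assay results are mapped), the following is a minimal sketch of how such a merge might be performed. It is an assumed implementation, not the authors' pipeline, and the file names and column names ("inchikey", "name") are hypothetical placeholders.

```python
# Minimal sketch of merging per-supplier compound tables and collapsing duplicates on InChIKey.
# File and column names are hypothetical placeholders, not the CoviRx schema.
import pandas as pd

suppliers = [pd.read_csv(p) for p in ["collection_a.csv", "collection_b.csv"]]
compounds = pd.concat(suppliers, ignore_index=True)

# Standardise the identifier, then keep one consolidated record per InChIKey.
compounds["inchikey"] = compounds["inchikey"].str.strip().str.upper()
deduplicated = (
    compounds.sort_values("name")
    .groupby("inchikey", as_index=False)
    .first()   # first non-null value per column; real curation would merge fields more carefully
)
print(len(compounds), "raw records ->", len(deduplicated), "unique compounds")
```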
The CoviRx Database Reveals Host-Cell Pathways To assess whether drugs targeting specific pathways or molecular targets are more likely to show activity against SARS-CoV-2, an enrichment analysis was performed on words describing the original target of each active compound in the CoviRx database. After multiple-testing correction to control the false discovery rate at α < 0.05, a total of nine informative terms were found to be significantly overrepresented amongst SARS-CoV-2 inhibiting compounds (Figure 2). It is striking that none of these terms were suggestive of a viral target; instead, they appear to point to host-cell pathways that the virus depends upon for survival and replication. This is perhaps consistent with the general lack of success in repurposing existing antiviral drugs against COVID-19 [47,48]. Examination of the individual active compounds associated with each enriched term highlights substantial overlap, implying that only a few host-cell pathways contribute to the signal identified here. In particular, four of the thirteen Phosphoinositide 3-kinase (PI3K) inhibitors that show SARS-CoV-2 activity are annotated as targeting the mammalian target of rapamycin (mTOR). Likewise, there is extensive overlap between the active compounds targeting the growth factor receptor kinases c-Kit, Vascular endothelial growth factor receptor (VEGFR), and epidermal growth factor receptor (EGFR).
The enrichment of the apoptosis term comprises contributions from the growth factor receptor kinases, PI3K/mTOR axis, cyclin-dependent kinase (CDK) inhibitors, and from epigenetic modulators, possibly suggestive of a common mechanism linking all these diverse molecular targets. Down-Selection of Drugs Based on Available Data and Regulatory Approval Status Using an iterative approach, the CoviRx database was subjected to a series of filtering criteria to identify those with potential for repurposing against SARS-CoV-2. The first filter sought to identify compounds for which at least one in vitro assay result was present [22][23][24][25][26][27][28][29]. The assays are summarised in Figure 3. The total number of drugs that failed this filter was 4278, and these compounds were excluded from further analysis. The next filter applied was the removal of drugs that are not approved by major regulatory bodies, as this would help identify compounds with a faster regulatory acceptance pathway. About 57% of the compounds were approved by the U.S. FDA (39%) and/or Australia (18%). Additionally, investigational drug molecules were excluded from our study as these would require additional time to develop, including complete pharmacokinetic, safety, and pharmacological profiling [49]. Figure 3 summarises the types of assays found and the number of drugs that passed these first two filters. The remaining drugs were taken forward as input for implementing other filters based on the likely efficacy and safety of the compounds. Down-Selection of Drugs Based on Indicators of Efficacy and Safety Firstly, the drugs were down-selected based on their potential to provide a new treatment with in vivo efficacy against the virus (Figure 4, Flower 1). Each sub-filter and the need for its implementation is discussed in more detail under the Methods section. To ensure that we were identifying new treatments and to avoid duplicating previous work, in Filter A, we removed drugs from drug classes that had already been tested in COVID-19 clinical trials as antiviral treatments, resulting in 599 remaining drugs.
Filters B-D removed drugs that were unlikely to be efficacious and/or safe in vivo based on their IC 50 against COVID-19 being >10* the original indication, selectivity index (CC 50 /IC 50 ), and drugs likely to provide false-positive results in vitro (CAD/PAINS filter) [50][51][52][53]. A subset of 465 drugs passed all these filters. Next, the drugs were further down-selected based on potential safety and ease of use. We selected drugs that are administered via oral or inhalation routes as this would enable easy use at home (Filter E), those that are reasonably safe in pregnancy (Filter F), that carried no black box warnings (Filter G), and we removed any remaining drugs with unusual indications or that were diagnostics (Filter H). This sequential down-selection resulted in 214 drug molecules (Figure 4, Filters E-H). Discussion Following the application of robust filtering methodology to the drugs in the curated database, the final output was 214 potential drug candidates for COVID-19. Once these top 214 drugs were in place, they were sorted from those most likely to least likely to show antiviral activity based on their computed activity rank score. This resulted in the top 15 candidates being selected (Table 2). The team then evaluated the top 15 drugs in greater detail to determine promising candidates for repurposing. The top 15 drugs, along with their rank score, original indication, mechanism of action, and associated targets, are represented in Table 2. It is interesting to note that the drugs in this list emerged from the down-selection methodology alone, and no attempts were made to select drugs based on their mechanisms. Next, we examined each compound thoroughly to determine its likelihood of providing a safe and effective treatment. We also decided to prioritise the evaluation of drugs belonging to diverse pharmacological classes. We noted that several histamine receptor antagonists were in the top list of drugs, so we evaluated the potential efficacy and safety profiles of all antihistamine drugs. This led to the selection of meclizine for further use.
Moxidectin is a closely related analogue of ivermectin that has been extensively debated and tested as a potential drug for COVID-19. The results suggest it has a poor IC 50 against SARS-CoV-2 compared to ivermectin. Hence, it was not considered in further studies [54][55][56][57]. Deflazacort, a corticosteroid, was left out, as several corticosteroids are already in clinical trials and are being used clinically due to their anti-inflammatory properties in treating COVID-19 complications [58,59]. Cefaclor and other cephalosporin antibiotics were strongly considered but ultimately not included. Cefaclor performed the worst among a series of antibiotics against SARS-CoV-2 in the drug screening. Other cephalosporins were only active in in vitro antiviral assays at concentrations expected to be unachievable in in vivo screening [60]. There was insufficient literature available to consider nifurtimox for repurposing against COVID-19. Several anticholinergic drugs were near the top of the list, including procyclidine and tolterodine. These were considered undesirable due to anticholinergic side effects, which may be troublesome, particularly in the elderly. In addition, the drug concentrations required to inhibit the virus were high compared to plasma concentrations when used for their current therapeutic indication. Though cysteamine has some activity against SARS-CoV-2 in vitro [61,62], the concentration at which it exhibits its antiviral activity is too high, and attaining similar plasma concentrations in human patients would be difficult and likely result in off-target toxicities. Selective serotonin reuptake inhibitors (SSRIs) are worth studying due to their immunomodulatory effects and their action against SARS-CoV-2, as fluvoxamine has shown good inhibition in tissue models [63]. As dapoxetine and mianserine belong to a similar category of drugs, they might likewise help to prevent the cytokine storm, thereby mitigating COVID-19-associated symptoms. In addition, rilpivirine is an antiretroviral drug. Hence, further evaluation of these drugs against SARS-CoV-2 should be considered. Additionally, drugs that were filtered out but had an activity rank score < 0.2 were further investigated to see if they could be reconsidered for repurposing despite their shortcomings. Drugs that were reconsidered and their reason for initial removal are highlighted in Table 3. We reviewed each drug individually, and a decision was made not to pursue any of the drugs except the mTOR inhibitors and the NK-1 receptor antagonist rolapitant. Among the mTOR inhibitors, only everolimus was chosen, as zotarolimus is used in stents while pimecrolimus is administered topically. Although everolimus has a black box warning, this is only for skin cancer seen after long-term administration as an immunosuppressant. A short course of everolimus for COVID-19 may not cause such harmful effects to patients [64]. Additionally, there are convincing reasons available to repurpose mTOR inhibitors for use in COVID-19 due to their immunomodulatory and antiviral properties [64][65][66][67][68]. Similar arguments could also be made for rolapitant, a neurokinin-1 receptor (NK-1) antagonist. Substance P and NK-1 receptors have been hypothesised to play an aberrant role in the cytokine storm observed in COVID-19 patients [69][70][71]. A report published on a preprint server claims that aprepitant, a congener of rolapitant, has beneficial effects in COVID-19 patients when combined with dexamethasone [70].
Additionally, a clinical case report disclosed positive effects of aprepitant in a patient with post-acute COVID-19 syndrome [72]. These results encouraged us to examine further whether rolapitant has antiviral activity against SARS-CoV-2. For the final prioritisation of the drugs for repurposing, we sought to determine whether the concentration required for an antiviral effect would be achievable in plasma after administration of typical therapeutic doses. We collected the single-dose maximum plasma concentration (Cmax) and average steady-state plasma concentration (Cssave) values obtained after administration of the current clinical doses. We then adjusted those values for each drug based on available plasma protein binding ratios (Table 4). Another consideration we made in selecting the final list was to look for other drugs belonging to the same classes as the serotonin receptor antagonists, antihistamines, and tyrosine kinase inhibitors (TKIs) that were ranked in the top 15. Accordingly, ondansetron (antiemetic), cyclizine (antihistamine), cetirizine (antihistamine), and lapatinib (tyrosine kinase inhibitor) were included from these classes for final consideration. Once the free drug concentration calculations were done for all 12 drugs, we compared them with the free plasma drug concentrations that would be required to effectively inhibit the virus [50,73,74]. The top 12 drugs listed in Table 4 were selected for final screening in relevant ex vivo models of COVID-19. Database Assembly, Data Collection and Processing The database was assembled by first extracting compound structures and all available metadata from the original suppliers' documentation. Where chemical structures were not available from the suppliers' documentation, supplier catalogue numbers were converted to PubChem Compound IDs and InChiKey using the PubChem Identifier Exchange Service. ChEMBL, DrugBank, and PubChem identifiers were derived by the InChiKey lookup of the respective databases. Physicochemical properties were calculated using RDKit. RDKit was also used to identify pan-assay interference compounds based on established substructure filters [75]. Entries with unsuitable indications were removed, and duplicate entries were identified based on compound names or InChiKey and merged to a single consolidated entry. This resulted in a final set of 7817 compounds. SARS-CoV-2 Assay Data Collection and Activity Rank Score Calculation All the available SARS-CoV-2 assay data for the FDA/TGA approved drugs were extracted using the drug identifiers from ChEMBL and other published datasets (Figure 2) [22][23][24][25][26][27][28][29]. The 3539 drugs with 13,919 assay datasets were ranked for activity. These ranks were then converted to fractional ranks by dividing each rank by the total number of compounds assessed in that assay in our dataset. The assay rank score for each compound was the mean of these fractional ranks over all assays in which the compound was tested. Thus, the assay score for compound j was given by $\mathrm{score}_j = \frac{1}{N_j} \sum_{i} \frac{r_{i,j}}{n_i}$, where i runs over the N j assays in which compound j was assessed, r i,j is the activity rank of compound j in assay i, and n i is the number of compounds in the CoviRx database assessed in assay i. Filtration Methodology Drugs have various pharmacological properties and can sometimes cause severe, life-threatening effects. In addition, various chemical compounds either lack therapeutic action or are inactive. Apart from these, there are possibilities of obtaining false-positive results due to cationic and amphiphilic compounds (CAD) or pan-assay interference (PAINS).
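Before describing the individual filters, the activity rank score defined in the previous subsection can be illustrated with a minimal sketch. This is an assumed implementation, not the authors' code, and the assay and drug names are hypothetical placeholders.

```python
# Minimal sketch of the activity rank score: mean fractional rank over the assays tested.
# `assay_results` maps assay name -> {compound: activity rank}, rank 1 = most active.
from collections import defaultdict

def activity_rank_scores(assay_results):
    fractional = defaultdict(list)
    for assay, ranks in assay_results.items():
        n_i = len(ranks)                          # compounds assessed in assay i
        for compound, r in ranks.items():
            fractional[compound].append(r / n_i)  # fractional rank r_ij / n_i
    # Lower scores indicate compounds that are consistently more active across assays.
    return {c: sum(v) / len(v) for c, v in fractional.items()}

example = {
    "assay_A": {"drug_x": 1, "drug_y": 2, "drug_z": 3},
    "assay_B": {"drug_x": 1, "drug_y": 2},
}
print(activity_rank_scores(example))  # drug_x gets the lowest (best) score
```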
Hence, to obtain active drugs that are safe to use in healthy humans and pregnant women, a series of filters were used to further screen the drugs. All the filters and the reasons we used them are described briefly in Table 5.
Table 5. Sub-filters used for the study (filter type, description, and objective).
Clinical trials (A): The Clinicaltrials.gov database was used to search for drugs under clinical evaluation against SARS-CoV-2 infection. Simultaneously, the Tanimoto index was used to look for analogues of drugs in clinical trials against SARS-CoV-2, and these were also excluded from our study, since a similar scaffold produces a similar action. Objective: to prevent duplication of existing work.
CC 50 < 10 µM or SI < 10 (B): Compounds with a CC 50 value < 10 µM were considered toxic, while those with a CC 50 > 10 µM were deemed non-toxic; hence, drugs with CC 50 values below 10 µM were filtered out. In addition, the selectivity index (SI = CC 50 /IC 50 ) was determined, and an SI of 10 was taken as the minimum acceptable value. Objective: to filter out cytotoxic drugs.
COVID-19 IC 50 > 10× original indication (C): Drugs whose IC 50 against SARS-CoV-2 is more than ten times the IC 50 for their original indication would require high, typically toxic, doses to show an inhibitory effect. Objective: to filter out drugs with poor IC 50 values.
CAD/PAINS (D): We removed cationic amphiphilic drugs (CAD) that exhibit antiviral activity by inducing phospholipidosis rather than interacting with a specific target. We also removed compound classes that cause pan-assay interference (PAINS) [76]. Objective: to screen out false-positive results.
Route of administration (E): Drugs deliverable by oral or inhalation routes were considered in our study, as other routes of administration would limit applicability for the treatment of SARS-CoV-2 infection; hence, oral and inhalation drugs were retained and the rest were filtered out. Objective: to filter out drugs that are not orally bioavailable.
Pregnancy (F): Pregnant women with SARS-CoV-2 infection have been a subject of concern, as the present drugs approved for COVID-19 cannot be used to treat them. Drug pregnancy categories were obtained from the ARTG database, and category D and X drugs were removed. Objective: to remove drugs unsafe for use in pregnancy.
Black box warning (G): A black box warning refers to serious side effects [77]. Objective: to filter out drugs with black box warnings and so retain drugs that are safe to use.
Indication (H): Compounds with no pharmacological action are also present in the database; hence, all pharmaceutical aids, diagnostic agents, and supplements were filtered out. Objective: to retain pharmacologically active drugs.
ARTG = Australian Register of Therapeutic Goods.
Enrichment Analysis The target enrichment analysis was performed by first assembling a list of all words appearing in the original target annotation for the compounds in the CoviRx database for which SARS-CoV-2 assay data are available. This list was edited manually to remove non-specific terms that were not informative of the molecular target, resulting in a list of 441 target terms. For each term, we constructed a contingency table of active and inactive compounds that are and are not annotated to target the specified term, where we define active compounds as those with an assay rank score less than 0.1. Enrichment was assessed with the Fisher exact test and the resulting p-values were adjusted by the Benjamini-Hochberg procedure to control the false discovery rate at α < 0.05.
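The enrichment test just described can be sketched as follows. This is an assumed implementation rather than the authors' code: the data structures and field names are hypothetical, and the one-sided ("greater") alternative for testing overrepresentation is an assumption not stated in the text.

```python
# Minimal sketch of per-term Fisher exact tests with Benjamini-Hochberg FDR control.
# `compounds` is a list of dicts with hypothetical keys: 'score' (assay rank score)
# and 'targets' (a set of annotation terms).
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def term_enrichment(terms, compounds, active_cutoff=0.1, alpha=0.05):
    pvals = []
    for term in terms:
        a = sum(1 for c in compounds if c["score"] < active_cutoff and term in c["targets"])
        b = sum(1 for c in compounds if c["score"] < active_cutoff and term not in c["targets"])
        c_ = sum(1 for c in compounds if c["score"] >= active_cutoff and term in c["targets"])
        d = sum(1 for c in compounds if c["score"] >= active_cutoff and term not in c["targets"])
        _, p = fisher_exact([[a, b], [c_, d]], alternative="greater")  # overrepresentation in actives
        pvals.append(p)
    # Benjamini-Hochberg adjustment controls the false discovery rate at alpha.
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return {t: {"p": p, "p_adj": q, "significant": bool(r)}
            for t, p, q, r in zip(terms, pvals, p_adj, reject)}
```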
Conclusions COVID-19 continues to cause havoc across most parts of the world [3,78]. While vaccines have significantly reduced disease severity and mortality, vaccine hesitancy, poor coverage, and a lack of efficacy against emerging variants of concern (VOCs) necessitate developing alternate clinical tools to deal with the virus and its future VOCs. There is an urgent and critical need to identify or develop drugs that can perturb viral replication in target tissues and reduce or prevent long-term health complications. Novel drug discovery and development suffer from certain drawbacks such as a high attrition rate and the significant time required to progress a drug into the market. At the same time, repurposing approved drugs provides an opportunity to circumvent such drawbacks and speed up the development process. Repurposed drugs could be expedited to phase 2-3 trials for COVID-19 if robust pre-clinical data are available. In the present study, an effort was made to assemble a database of more than 7000 compounds and systematically down-select potential drugs for repurposing against COVID-19. The down-selection was achieved by employing several filters that assessed various attributes required for the drug to be safe and effective, such as SARS-CoV-2 antiviral activity, approval status, PK-PD data, and toxicity. Drugs such as L-cycloserine, everolimus, pyrimethamine, cyclizine, lapatinib, and rolapitant were identified and prioritised for repurposing against COVID-19. The top 12 repurposed drugs selected on the basis of the various filters used in this study are not yet suitable for prescription against SARS-CoV-2 by clinicians or for self-medication. As a next step, the selected drugs will be evaluated against SARS-CoV-2 and its VOCs in relevant in vitro/ex vivo models, and the results will be communicated elsewhere. Of the 4278 compounds with no assay data, only 453 drugs are FDA/TGA approved. Of these, 242 compounds were filtered out due to duplication, unsuitable indication, or being under clinical evaluation. We recommend the remaining 202 compounds as a priority for generating SARS-CoV-2 assay data. In addition, an interactive and easy-to-use web interface (CoviRx) has been developed to provide information regarding these ~7000 compounds. More details regarding this can be found in Jain et al. [20].
2022-10-07T08:17:15.863Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "c2017453530e6aaf881874653f48de0732c8b1d3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "c2017453530e6aaf881874653f48de0732c8b1d3", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231887110
pes2o/s2orc
v3-fos-license
Disentangling Turbulent Gas Diffusion from Non-diffusive Transport in the Boundary Layer An analysis based on the law of linear momentum conservation demonstrates unequivocally that the mass fraction is the scalar whose gradient determines gas diffusion, both molecular and turbulent. It illustrates sizeable errors in previous micrometeorological definitions of the turbulent gas flux based on fluctuations in other scalars such as the mixing ratio or density. In deference to conservation law, we put forth a new definition for the turbulent gas flux. Net gas transport is then defined as the sum of this turbulent flux with systematic transport by the mean flow. This latter, non-diffusive flux is due to the net upward boundary-layer momentum, a Stefan flow forced by evaporation, which is the dominant surface gas exchange. A comparison with the traditional methodology shows exact agreement between the two methods regarding the net flux, but with the novelty of partitioning gas transport according to distinct physical mechanisms. The non-diffusive flux is seen to be non-negligible in general, and to dominate turbulent transport under certain conditions, with broad implications for boundary-layer meteorology. Introduction Surface exchange of atmospheric constituents such as greenhouse gases (GHGs) has gained in importance in the twenty first century, and is best assessed in the turbulent surface layer. Applying the intricate eddy-covariance technique, scientists from many disciplines operate "flux towers" worldwide, supplying key biogeochemical flux data at the ecosystem scale for numerous scientific purposes (Baldocchi 2020). Flux-tower data help explain and predict evolving atmospheric concentrations of CO 2 (used hereinafter as a proxy for GHGs in general), and enable land-use managers to cool climate forcing as motivated by the Kyoto and Paris climate agreements. Flux towers directly estimate the evaporation rate (E) over spatial domains that scale up readily, helping to constrain the hydrological cycle. For these and other reasons, networks have grown to include many hundreds of towers around the globe (Papale 2020), and strive to standardize eddy-covariance methodologies internationally (e.g., Franz et al. 2018;Metzger et al. 2019). Despite the adoption of standard data processing steps, however, eddy-covariance methodologies are not yet definitive. Remarkably, even labelling an individual eddy as either rich or poor in CO 2 remains a matter of some debate. Specifically, the exact definition of the gas index-or "concentration"-to use in eddy covariance calculations is not yet resolved, as the following brief history demonstrates. Early flux-tower researchers quantified GHG fluxes via a covariance between vertical motion and whatever gas index instruments could provide. Initially this was the constituent partial pressure (p c , Swinbank 1951;Dyer and Maher 1965;Hicks 1970), but with the emergence of infrared gas analyzers (IRGAs), the eddy covariance scalar shifted to the molar fraction (χ c , from closed-path IRGAs; Desjardins and Lemon 1974) or density (ρ c , from open-path IRGAs; Hyson and Hicks 1975;Jones and Smith 1977;Ohtaki and Matsui 1982). Then researchers realized that turbulent CO 2 fluctuations could arise due to surface exchange, not only of CO 2 , but also of water vapour and/or heat. A debate ensued over "density corrections" (Jones and Smith 1977;Bakan 1978), with consensus endorsing the Webb et al. 
(1980; hereafter WPL80) definition of the turbulent flux based on the covariance with the CO 2 mixing ratio (r c ). Yet the debate persisted (Fuehrer and Friehe 2002), and some micrometeorologists continued to express eddy fluxes in terms of the covariance with ρ c (Lee 1998;Finnigan et al. 2003;Finnigan 2009). If uncertainty exists regarding the exact definition of the turbulent CO 2 flux, it is both obscured by imprecise language and entangled with the exact definition of the average vertical velocity (we use the synonyms "average" and "mean" interchangeably). Webb et al. (1980) make no clear distinction between the turbulent flux and the net flux of CO 2 . Aside from a footnote, the term "turbulent flux" appears only on their first page defining it as their paper's objective, which they specify in terms of fluctuations in r c , but as requiring corrections if ρ c fluctuations are used. Throughout the rest of the paper, they make no distinction and simply refer to "the flux". However, given that it is their purported mean vertical velocity (their Sect. 3) that defines the WPL corrections, researchers have drawn different conclusions as to the physical meaning of the corrected flux. Some interpret the WPL corrections as integral parts of the turbulent flux (Massman and Lee 2002;Lee et al. 2004;Leuning 2004;Ibrom et al. 2007) but to others (e.g., Wyngaard 1990;Liebethal and Foken 2003;Finnigan 2009) the WPL terms define transport by the mean flow that is distinct from the turbulent flux. This entanglement between the turbulent flux and the average velocity was noted by Paw U et al. (2000), who postulated the existence of two schools of thought in micrometeorology regarding turbulent transport. From this, we can deduce with certainty that at least one of these two interpretations leads to an erroneous specification of turbulent diffusion. This is clear when considering an updraft (w′ = 0.1 m s −1 ) whose state variables combine to yield different signs for the fluctuations ρ c ′ and r c ′, as in Table 1. According to Wyngaard (1990), Liebethal and Foken (2003), and Finnigan (2009), this rising air with low CO 2 density contributes to downward turbulent CO 2 transport of w′ρ c ′ = − 11.3 µmol m −2 s −1 and any corrections for density effects would affect not turbulent transport but rather transport by the mean vertical flow. By contrast, the WPL80 specification of the turbulent flux (as we interpret it; see also Massman and Lee 2002;Ibrom et al. 2007) would have it contributing to upward turbulent transport of CO 2 with ρ̄ w′r c ′ = + 1.1 µmol m −2 s −1 . (The overbar denotes the average of ρ, the air density; Appendix 1 derives transport magnitudes and contextual information for the eddy described in Table 1.) Since these magnitudes differ by 12.4 µmol m −2 s −1 in terms of turbulent CO 2 transport, the meaning of the WPL corrections and their relationship to turbulent transport is disputed by a wide margin. Thus, the question remains: just what influence does this upward-moving, CO 2 -containing eddy exert on exchange of CO 2 by turbulent diffusion? In other words, how exactly should the turbulent flux of CO 2 be specified? This paper aims to answer these key questions and disentangle diffusion from mean-flow transport using an analytical framework and adhering strictly to physical conservation laws. The organization is as follows.
Section 2 establishes notation and nomenclature, and the subsequent section reviews the state of knowledge regarding diffusive transport, both molecular and turbulent, including a multidisciplinary literature survey regarding Fick's first law. Section 4 presents analyses of carefully specified, theoretical cases that serve to identify unambiguously the scalar determinant of diffusion. Section 5 proposes a "new" definition of the turbulent CO 2 flux, which sums with non-diffusive transport (Kowalski 2017) to yield the net CO 2 flux. Section 6 applies both the new methodology and the WPL80 approach to representative data from a Mediterranean wetland with a broad range of gas exchange rates and compares outcomes. The penultimate section discusses these results and argues the relevance of the new methodology to different aspects of micrometeorology, followed by general conclusions in Sect. 8. Mathematical Operators For some variable x, the overbar denotes its arithmetic average (x̄), as is common in modern notation. However, x̄ may not necessarily be equal to the exact average (x̂), depending upon the variable x in question and how it is sampled/measured (Kowalski 2012). Following micrometeorological tradition, the prime denotes fluctuations in x about its arithmetic average (i.e., x′ = x − x̄). By contrast, the double prime denotes fluctuations in x about its exact average (i.e., x″ = x − x̂). Gas Variables and Physical Processes The numerous variables that quantify atmospheric component gases can confuse even experienced scientists. Perceptive meteorology students first notice this regarding humidity, whose relevant variables are described here (and in Table 2, with subscript "v" for water vapour). Since water vapour is a gas, the ideal gas law relates its partial pressure (e) to T and its density (or absolute humidity, ρ v ); density is also relevant for studying dynamics, as buoyancy's determinant. Greenhouse absorption or other radiative extinction depends on the number density (η v ), according to Beer's law. Early meteorologists (Guldberg and Mohn 1876) defined the water vapour mixing ratio (r v ) in order to track phase changes; invariant to changes in temperature (T) and pressure (p), its increases/decreases reflect net evaporation/condensation. Also, the mixing ratio for a dry-air component (r c ) is invariant to water phase changes. However, predicting phase change's direction requires knowledge of the relative humidity (U). Finally, both because it facilitates calculating the partial pressure from atmospheric state, and also for its conservation properties, the molar fraction (χ v ) is popular with plant physiologists (e.g., Farquhar and Cernusak 2012). While these associations between physical processes and variables challenge first-year students, it takes the complicated issue of diffusion to perplex scientific researchers. The challenge of specifying diffusion is underscored by both the number of gas variables that have been used to characterize turbulent fluxes (Sect. 1), and the contradictory versions of Fick's first law that are found in the literature (Sect. 3). In this regard, the appearance in Table 2 of the constituent mass fraction (f c ; or q for water vapour, termed the specific humidity) foreshadows this paper's conclusions. Regarding nomenclature, a number of terms can describe gas transport that is due to the average fluid motion or system velocity, and not to diffusion.
Irrespective of scalar gradients, the direction of such transport is in the direction of fluid momentum (i.e., downwind). Table 2 (caption): Variables quantifying gas abundance for water vapour (subscript v) and a more general constituent (c), along with their symbols, definitions, relevance, and units. Comma-separated entries denote names and symbols for water vapour, and then other constituents. In the ideal gas law (p c = ρ c R c T), the constituent-specific gas constant is R c and the temperature T. The saturation mixing ratio is r v,sat . Since all molecules are treated identically by the ideal gas law, χ c is a fraction equivalently of pressure (µPa Pa −1 ), volume (ppmv), or moles (ppm); hence, χ c facilitates the calculation of p c from the ambient pressure p. *Some researchers, particularly in ecophysiology, define the molar fraction with reference to dry air (χ c = η c /η d ). In order to distinguish such transport from diffusion due to random motions, we will term it as due to "system motion". Other acceptable terms to describe transport caused by system motion include "gross transport", "mass flow", "convective flux" and "non-diffusive flux". However, we will avoid the term "advection" because it can be defined to depend directly on scalar gradients, as a scalar product having no direction, and thus not as transport per se (Kowalski 2017). With regard to molecular diffusion in particular, let us recall the words of one of the great physicists of the twentieth century (Feynman et al. 1977): "We must be careful not to confuse diffusion of a gas with the gross transport that may occur due to convection currents." For some situations, this is obvious. A simple example is the case of a marker such as smoke emitted in a strong wind (large Péclet number). We should never interpret the lack of net upwind transport as implying zero diffusivity, because we must always assess diffusive transport relative to system motion. We shall recall Feynman's idea in Sect. 4 for a situation in which it is far more subtle, yet equally valid, and where it helps greatly to isolate diffusion's scalar determinant. The State of Knowledge Regarding Gas Diffusion This section illustrates that neither turbulent nor molecular diffusion has an unambiguous characterization to date. Turbulent Diffusion Careful examination of the issue of "Reynolds averaging" shows that micrometeorology has a long tradition of conflating diffusive and systematic transport. For decades, it has been the norm to define the average flux of air as $\overline{\rho w} = \bar{\rho}\,\bar{w} + \overline{\rho' w'}$ (1) (Priestley and Swinbank 1947), purporting total mass transport to be the sum of systematic and turbulent components, the former due to the mean flow with an alleged average velocity of w̄. Setting (1) to zero leads to the derivation of an average velocity in the direction of the heat flux (Priestley and Swinbank 1947); an equivalent approach, but using dry air only, underlies the derivation of the WPL corrections (Eqs. 11 and 12 of WPL80). Yet such an account defies Osborne Reynolds' specification of turbulence, which was rooted in the law of linear momentum conservation (LMC). A careful reading of Reynolds (1895) reveals relevant distinctions between the exact average and arithmetic average velocities, denoted here respectively as ŵ and w̄ (contrary to Reynolds' notation).
When written using our modern notation, the consistent definition of Reynolds (1895) for the system velocity is based on average system momentum, as $\hat{w} \equiv \overline{\rho w}\,/\,\bar{\rho}$ (2). This can be rearranged by simple algebra to facilitate comparison with (1) as $\overline{\rho w} = \bar{\rho}\,\hat{w} + 0$ (3), highlighting the key facts that total mass transport defines the average flow, and so there can be no net mass transport of air by turbulence. These deductions are consistent with the law of LMC for a fluid or other system of particles, which defines the system velocity using a mass-weighted average of the individual component velocities, as in any introductory physics textbook (e.g., Giancoli 1984). In setting Eq. 1 to zero and thus alleging a mean velocity in the direction of the heat flux, which underlies the WPL density corrections, micrometeorology has neglected the law of LMC. By contrast, Eq. 2 respects both LMC and Feynman's idea since it exactly defines the system velocity against which we must assess transport due to random motions. In summary, while diffusion can reorder system mass, and certainly can cause transport of different constituents in different directions, any net mass flow of the system inherently defines the average velocity and therefore represents a background against which molecular or turbulent motions erratically randomize. In addition to the correct specification of average and turbulent velocities in accordance with the law of LMC, there remains the question of which gas index from Table 2 is diffusion's determinant. Section 1 has shown that this remains unresolved at present among micrometeorologists. The basis for describing turbulent diffusion has always been an analogy with transfer by random molecular motions (Richardson 1919). Unfortunately, however, the state of knowledge regarding molecular diffusion's scalar determinant is at least as ambiguous as is that for turbulent diffusion. Molecular Diffusion Molecular diffusion is a "rather complex phenomenon" (Batchelor 1967) that many scientific disciplines describe incorrectly and furthermore as if it were straightforward and intuitive. This assertion is proven by the numerous, conflicting definitions of Fick's first law that can be found in scientific textbooks. All define the diffusive flux as proportional to (the negative of) a concentration gradient, but with "concentration" defined variously as the number density (η c ), density (ρ c ), mass fraction (f c ), or molar fraction (χ c ). Generally, the meaning of "concentration" in Fick's first law varies according to scientific discipline. Many texts in physics (Feynman et al. 1977;Giancoli 1984) and physical chemistry (Mortimer 2000; Atkins and Paula 2006; Hofmann 2018) specify the concentration gradient directing diffusion of constituent (c) as ∇η c , while scientists studying the gas exchanges of biological systems, such as plant physiologists, are likely to write Fick's law using either ∇ρ c (Monteith 1973;Jones 1983) or ∇η c (Nobel 2005). Engineers seem to be largely consistent in specifying Fick's law using ∇f c (Geankoplis 1993;Kreith et al. 1999;Lienhard and Lienhard 2000;Bird et al. 2002), whereas fluid dynamics textbooks exhibit the gamut of variability, specifying the concentration gradient as ∇ρ c (Smits 2000).
Clearly, a definitive specification of diffusion's scalar determinant is needed, and this is particularly so in the case of the atmospheric boundary layer, which is both compressible and variable in composition (hence molecular mass) such that any two of the four gradients (∇η c , ∇ρ c , ∇f c , or ∇χ c ) may have opposite signs. Analyses The aim of this section is to definitively describe diffusion, both turbulent and molecular, specifically in terms of the appropriate constituent-gas abundance index (i.e., the "concentration" that figures in flux-gradient theories such as Fick's first law). Several theoretical examples, which we term cases, are carefully defined to eliminate incorrect scalar indices from consideration. Let us keep in mind Feynman's requirement to assess diffusive effects relative to system motion, even when the stream velocity is quite small, as in the following subtle examples. General Framework The following basis will be used commonly in a set of four cases to be studied. Let us consider an isothermal system (I) that is a cubic chamber completely isolated from its environment, which is the rest of the universe (II). The components of I are its six walls (I.A) and the fluid therein contained (I.B), which is an ideal gas composed of an equal number of moles of heavier gas (I.B.1; left) and lighter gas (I.B.2; right), initially unmingled on either side of a halfway division (dashed line) as depicted in Fig. 1. If we were to suppose the division to be a barrier, with equal volume and number of molecules on either side, then the ideal gas law would imply that each component gas exert the same pressure on the barrier. Instead, however, let us exclude a barrier and examine how the two fluid components mix in the absence of any forces or fields external to I. The following analyses, valid whether diffusion is purely molecular or predominantly turbulent (whatever the Péclet number), examine the mechanisms that mix I.B. We know from both experience and the second law of thermodynamics that I.B will evolve towards a well-mixed state of equilibrium (no gradients), with uniform values for every gas species of η c , ρ c , f c , and χ c (as well as the mixing ratio, r c ) throughout the container, and one might suppose this to be an instance of pure diffusion. However, three cases of particular interest will show that this is not generally so and will furthermore help to identify the scalar concentration whose gradient determines diffusion. Subsequently, a fourth case will highlight the relevance of these analyses to the atmospheric boundary layer. Theoretical Cases The first case shows that systematic transport and diffusion generally operate in tandem to achieve mixing, even for the kinematically equilibrated framework described above. In CASE 1, both fluid components are hydrogen (H 2 ) but with an isotopic distinction, with deuterium (4 g mol −1 ) on the left and protium (2 g mol −1 ) on the right. These two gases are chosen merely for illustrative purposes, with the difference in mass being key to illustrating the transport mechanisms involved in mixing. Paying heed to Feynman, we should characterise system motion before examining the effects of diffusion. The asterisk in Fig. 1 marks the initial centre of mass of I.B, at two-thirds of the distance from the centre of mass of protium to that of deuterium. Mixing shifts the centre of mass of fluid I.B to the centre of the container, undeniably defining systematic motion to the right. 
In other words, a mass flow develops within the container that tends to drag along any objects embedded in the flow, including all gas molecules. This mass flow is due, not to a net flow of molecules, but rather to a net flow of H 2 neutrons: two per deuterium molecule, versus zero per protium molecule. Newtonian physics aptly describes why the fluid I.B, initially at rest, accelerates to the right, and then decelerates to come to a rest at a new position. Let a barrier (at the dashed line in Fig. 1) that is not part of system I initially separate the two gas components I.B.1 and I.B.2, but vanish such that mixing can begin at the instant t 0 . (Prior to t 0 , the pressure is uniform throughout system I.B; afterwards, the pressure cannot be defined until a new equilibrium is reached.) Upon the barrier's disappearance, the force per unit area on the left and right container walls (I.A) remains constant until some later instant t 1 marking the first collision of a (faster) protium molecule with the left container wall (rather than the right wall, where it would have hit had the barrier not been removed). This and other novel protium collisions with the left wall push the container I.A to the left (action), and by Newton's third law the fluid I.B receives a force to the right (reaction) and thus accelerates to the right, according to Newton's second law. At some later instant t 2 , the first collision of a (slower) deuterium molecule with the right wall pushes I.A to the right (action), causing a force to act on I.B to the left (reaction) and therefore its deceleration as it begins to come to a rest at its final position. Taking a step back to examine the larger picture, we note that the system (I) has no interaction with the rest of the universe (II), which is its environment. Therefore, it undergoes no accelerations, although the same is not true about its components. As noted above, the fluid (I.B) moves to the right and so the container (I.A) must move to the left, such that the system (I) remains at rest, according to Newton's first law. This illustrates why a mass-centred framework is essential for defining the fluid velocity; a molecule-centred (kinematic) framework would fail to explain the movement of the container in the context of Newton's laws. If the mass of the gas is negligible in comparison with that of the container (or if the container is fixed to a gigantic mass such as the Earth), then the container can be approximated as at rest while its contents shift to the right. This case shows how dynamics outperforms kinematics in precisely describing motion, much as Newton improved upon Galileo's description of falling bodies. A kinematic description of CASE 1, based on the ideal gas law, would erroneously suggest no net fluid motion during the swap of protium and deuterium molecules, neglecting the importance of mass in dynamics. Both the ideal gas law and the state variables p and T are kinematic in nature, being independent of molecular mass, as the following summary of kinetic theory shows. The two isothermal isotopes have identical molecular kinetic energies (i.e., Ts) when the root-mean-square velocity of the protium molecules is a factor √ 2 (about 41%) greater than that of the deuterium molecules, which have twice the mass. 
Similarly, when colliding with a container wall during equilibrium, the average change in momentum of a deuterium molecule is 41% greater than that of a protium molecule, but the latter collisions occur 41% more often such that the average momentum transfer per unit time and per unit wall-surface area (i.e., p) is the same for each isotope. In other words, the ideal gas law describes molecular kinematics, but is an inapt basis for appreciating dynamics. With systematic motion understood, we can begin to address the issue of random transport (diffusion). Relative to the systematic motion or mass flow described above, protium diffuses upstream and deuterium downstream. The final equilibrium is achieved for each component at the same moment, because protium diffusion is more rapid, in accordance with Graham's law. More to the point, adding traces of helium (He; 4 g mol −1 ) to the mixture and contrasting its scalar gradients will reveal the correct scalar determinant of diffusion. The second case clearly eliminates several gas indices from consideration as the scalar whose gradient determines diffusion. In CASE 2, fluid I.B has a uniform helium molar fraction χ He = 10 ppm, with the remaining 99.999% of molecules being H 2 , again with deuterium on the left and protium on the right. Regarding scalar gradients, this implies ∇χ He = 0, ∇η He = 0 and ∇ρ He = 0, but ∇f He > 0. In terms of dynamics, CASE 2 is practically identical to CASE 1; with the substitution of mere traces of He, the molecular mass of gas I.B.1 is unchanged, and that of gas I.B.2 increases only marginally. Figure 1 still serves perfectly, although the initial position of the asterisk must now shift imperceptibly to the right, versus CASE 1. Here it is very illustrative to compare transport types for He as the fluid I.B is mixed. As noted above, the mass flow (net flux of H 2 neutrons) drags He to the right. However, the overall position of He does not change, since both the initial conditions and the well-mixed equilibrium require uniform distributions of χ He , ρ He and η He throughout I.B. Therefore, the mass flow of He to the right must be offset by He diffusion to the left (i.e., the motion of He relative to fluid I.B) in order for net He transport to be zero. In short, the He remains in place by diffusing upstream. Thus, CASE 2 clearly demonstrates a situation with a negative diffusive flux due to a positive ∇f He , but null values of ∇χ He , ∇η He and ∇ρ He , demonstrating that only ∇f He can correctly determine diffusion. The third case reinforces the relevance of mass in a situation with no He diffusion. In CASE 3, fluid I.B has a uniform f He = 10 mg kg −1 , with the other 999,990 mg kg −1 being H 2 that is once again deuterium on the left and protium on the right. Now the gradients ∇χ He , ∇ρ He and ∇η He are all negative (with double values of these scalars on the left versus the right) and yet no He diffusion occurs, as is consistent with the null value of ∇f He . In fact, f He is spatially constant and remains so during the intermingling, even as deuterium diffuses downstream and protium upstream, because f He is conserved through fluid motions (conservation of a mass fraction is demonstrated mathematically in the appendix of Kowalski and Argüeso 2011), excluding the possibility of any He diffusion. Rather, the net He transport to the right is purely systematic in nature, and due simply to the drag on He caused by the net flow of H 2 neutrons. 
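The left/right contrasts claimed for CASES 2 and 3 can be verified numerically. The following sketch assumes ambient values (p = 101325 Pa, T = 293 K) that are not specified in the text and that cancel out of the left/right comparisons; the helper functions and variable names are illustrative, not the authors'.

# Scalar "concentration" indices for trace He in CASES 2 and 3 (illustrative sketch).
R = 8.314                     # J mol-1 K-1
p, T = 101325.0, 293.0        # assumed ambient conditions (not given in the text)
eta_tot = p / (R * T)         # total molar density, identical on both sides (mol m-3)
M_He, M_D2, M_H2 = 4e-3, 4e-3, 2e-3   # kg/mol

def indices_from_chi(chi_he, M_bulk):
    """Given the He molar fraction and the bulk-gas molar mass, return (chi, eta, rho, f)."""
    eta_he = chi_he * eta_tot
    rho_he = eta_he * M_He
    M_mix = chi_he * M_He + (1 - chi_he) * M_bulk
    f_he = rho_he / (eta_tot * M_mix)
    return chi_he, eta_he, rho_he, f_he

def chi_from_f(f_he, M_bulk):
    """Convert a He mass fraction to a molar fraction for the given bulk gas."""
    return (f_he / M_He) / (f_he / M_He + (1 - f_he) / M_bulk)

# CASE 2: uniform molar fraction chi_He = 10 ppm on both sides.
case2 = (indices_from_chi(10e-6, M_D2), indices_from_chi(10e-6, M_H2))
# CASE 3: uniform mass fraction f_He = 10 mg/kg; convert to chi on each side.
case3 = (indices_from_chi(chi_from_f(10e-6, M_D2), M_D2),
         indices_from_chi(chi_from_f(10e-6, M_H2), M_H2))

for name, (left, right) in (("CASE 2", case2), ("CASE 3", case3)):
    print(name)
    for label, lv, rv in zip(("chi", "eta", "rho", "f"), left, right):
        print(f"  {label:3s}  left = {lv:.3e}   right = {rv:.3e}")
# CASE 2: chi, eta and rho are identical left/right, but f_He is ~2x larger on the right.
# CASE 3: f_He is identical, while chi, eta and rho are ~2x larger on the left.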
These cases highlight the need to examine diffusion against a background of mass flow for any constituent (c), and thus from the perspective of dynamics. In this context, the gradients of kinematic variables χ c (µmol mol −1 ) and η c (mol m −3 )-which arise naturally out of ideal gas law applications-fail to indicate diffusion's direction because they neglect the key role played by mass in dynamics. On the other hand, the gradients of absolute scalar amounts ρ c (kg m −3 ) and η c (again) fail to describe diffusion because they focus exclusively on the constituent of interest and neglect the relevant context of system mass. In conclusion, as the lone "concentration" variable that satisfies the requirement of quantifying the scalar relative to the overall fluid mass, the mass fraction (f c ) is diffusion's definitive determinant. A final case shows the relevance of the preceding analyses to the atmospheric boundary layer. In CASE 4, fluid I.B is composed on the left (I.B.1) of dry air (29 g mol −1 ), and on the right (I.B.2) of moist air (28.8 g mol −1 ; with a water vapour mixing ratio of r v = 10 g kg −1 ), each with a CO 2 mass fraction of f c = 600 mg kg −1 . We can readily calculate the left/right values of the CO 2 mixing ratio (r c ; mg kg −1 ) of 600/606. Similarly, the isothermal left/right values of the CO 2 molar fraction χ c (µmol mol −1 ) of 395.5/392.7 imply a gradient in the CO 2 density ρ c (mg m −3 ), which is more than 0.7% greater on the left. (Appendix 2 calculates these values.) However, from analysis of the foregoing cases, it is clear that this is a situation with no CO 2 diffusion. Thus, the answer to the question posed in Sect. 1 is that, of the two differing interpretations of the relationship between the WPL corrections and turbulent fluxes, neither is correct. In fact, the "data" in Table 1 were generated by taking f c ′ = 0 as a starting point, and thereby ensuring an eddy with no influence at all on turbulent CO 2 diffusion (hence the artificial precision). The Flaws in Previous Turbulent-Flux Specifications The above analyses highlight errors in the prevailing micrometeorological definitions of transport by turbulence, which are entangled with systematic transport by the mean flow. Fundamental principles relevant to quantifying surface exchange include conservation of both mass and momentum. While the former has been used to justify defining turbulent CO 2 diffusion based on fluctuations in ρ c ′ (Lee 1998;Paw U et al. 2000;Finnigan et al. 2003), it is the latter that must be applied to distinguish between physical transport processes. For the conditions defined in Table 1, specification based on ρ c ′ (Wyngaard 1990;Liebethal and Foken 2003;Finnigan 2009) underestimates the turbulent CO 2 flux by 11.3 µmol m −2 s −1 because it focuses exclusively on CO 2 and neglects the context of local system mass, demonstrated relevant in the previous section. By contrast, the WPL80 specification based on r c ′ overestimates the turbulent CO 2 flux by 1.1 µmol m −2 s −1 due to several errors. The first error in the WPL80 flux-partitioning scheme is the use of arithmetic averaging to infer a spurious mean velocity in the direction of the heat flux as a consequence of the erroneous mass flux partitioning in their Eq. 12, which is analogous to our Eq. 1 but for dry air only. In comparison with the most basic definition of average velocity-as displacement divided by time elapsed-this velocity has been shown to be an artefact of inexact averaging (Kowalski 2012). 
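The claim that this velocity is an artefact of inexact averaging can be illustrated with a toy construction: warm (light) updrafts and cool (dense) downdrafts arranged so that the net mass flux, and hence the mass-weighted mean vertical velocity, is exactly zero, while the arithmetic mean comes out positive (in the direction of the heat flux). The densities and velocities below are arbitrary illustrative values.

import numpy as np

# Equal numbers of warm, rising samples and cool, sinking samples, constructed so
# that the net mass flux mean(rho * w) is exactly zero.
rho_up, w_up = 1.15, 0.5                       # warm, light updrafts (kg m-3, m s-1)
rho_dn = 1.25                                  # cool, dense downdrafts
w_dn = -(rho_up * w_up) / rho_dn               # enforces zero net mass flux

rho = np.array([rho_up, rho_dn] * 1000)
w   = np.array([w_up,   w_dn ] * 1000)

w_arithmetic = w.mean()                        # simple (sample-weighted) average
w_weighted   = (rho * w).mean() / rho.mean()   # mass (momentum)-weighted average

print(f"arithmetic mean w      : {w_arithmetic:+.4f} m/s   (spuriously upward)")
print(f"density-weighted mean w: {w_weighted:+.4f} m/s   (zero net mass flux)")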
Alternatively, since Eq. 12 of WPL80 applies to dry air (of constant composition), the error of using arithmetic averaging to calculate the average velocity is clear when viewed in the context of statistical sampling theory. Sonic anemometers have fixed sensing volumes, and therefore sample variable population sizes: generally, the colder the eddy, the greater the number of molecules sampled. Giving all samples equal weight, as in arithmetic averaging, therefore admits sample bias in favour of warm eddies. Since warm eddies tend to rise and cool eddies to descend when the heat flux is positive, this bias causes over-estimation of the average vertical velocity, and is the explanation for the alleged mean vertical motion in the direction of the heat flux. A second error regards the evaporation-based velocity component defined by WPL80 in their Eq. 14. This term due to the water vapour flux is in error because it represents an attempt to describe dynamics based on the ideal gas law, expressed in Eq. 8c of WPL80. The previous section shows that such a kinematic description, neglecting the lower molecular mass of water vapour (versus dry air), leads to a bias in system momentum. In the absence of temperature fluctuations, Eq. 8c of WPL80 states that an increase in water vapour results in an equal decrease in dry air, on a molar basis. By contrast, we define turbulent perturbations on a (fractional) mass basis. The difference between the two approaches thus includes a factor of proportionality that is the ratio of molecular masses that WPL80 define as µ. For isothermal conditions (no heat flux), this µ is essentially the difference between the Stefan flow (Kowalski 2017) and Eq. 14 of WPL80. Thirdly, the exclusion of water vapour from the denominator of the CO 2 mixing ratio, which WPL80 use to define the turbulent flux, neglects the intrinsically bilateral nature of diffusion. As the dominant surface exchange process, evaporation promotes not only upward diffusion of water vapour but also downward diffusion of dry air, including CO 2 . This is aptly illustrated by considering the case of steady-state evaporation with null CO 2 exchange, such that the net CO 2 flux (F c ) is exactly zero. Because of the space occupied by newly introduced water vapour, air near the evaporating surface is diluted with regard to CO 2 (or any dry air species), whose turbulent flux is therefore downward. However, evaporated water vapour not only dilutes but also displaces other gas species, and so the downward turbulent flux is accompanied by upward systematic transport of CO 2 by the Stefan flow. The two components cancel each other to yield F c = 0. The WPL80 mixing-ratio-based definition gets the net flux correct (unaffected by evaporation, r c is constant in this example), but falsely characterises the net flux as wholly turbulent.
A "New" Definition of the Turbulent CO 2 Flux
The following formulation presents the total CO 2 flux F c (kg m −2 s −1 ) as the sum of two independent (disentangled) fluxes, the non-diffusive flux due to transport by systematic motion (F c,ndiff ) and the turbulent or diffusive flux (F c,diff ):
F_c = F_{c,ndiff} + F_{c,diff} = w_s \bar{\rho}_c + \overline{\rho\, w'' f_c''} ,   (4)
where w_s = E/\bar{\rho} is the Stefan flow velocity (m s −1 ; Kowalski 2017), E the evaporation rate (kg m −2 s −1 ), \bar{\rho} the average moist air density and \bar{\rho}_c the average CO 2 density (kg m −3 ). Fluctuations (double primes) of both vertical wind velocity and CO 2 mass fractions are calculated about mean values obtained via density-weighted averaging (Sect. 2.1 and Eqs. 8-9 of Kowalski 2012).
The final overbar on the right-hand side of (4), representing arithmetic averaging, is justified because it overlies conserved CO 2 (eddy) momentum. Since the moist air density (\rho) is not directly measurable at 10 Hz, it is determined by adding the average air density (\bar{\rho}, computed from thermohygrometer and barometer data) to fluctuations that we estimate from sonic anemometer and gas analyser data. This we do, neglecting turbulent pressure fluctuations (i.e., p = \bar{p}), via a procedure that initializes the temperature (T) using the sonic temperature (T s ) and then iterates as follows:
e = \rho_v R_v T ,   (5a)
q = 0.622\, e / p ,   (5b)
T = T_s / (1 + 0.51 q) .   (5c)
These steps are the ideal gas law for water vapour (Eq. 5a), where R v is the specific gas constant for water vapour, and the definitions of the specific humidity (Eq. 5b) and sonic temperature (Eq. 5c). For the dataset analysed here, convergence to five or more significant digits for all variables required a maximum of three iterations. From these, the virtual temperature (T v ) is defined as T_v = T (1 + 0.61 q), and the ideal gas law for moist air defines \rho = p / (R_d T_v), with R_d the specific gas constant for dry air. Density fluctuations (ρ′) are then determined by subtracting from ρ its arithmetic average, which is not otherwise used. An analogous formulation is proposed for estimating water vapour fluxes. However, because the computation of w s requires knowledge of the water vapour flux itself, another iterative process is necessary to estimate both E and w s simultaneously, with E initialized to the traditional WPL80 water vapour flux, using the formulation
E = F_v = F_{v,ndiff} + F_{v,diff} = w_s \bar{\rho}_v + \overline{\rho\, w'' q''} ,
where E is equal to the net water vapour flux (F v ) that is also disentangled into its non-diffusive (F v,ndiff ) and diffusive (F v,diff ) transport components. The iteration is completed when values of both E and w s converge within a specified tolerance. Our tests show that three to five iterations are usually sufficient for convergence to five significant digits. Finally, and in a similar fashion, the sensible heat flux H (W m −2 ) can be estimated as H = c_p \overline{\rho\, w'' T''}, where c p (J kg −1 K −1 ) is the specific heat of moist air at constant pressure.
Experimental Site and Materials
The measurements used to compare the novel flux-calculation methodology with traditional (WPL80) procedures come from a Mediterranean wetland with very large fluxes. The site combines a number of characteristics that make it very suitable for flux-tower measurements. The ground is very level (usually flooded), with an expansive patch of monospecific vegetation (the common reed; Phragmites australis) providing hundreds of metres of fetch in every wind direction. The low tower height (6 m; over reeds that grow to 3 m by mid-summer) thus ensures a homogeneous tower footprint. The combination of abundant water and nutrients and southeast Spain's sunny, dry climate make for very large H 2 O and CO 2 flux magnitudes during the growing season. Finally, the site is quite windy, near the head of the Lecrín valley with its numerous wind turbines. Tower-top measurements at 10 Hz include the wind vector components and sonic temperature from a sonic anemometer (CSAT-3, Campbell Scientific, Logan, UT, USA), and densities of H 2 O and CO 2 from an open-path IRGA (LI-7500; LI-COR Inc., Lincoln, Nebraska, USA) that also measures the atmospheric pressure. Just below, at 5 m, a thermohygrometer (HMP 45C, Campbell Scientific, Logan, Utah, USA) reports the air temperature and relative humidity to a data logger (CR3000, Campbell Scientific) that stores half-hour averages.
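Given 10-Hz sonic-anemometer and open-path IRGA series such as those just described, the flux-computation procedure might be sketched as follows. This is an illustrative reading of the reconstructed Eqs. 4-5c, not the authors' code: the function names are hypothetical, the mean density is here taken from the reconstructed 10-Hz series rather than from the slow-response sensors, and E is initialized with a simple covariance rather than the traditional WPL80 vapour flux.

import numpy as np

R_v, R_d = 461.5, 287.05          # specific gas constants for water vapour and dry air (J kg-1 K-1)

def iterate_state(p, Ts, rho_v, n_iter=3):
    """Recover T, q and the moist-air density rho from sonic temperature Ts (K),
    pressure p (Pa) and vapour density rho_v (kg m-3), following Eqs. 5a-5c."""
    T = np.array(Ts, dtype=float)          # initialization: T = Ts
    for _ in range(n_iter):
        e = rho_v * R_v * T                # (5a) ideal gas law for water vapour
        q = 0.622 * e / p                  # (5b) specific humidity
        T = Ts / (1.0 + 0.51 * q)          # (5c) invert the sonic-temperature definition
    Tv = T * (1.0 + 0.61 * q)              # virtual temperature
    rho = p / (R_d * Tv)                   # ideal gas law for moist air
    return T, q, rho

def wmean(rho, x):
    """Density-weighted average."""
    return (rho * x).mean() / rho.mean()

def disentangle(w, rho, x, w_s):
    """Eq. 4: non-diffusive (Stefan-flow) and diffusive components for a mass fraction x."""
    F_diff = (rho * (w - wmean(rho, w)) * (x - wmean(rho, x))).mean()
    F_ndiff = w_s * (rho * x).mean()       # w_s times the mean constituent density
    return F_ndiff, F_diff

def fluxes(w, p, Ts, rho_v, rho_c, tol=1e-9, max_iter=10):
    T, q, rho = iterate_state(p, Ts, rho_v)
    f_c = rho_c / rho                      # CO2 mass fraction
    # First guess for E (the text initializes with the WPL80 vapour flux instead):
    E = (rho * (w - wmean(rho, w)) * (q - wmean(rho, q))).mean()
    for _ in range(max_iter):              # joint iteration for E and w_s
        w_s = E / rho.mean()
        E_new = sum(disentangle(w, rho, q, w_s))
        if abs(E_new - E) < tol:
            break
        E = E_new
    Fc_ndiff, Fc_diff = disentangle(w, rho, f_c, w_s)
    return E, w_s, Fc_ndiff, Fc_diff       # kg m-2 s-1, m s-1, kg m-2 s-1, kg m-2 s-1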
We compare the traditional (WPL80) versus disentangling methodologies in a consistent coordinate system. This was determined by two sequential rotations to correct for anemometer tilt (Finnigan et al. 2003) and force arithmetic mean velocities (prior to density corrections) to zero: first w (attack angle) and then v (wind direction). Annual integrations of net ecosystem exchange estimated for both methodologies were calculated filling gaps using the marginal distribution sampling technique (Reichstein et al. 2005). We examine experimental data for 2015, with a percentage of rejected CO 2 fluxes of 45% (74% of the rejected data occurred during night-time periods). Results For representative fluxes across multiple seasons, we compare net CO 2 fluxes calculated using the new methodology with traditional calculations from the WPL80 methodology. Emphasis here is on fluxes of CO 2 , because both water vapour and sensible heat fluxes from the two methodologies were virtually indistinguishable, and because non-diffusive transport of water vapour is only a tiny fraction of total water vapour transport (Kowalski 2017). In every case, F c represents the flux calculated with the new methodology, and F WPL the traditional calculation. The net CO 2 flux calculated by the two methodologies agrees quite exactly, as shown in Fig. 2 for a full year (2015) of data. Such agreement extends to annual integration of net ecosystem exchange for the Padul dataset, which for 2015 was a net uptake of 209 gC m −2 for the disentangled methodology, versus 207 gC m −2 for the traditional WPL80 methodology. Since the two methodologies for calculating the net CO 2 flux, disentangling (F c ) and traditional (F WPL ), agree so perfectly, in the following results only F c is presented. However, in Figs. 3 and 4 the black curve can be interpreted as representing either the net flux from our methodology (F c ) or the turbulent flux defined by WPL80 (F WPL ), which are indistinguishable when plotted. By contrast, the red and blue curves are unique to the disentangling methodology, and represent transport by Stefan flow and by turbulence, respectively. The two methodologies differ regarding the physical mechanisms of transport (Fig. 3). Compared with the new methodology that is grounded in conservation law, the WPL80 methodology (black) underestimates the magnitude of the daytime turbulent flux by about 10% during early summer. At midday, the net flux of about − 20 µmol m −2 s −1 is disentangled into about − 22 µmol m −2 s −1 of turbulent transport (blue), and + 2 µmol m −2 s −1 of transport by the Stefan flow (red). Other seasons show even greater relative discrepancies between the two methods. As flux magnitudes evolve over the seasons from their summer evaporative and photosynthetic maxima, the relative decrease in CO 2 uptake overrides the percent reduction in evaporation, such that the relevance of non-diffusive transport escalates (Fig. 4). Comparing Figs. 3a and 4a, from early summer to the transition-to-senescence period that dominates the ecosystem's behaviour as an annual carbon sink (Serrano-Ortiz et al. 2020), daytime CO 2 uptake (negative F c ; black) decreases by nearly an order of magnitude, while the non-diffusive flux (approximately proportional to evaporation; red) falls by only 50%, with the result that the magnitude of non-diffusive CO 2 transport is roughly one-third that of the total CO 2 flux. 
In this case, regarding downward turbulent CO 2 transport, the WPL80 estimate (peaking near 3 µmol m −2 s −1 ; black) underestimates the turbulent flux (reaching 4 µmol m −2 s −1 ; blue) by about 25% (Fig. 4a). The discrepancies are even greater in late April, when the reeds begin to bud; CO 2 fluxes are then dominated by respiratory decomposition of the previous year's organic material and photosynthesis is only strong enough to reduce midday CO 2 emissions to near zero (Fig. 4b). Non-diffusive CO 2 transport (proportional to evaporation; red) is modest, but still enough to dominate net CO 2 transport (black) during the morning hours, when the turbulent flux (blue) is near zero and is extremely overestimated by the WPL80 methodology. During afternoon hours, the diffusive (blue) and non-diffusive (red) fluxes are of similar magnitude, implying that the WPL80 methodology overestimates turbulent transport by 100%. A Critique of the Webb, Pearman, and Leuning Methodology Traditional flux calculations produce accurate determinations of the net CO 2 flux, but leave the mechanisms of physical transport entangled. The net fluxes computed by the two methodologies agree because they use the same equation, however expressed differently, whether as w c in Eq. 19 of WPL80 or as wf c in our Eq. 4. The key difference lies in how that net flux is decomposed into transport by the mean flow, versus transport by turbulence. The WPL80 decomposition into turbulent and non-turbulent ("mean") contributions has neglected the law of linear momentum conservation (LMC) in two important ways. First, the average velocity should be defined consistent with LMC as by Osborne Reynolds (1895). By contrast, w is an arbitrary, near-zero value of w that has no physical relevance. Second, the scalar variable whose fluctuations define diffusive transport is neither the density (ρ c ) nor the mixing ratio (r c ), but rather the mass fraction (f c ). Nonetheless, based as it is on the ideal gas law, the WPL80 methodology provides an accurate accounting of CO 2 molecules exchanged at the surface, such that many fluxes that have been published over the last few decades will not require revision. It is interesting to note that WPL80 derived "alternative expressions" for fluxes of CO 2 and water vapour that coincide neatly with our decomposition, as long as we suppose turbulent fluctuations to be defined in terms of the mass fraction. Then, Eq. 23 of WPL80 states that the turbulent water vapour flux must be divided by a factor (1 − q) to calculate net evaporation, in agreement with the derivation by Kowalski (2017) that q is the fraction of the vapour flux that is non-diffusive. For the CO 2 flux, again following this supposition, Eq. 22 of WPL80 specifies net CO 2 exchange as the sum of the turbulent flux and the product of the average CO 2 density with a velocity that is equal to the evaporation rate divided by the average air density, mirroring our Eq. 4. Again, the key difference regards exactly how the turbulent flux is defined. Generalizing from the specific case of CO 2 to other gases, we should note that Eq. 4 specifies the non-diffusive flux as a function of the evaporation rate and the density of the gas in question. It does not depend on the exchange rate of that gas. Thus, the relative importance of non-diffusive transport depends on both gas quantity and reactivity. An extreme example here is that of O 2 , whose atmospheric abundance exceeds that of CO 2 by two orders of magnitude, yet has a comparable exchange rate. 
As a result, its upward transport by systematic boundary-layer motion often dwarfs its net flux (i.e., |F c,ndiff | ≫ |F c |), making its diffusion sizeable and downward despite photosynthetic O 2 production at the surface. (Emission of water vapour is so much greater than that of O 2 that the net effect of surface exchange is that of O 2 dilution.) The examination of the relative magnitudes of F c,ndiff and F c,diff in composing F c for GHGs other than CO 2 will be addressed in a future paper.
Implications for Micrometeorology
While some net fluxes published using WPL80 methodologies were accurate, as the above agreement indicates, others that made use of Monin-Obukhov similarity theory (MOST) may include non-negligible errors. The scope of MOST is to relate turbulent, not net, fluxes to other boundary-layer variables. Thus, flux-gradient relationships apply to the diffusive (F c,diff ) flux component only, and not strictly to the net flux. The same can be said for numerous applications of second-order moments for scalars fluctuating in turbulence (Detto and Katul 2007), which should be characterized in terms of the statistics of the mass fraction. These include spectra and cospectra, for which similarity should be expected between the temperature, for example, and the mass fraction (but not necessarily the mixing ratio or density) of the gas of interest. Spectral corrections are a particular concern in this regard. To illustrate this, let us consider a surface with 90 W m −2 of latent heat flux (E = 2 mmol m −2 s −1 ) and 2 µmol m −2 s −1 of O 2 emission (modest photosynthesis), into air with 70% relative humidity, 20.95% O 2 , and a density of 1.22 kg m −3 . For simplicity, we will specify no sensible heat flux. Table 3 presents these fluxes according to the two methodologies compared in this paper, which disagree substantially regarding O 2 flux decomposition.
Table 3 Total gas fluxes and flux components according to the two methodologies for 2 μmol m −2 s −1 O 2 emissions over an evaporating surface with 90 W m −2 of latent heat flux and no sensible heat flux. Dark grey rows (total fluxes) are specified from the text. Light grey rows are calculated according to each methodology. Unshaded rows are calculated simply as the difference of the previous two flux values.
Now let us hypothesise an open-path instrument that measures O 2 density fluctuations with a response time that is insufficient to track the smallest eddies, unlike the fast-response IRGA measuring water vapour alongside, and so would be incapable of producing the data in Table 3 without correction for high-frequency loss. Let us also suppose that empirical determinations have shown a 10% flux loss, e.g., by applying the instrument's cospectral transfer function, via low-pass filtering, to fluxes that are measured without losses. Frequency response corrections based on MOST cospectral similarity (e.g., Massman 2000) could be applied directly to turbulent fluxes measured with such a slow sensor, but not to any other flux or component in Table 3, including the net flux and WPL corrections. As a final implication, recent attempts to partition net gas exchanges into contributing biogeochemical processes, such as breaking net CO 2 exchange into photosynthetic and respiratory components, or separating the net water vapour flux into soil evaporation and transpiration (Thomas et al. 2008;Scanlon and Kustas 2010;Skaggs et al. 2018;Stoy et al. 2019), are based on MOST but truly applicable only to turbulent scalar fluctuations (i.e., those of the mass fraction).
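To put rough numbers on the O 2 example above: with E = 2 mmol m −2 s −1 , an air density of 1.22 kg m −3 and an O 2 mass fraction of roughly 0.23 (an assumption derived from 20.95% O 2 by volume in slightly moist air), the reconstructed Eq. 4 implies a non-diffusive O 2 transport far larger than the 2 µmol m −2 s −1 net emission. These are illustrative values, not those of Table 3.

# Illustrative decomposition of the O2 example (upward fluxes positive).
M_w, M_o2 = 0.018, 0.032          # kg/mol
rho_air = 1.22                    # kg m-3
f_o2 = 0.23                       # assumed O2 mass fraction of (slightly moist) air
F_o2_net = 2e-6 * M_o2            # net O2 flux, kg m-2 s-1 (modest photosynthetic emission)

E = 2e-3 * M_w                    # evaporation rate, kg m-2 s-1
w_s = E / rho_air                 # Stefan-flow velocity (sketch of w_s = E / rho)
F_o2_ndiff = w_s * f_o2 * rho_air # non-diffusive O2 transport, kg m-2 s-1
F_o2_diff = F_o2_net - F_o2_ndiff # diffusive component as the residual

to_umol = 1e6 / M_o2
print(f"w_s            = {w_s:.2e} m s-1")
print(f"F_O2 (net)     = {F_o2_net * to_umol:+7.1f} umol m-2 s-1")
print(f"F_O2 non-diff. = {F_o2_ndiff * to_umol:+7.1f} umol m-2 s-1")
print(f"F_O2 diffusive = {F_o2_diff * to_umol:+7.1f} umol m-2 s-1  (sizeable, downward)")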
The relevance of non-diffusive fluxes to such flux-partitioning applications remains to be explored.
Conclusions
Surface-normal transport of CO 2 can be turbulent and/or non-diffusive, with the net flux being the sum of these two components representing distinct physical transport mechanisms. The law of linear momentum conservation must be respected when distinguishing between these two types of transport. Non-diffusive transport depends on the momentum of air due to water vapour exchange (Stefan flow), and the CO 2 density. Diffusive transport, whether turbulent or molecular, is proportional to the concentration gradient, where the "concentration" must be defined as the constituent mass fraction (f c ). The turbulent flux should be defined as the (density-weighted) covariance between f c and the vertical velocity.
Appendix 1
The dry air density follows from its partial pressure as \rho_d = (p − e) / (R_d T),   (11) where R d is the specific gas constant for dry air. Finally, we can calculate the CO 2 mixing ratio (r c ) as the ratio of ρ c (measured directly) to ρ d (calculated as above), r_c = \rho_c / \rho_d . Then the eddy flux defined by perturbations in r c is straightforward as \bar{\rho}_d \overline{w′ r_c′}, with \bar{\rho}_d from (11), using data from column 2 in Table 1, w′ as specified (0.1 m s −1 ), and r c ′ determined via the difference between mixing ratios calculated for the updraft and the boundary-layer average. For context, we note the energy fluxes associated with the eddy in Table 1. Calculations for water vapour, similar to those above for CO 2 , reveal a latent heat flux of 210 W m −2 while the sensible heat flux amounts to 238 W m −2 .
Appendix 2: CO 2 Concentrations for Case 4
The following calculations lead to the values of the CO 2 mixing ratio (r c ) and CO 2 molar fraction (χ c ) for CASE 4, referring to Fig. 1. For the dry air on the left side, with no water vapour present, the mass fraction and mixing ratio are equal (r c = 600 mg kg −1 ), and the molar fraction can be determined simply by a conversion of units, χ_c = r_c (M_d / M_c) ≈ 395.5 µmol mol −1 . For the moist air on the right side of Fig. 1, it is necessary to recall the relationship between the specific gas constants for moist air (R m ) and dry air (R d ), as a function of the mixing ratio (r v ), as R_m = R_d (1 + r_v/\epsilon) / (1 + r_v) (Rogers and Yau 1989), with \epsilon = R_d/R_v ≈ 0.622, which derives from their molecular masses (M m and M d ) via M_m = M_d (1 + r_v) / (1 + r_v/\epsilon). For r v = 10 g kg −1 , and using M d = 29 g mol −1 , this yields M m = 28.8 g mol −1 . The mass of moist air is the sum of the mass of dry air and water vapour, m = m_d + m_v , and when using the definition of the water vapour mixing ratio (r_v = m_v / m_d) this becomes m = m_d (1 + r_v).   (17) Solving (17) for the dry air mass and substituting this into the definition of the CO 2 mixing ratio yields r_c = f_c (1 + r_v) = 606 mg kg −1 .
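As a cross-check, the CASE 4 values derived in Appendix 2 can be reproduced with a few lines of arithmetic. This is a sketch using the molar masses quoted in the text; the rounded M m = 28.8 g mol −1 is used so that the quoted figures are recovered exactly.

# Cross-check of the CASE 4 / Appendix 2 concentration values.
M_d, M_v, M_c = 29.0, 18.0, 44.0                  # g/mol: dry air, water vapour, CO2
r_v = 0.010                                       # water vapour mixing ratio, kg/kg (right side)
f_c = 600.0                                       # CO2 mass fraction, mg/kg (both sides)

M_m_exact = (1 + r_v) / (1.0 / M_d + r_v / M_v)   # ~28.8 g/mol, as quoted in the text
M_m = 28.8                                        # rounded value used for the quoted numbers

r_c_left, r_c_right = f_c, f_c * (1 + r_v)        # CO2 mixing ratio: 600 / 606 mg/kg
chi_left  = f_c * M_d / M_c                       # dry side:   ~395.5 umol/mol
chi_right = f_c * M_m / M_c                       # moist side: ~392.7 umol/mol
rho_ratio = M_d / M_m                             # CO2 density ratio (equal T, p, molar density)

print(f"M_m (from r_v)   : {M_m_exact:.1f} g/mol")
print(f"r_c  left/right  : {r_c_left:.0f} / {r_c_right:.0f} mg/kg")
print(f"chi_c left/right : {chi_left:.1f} / {chi_right:.1f} umol/mol")
print(f"rho_c ratio      : {rho_ratio:.4f}  (~0.7% greater on the left)")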
2021-02-12T15:27:35.237Z
2021-02-11T00:00:00.000
{ "year": 2021, "sha1": "54fb9241a6ea568f18e2dfa66fb9b472bcf44b00", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10546-021-00605-5.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "54fb9241a6ea568f18e2dfa66fb9b472bcf44b00", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [] }
12872257
pes2o/s2orc
v3-fos-license
Association of parents’ and children’s physical activity and sedentary time in Year 4 (8–9) and change between Year 1 (5–6) and Year 4: a longitudinal study Background Parents could be important influences on child physical activity and parents are often encouraged to be more active with their child. This paper examined the association between parent and child physical activity and sedentary time in a UK cohort of children assessed when the children were in Year 1 (5–6 years old) and in Year 4 (8–9 years old). Methods One thousand two hundred twenty three children and parents provided data in Year 4 and of these 685 participated in Year 1. Children and parents wore an accelerometer for five days including a weekend. Mean minutes of sedentary time and moderate-to-vigorous intensity physical activity (MVPA) were derived. Multiple imputation was used to impute all missing data and create complete datasets. Linear regression models examined whether parent MVPA and sedentary time at Year 4 and at Year 1 predicted child MVPA and sedentary time at Year 4. Change in parent MVPA and sedentary time was used to predict change in child MVPA and sedentary time between Year 1 and Year 4. Results Imputed data showed that at Year 4, female parent sedentary time was associated with child sedentary time (0.13, 95% CI = 0.00 to 0.27 mins/day), with a similar association for male parents (0.15, 95% CI = −0.02 to 0.32 mins/day). Female parent and child MVPA at Year 4 were associated (0.16, 95% CI = 0.08 to 0.23 mins/day) with a smaller association for male parents (0.08, 95% CI = −0.01 to 0.17 mins/day). There was little evidence that either male or female parent MVPA at Year 1 predicted child MVPA at Year 4 with similar associations for sedentary time. There was little evidence that change in parent MVPA or sedentary time predicted change in child MVPA or sedentary time respectively. Conclusions Parents who were more physically active when their child was 8–9 years old had a child who was more active, but the magnitude of association was generally small. There was little evidence that parental activity from three years earlier predicted child activity at age 8–9, or that change in parent activity predicted change in child activity. Electronic supplementary material The online version of this article (doi:10.1186/s12966-017-0565-0) contains supplementary material, which is available to authorized users. Background Children who are physically active have lower levels of risk factors for cardio-metabolic disease, lower risk of obesity and improved psychological well-being [1]. The UK Chief Medical Officers have recommended that all children and adolescents should engage in at least 60 min of moderate-to-vigorous-intensity physical activity (MVPA) per day and reduce sedentary time [2]. Large national surveys from the UK [3] and USA [4,5] indicate that many children do not engage in the recommended hour per day of MVPA [2] and that both boys and girls become less active as they get older [6]. Ensuring that children are active, stay active and limit sedentary time has, therefore, been recognized as a public health priority [6]. Recent systematic reviews and meta-analyses indicate that interventions to increase physical activity and reduce sedentary time among children and adolescents have demonstrated limited efficacy [7,8]. 
The reviews conclude that there is still much to be learned about the origins of children's physical activity, how it could be changed and that new, improved behavior change programs are needed. Parents are often blamed for the inactivity of their children [9,10], with the media calling for parents to spend more time being active with their children. These statements can be counter-productive, leading to some parents feeling helpless as they may have insufficient time, resources and/or knowledge of how to help their children to be active [11][12][13]. Parent-child activity is, however, often promoted. For example, Sport England are currently investing £40 million in projects that promote physical activity for children with their parents [14]. The potential utility of these schemes and particularly whether promoting physical activity for parents and children together is likely to be effective is unclear. Several studies have reported associations between parent and child physical activity [15][16][17][18][19][20][21]. These associations have been interpreted as evidence of parents and children being active together and used to advocate for parent-child physical activity interventions [20]. The bulk of the studies have, however, either used self-report methods, small samples or been conducted with pre-school aged children in cross-sectional study designs [20,21]. Studies that have included older children have generally reported comparatively low associations between parent and child accelerometer-derived estimates of physical activity [15][16][17][18][19]. For example, correlations between parents' and children's MVPA were generally low (i.e., r < 0.08) [20,22], and in our previous analyses we reported that every 10 min of parental MVPA was associated with just one additional minute of child MVPA [16]. Most studies have focused on the start or end of primary (elementary) school, resulting in a paucity of information on how parent activity during the middle primary school years is associated with child activity. This gap is particularly important as children's physical activity levels progressively decline during primary school [6,23,24] and strategies are needed to stop this decline before the transition to secondary school [25]. Furthermore, there is absence of prospective data. In this paper, we examined the association between objectively-assessed MVPA and sedentary time of year old) children and their parents. We also sought to determine whether parental MVPA and sedentary time during Year 1 (5-6 years old) predicted child MVPA and sedentary time at Year 4, and if change in parental behavior was associated with change in child behavior. Finally, we examined if there were any differences in associations for male and female parents, which may suggest a need to tailor behavior change interventions to parental gender. Methods The current analyses used data from the B-PROACT1V study [16,17,26,27]. The study examined the physical activity behaviors of children and their parents as the children progressed through primary school. Between 2012 and 2013, data were collected from 1299 children from 57 schools in the greater Bristol (UK) area who were in Year 1 (5-6 years of age). Between March 2015 and July 2016, data were collected from 1223 children in 47 of the original schools. The study received ethical approval from the School for Policy Studies Ethics Committee at the University of Bristol and written parent consent was received for all participants [28]. 
Parent and child accelerometer measures
Children and at least one of their parents wore a waist-worn ActiGraph wGT3X-BT accelerometer for five days, including two weekend days, in Year 1 and then again in Year 4. Accelerometer data were processed using Kinesoft (v3.3.75; Kinesoft, Saskatchewan, Canada) in 60-s epochs. To enable comparison with international datasets [6], for inclusion in analysis, at least three valid days of data must have been provided, where a valid day was defined as at least 500 min of data, after excluding intervals of ≥60 min of zero counts allowing up to two minutes of interruptions. The average number of sedentary and MVPA minutes per day was derived using the Evenson population-specific cut points for children (≥2296 cpm) [29], and the Troiano cut points (≥2020 cpm) for adults [30].
Parent and child characteristics
Child height was measured to the nearest 0.1 cm using a SECA Leicester stadiometer (HAB International, Northampton) and weight was measured to the nearest 0.1 kg using a SECA 899 digital scale (HAB International, Northampton). These were used to derive the child's body mass index (BMI) as weight (kg)/height (m) 2 , and this was converted to an age- and gender-specific standard deviation score [31]. Parents completed a questionnaire, which included information on the child's gender, date of birth, number of siblings and the parent's date of birth, height and weight. Where a child's date of birth was missing (21% of all children), they were assigned the median age of 6.0 years at Year 1, and 9.0 years at Year 4. Indices of Multiple Deprivation (IMD) scores, based upon the English Indices of Deprivation (http://data.gov.uk/dataset/index-of-multiple-deprivation), were assigned to each family based on their reported home postcode, where higher IMD scores indicate a greater level of deprivation.
Statistical analysis
A description of the study design and reasons for incomplete data at the two timepoints has been reported previously [23,32]. Briefly, however, there was considerable pupil movement between schools between Year 1 and 4 and different families consenting to participate in the two different waves. To account for missing data, two separate multiple imputation models were used, with the first including the 1223 children who participated in the study in Year 4 (but not necessarily in Year 1) and the second including the 685 children who participated in the study in both Year 1 and Year 4. The first imputation was used to examine the association between parent and child physical activity in Year 4. This included relevant parental exposures (female and male parent sedentary and MVPA minutes per day), child outcomes (sedentary and MVPA minutes per day) and co-variables measured at Year 4 (child age, BMI z score, IMD, and female and male parent age and BMI). The second imputation was used to examine the association between parent physical activity at Year 1, and change in parent physical activity between Year 1 and Year 4, with child physical activity and change in child physical activity at Year 4. As there is consistent evidence that physical activity patterns differ by gender [5,[33][34][35] both sets of imputations were run separately for boys and girls to allow for associations to differ by child gender and included a school indicator variable to account for clustering within schools. In both cases, we created 20 imputed datasets using 20 cycles of regression switching and combined regression coefficients across the imputed datasets using Rubin's rules [36]. We used linear regression models to examine the associations of interest, with robust standard errors to account for clustering within schools.
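A minimal sketch of the modelling step described above, assuming the 20 completed datasets are available as data frames: each imputed dataset is fitted with school-clustered robust standard errors and the coefficients are pooled with Rubin's rules. The variable names and the normal-approximation confidence interval are illustrative assumptions, not the authors' Stata code.

import numpy as np
import statsmodels.formula.api as smf

# Hypothetical variable names for the Year 4 analysis (Model 2 covariates).
formula = ("child_mvpa ~ parent_mvpa + child_age + child_gender + bmi_z "
           "+ imd + siblings + parent_age + parent_bmi")

def pool_rubin(imputed_datasets, term="parent_mvpa"):
    """Fit the model in every imputed dataset and pool one coefficient (Rubin's rules)."""
    estimates, variances = [], []
    for df in imputed_datasets:                     # list of 20 pandas DataFrames
        fit = smf.ols(formula, data=df).fit(
            cov_type="cluster", cov_kwds={"groups": df["school"]})
        estimates.append(fit.params[term])
        variances.append(fit.bse[term] ** 2)
    m = len(imputed_datasets)
    q_bar = np.mean(estimates)                      # pooled point estimate
    w = np.mean(variances)                          # within-imputation variance
    b = np.var(estimates, ddof=1)                   # between-imputation variance
    se = np.sqrt(w + (1 + 1 / m) * b)               # total variance, Rubin's rules
    return q_bar, q_bar - 1.96 * se, q_bar + 1.96 * se

# Example use (assuming `imputations` holds the 20 completed datasets):
# beta, lo, hi = pool_rubin(imputations)
# print(f"parent MVPA coefficient: {beta:.2f} ({lo:.2f} to {hi:.2f}) mins/day")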
We fitted models for boys and girls combined, as well as separately by gender, and compared point estimates and their 95% confidence intervals between girls and boys, as well as computing a Wald test to assess evidence of interaction by gender. In Model 1 we adjusted only for the child's gender and age. Model 2 was additionally adjusted for the child's BMI z-score, household IMD score, number of siblings and the parent's age and BMI. The covariables measured at Year 4 were used for the models in which parent's physical activity at Year 4 was the exposure. Covariables measured at Year 1 were used for models which analyzed parent's physical activity at Year 1, or change in parent's physical activity between Year 1 and Year 4, as the exposure. Regression analyses were repeated, restricting to children and parents who had complete data for all exposures, outcomes and covariables, and compared with the multiple imputation analysis. All analyses were performed in Stata version 14.0 (StataCorp, 2015).
Results
The characteristics of all children and parents who participated in Year 4 and those who participated in both Year 1 and Year 4 in the observed and multiple imputation datasets are shown in Table 1. The distributions of characteristics measured at Year 4 were comparable in the full set of all 1223 children who took part at Year 4 and the 685 who also took part in Year 1. Generally, the distributions of characteristics in the multiple imputation data were very similar to those in the observed data, with the exception of the change in male parents' sedentary and MVPA minutes per day between Year 1 and Year 4, for which the means differed and standard deviations were much higher in the multiple imputation data compared with the observed data. Table 2 shows the associations for parent and child sedentary time in the multiple imputation data (corresponding complete-case results are shown in Additional file 1: Table S1). The associations for parent and child MVPA in the multiple imputation data are shown in Table 3. Female parent MVPA at Year 4 was strongly positively associated with child MVPA at Year 4 in unadjusted and adjusted models, with similar-sized small associations in both boys and girls. However, female parent MVPA at Year 1 was not associated with child MVPA at Year 4. There was weak evidence that male parent MVPA at Year 4 was also associated with child MVPA.
Discussion
The findings in this paper demonstrate that there was a small association between the physical activity of parents and their Year 4 (8-9 years of age) child. Each minute of female parent MVPA was associated with an extra 10 s of child MVPA, while an extra minute of male parent MVPA was associated with only 5 extra seconds of child MVPA per day. In other words, every 10 min of female parent MVPA was associated with 1 min of child MVPA, while every 10 min of male parent MVPA was associated with 30 s. Conversely, female parents who were more sedentary at this time had children who were more sedentary, regardless of child gender, while male parents who were more sedentary specifically had more sedentary sons. These cross-sectional associations were not replicated in longitudinal analyses. Parents who had been more physically active three years earlier, when their child was in Year 1, did not have a more active child in Year 4, and changes in parents' physical activity and sedentary time did not correlate with changes in children's behaviors over the three years. There was only weak evidence that female parents who were more sedentary three years earlier had children who were more sedentary in Year 4.
Taken together, these data challenge the notion that parents' engagement with physical activity is an important determinant of their child's activity levels. In this study, there was little evidence that physical activity levels correlated more strongly in parent-child pairings of the same gender (i.e., that associations of the female parent's physical activity with that of their child were stronger in girls than in boys, or that associations of male parent's physical activity with that of his child were stronger in boys). The one exception was for male parent's sedentary time when the child was in Year 4, where an extra minute of parent sedentary time was associated with an extra 25 s of boys' sedentary time but with little difference in daughters' sedentary time. The data presented in this study for Year 4 children (8-9 years old) are broadly similar to previous cross-sectional studies, which have reported correlations of around 0.1 between parents' physical activity and the physical activity patterns of pre-school and young primary school age children [20,22,26]. Collectively, these findings suggest that there are very small associations between the physical activity and sedentary time of parents and children which may be a product of shared behavior such as walking to school or shared sedentary time during meals or homework, but overall the magnitude of associations is weak. The mean minutes of parental MVPA were 48 min for mothers and 55 min for fathers. Thus, while there is strong evidence against the null hypothesis for these associations, the magnitude of association is very small and suggests that targeting parent activity to increase the child's activity at Year 4 is unlikely to yield any potential health benefit at either the individual or population level. It is important to recognize that other forms of parental influence, such as providing logistic support for physical activity by enrolling children in activity programs and creating activity opportunities for children, have consistently been associated with a larger magnitude of increased physical activity among both boys and girls [12,[37][38][39][40][41][42]. Findings therefore suggest that simple strategies that focus on encouraging parents to be active at the same time, together with their child, are unlikely to be sufficient to increase child physical activity [43]. More sophisticated strategies that take account of the key variables that influence both parent and child physical activity are likely to be required to change both behaviors. The data presented in this paper suggest that there is no evidence of long-term association between the physical activity or sedentary time of children and their parents, and that change in parent behavior is not associated with change in child behavior. The lack of association could be because children and parents do not spend large amounts of time active together, with one GPS study reporting that parents and children spend only 2.4 min per day doing activity at the same time [18]. For example, parents may get the majority of their activity from walking and commuting while child activity may occur separately at school, in sport groups or more general active play [43][44][45][46][47]. The time that parents and children spend together may be very good for their relationship but it is likely to get greatly diluted by a range of other activities that they do separately.
These findings do not downplay the potential importance of parent-child activity time as a source of fun, bonding, learning about rules and social development but may suggest that is not a big contributor to overall activity from a health perspective. The evidence presented in this paper highlight a need to study the broader ways in which parents may influence their children's physical activity. Potential mechanisms could be parenting practices (what a parent does), parenting styles (how messages are delivered), as well as a wide range of environmental factors such as access to green space, and psychosocial factors such as positive reinforcement and modelling of active behaviors. This wide range of variables may not be captured by individual theories of behavior change and are likely to require the development of more nuanced, parent-based models of physical activity promotion. The Family Ecological Model is one such model that has been applied to obesity prevention [48] and holds promise as a potential framework which could be adapted to focus specifically on understanding the ways in which parents influence child physical activity. As such, for the field to progress there is a need for the key elements of the framework, for which there is sufficient evidence, and the key evidence gaps, for which more empirical work is required, to either support or refute each variable's role as a potential key predictor of child physical activity. In addition, there is also a need to develop new analytical frameworks for the assessment of these complex interactions which may not be immediately amenable for assessments via current methods. For example, it has recently been suggested that the lack of success of individual-focused interventions (such as physical activity) could be due to the failure to take account of the broader systems-level influences on behavior, and the ability of the system to adapt to interventions, thereby mitigating any effect that might be identified by current methodologies [49,50]. This more complex and theoretically challenging work is likely to be needed to understand the very sophisticated and multi-layered human interactions between parents and their children which support or undermine physical activity. Strengths and limitations The major strength of this study is the objectivelyassessed physical activity data for children and their parents at two time points (Year 1 and Year 4). This has facilitated an examination of how parental MVPA and sedentary behavior at Year 1 is associated with child behavior at Year 4, as well as advancing the cross-sectional information by providing new information on Year 4 children. The study is however, limited by the provision of data from a single UK city area. We are unable to state that the data are representative of this region as we do not have data from non-responding schools, which limits our ability to generalize to other countries and contexts. As with all longitudinal studies, a proportion of the data were missing and this was higher for analyses involving both time points of data collection. We used multiple imputation to increase precision and potentially reduce bias in our estimates compared with analysis restricting to individuals with complete data. This assumes that data are missing at random, i.e., that any reasons for missingness can be explained by observed data [51]. 
It is not possible to test this assumption, but have included all exposures, outcomes, covariables and any variables that are predictive of missingness in our imputation models in order to increase the plausibility that it is correct. Finally, we used a hip worn accelerometer to identify sedentary time. There is currently a debate within the field [52] as to whether more nuanced definitions of forms of sedentary behavior are required, but as specific forms of behavior cannot be detected by accelerometer, further partitioning of the data into forms of sedentary behavior was not possible in this study. Conclusions Our results challenge the notion that parental activity levels will influence their child's physical activity and sedentary time, and suggest that interventions that aim to increase children's activity levels by increasing their parent's levels are unlikely to have marked impact on improving population levels of childhood activity. Additional file Additional file 1:
2018-04-03T04:12:44.189Z
2017-08-17T00:00:00.000
{ "year": 2017, "sha1": "f05bd3b4c972020a563d09d74bf8cda943b82f45", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12966-017-0565-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f05bd3b4c972020a563d09d74bf8cda943b82f45", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238532969
pes2o/s2orc
v3-fos-license
True Aneurysm on Posterior Tibial Artery as Late Complication of SARS-CoV-2 We describe the story of a 70- year-old Italian male that almost 4 months later respiratory infection by SARS-CoV-2 presented a rapid evolution of a true aneurism of the right posterior tibial artery (PTA). INTRODUCTION Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a global pandemic. The development of an acquired thrombophilia with activation of the coagulation cascade in response to the inflammatory process has been described. SARS-CoV-2 appears to have an affinity for vascular endothelium due to angiotensin-converting enzyme 2 receptors that seem to be downregulated, which may drive the proinflammatory and/or prothrombotic and vascular consequences. 1 We describe the story of a 70-year-old Italian patient who presented 4 months after COVID-19 infection with a true aneurysm of the posterior tibial artery (PTA). CASE PRESENTATION A 70-year-old male was referred to the vascular outpatient clinic with history of a pulsatile lump behind his right ankle over the last 2 months. Apart from the presence of a lump, he denied any other complaints including any violent or repeated trauma. He also suffered from hypertension. He was not taking any antiplatelet or anticoagulation medications and denied any use of tobacco. He was hospitalized on April 2020, as the result of acute hypoxic respiratory failure due to communityacquired pneumonia, with positive results for SAR-CoV-2 infection in patient's reverse transcriptionpolymerase chain reaction (RT-PCR) test from nasopharyngeal swab. Doppler at that time had not revealed superficial or deep venous thrombosis or other abnormalities. On At physical examination a 3 × 4 cm size lump just behind the right medial malleolus was identified, which was non-tender and pulsatile. The patient confirmed a rapid increase of the dimensions during the last month. The peripheral pulses were easily palpable on either side. There was no evidence of aneurysm anywhere else in the body on clinical examination. He underwent Doppler ultrasound examination which confirmed 1.6 cm size pseudoaneurysm of PTA with presence of circumferential mural thrombus. The distal and proximal parts of PTA, anterior tibial artery and popliteal artery were normal. He underwent an operation in the form of excision of aneurysm ( Fig. 1 ) followed by endto-end anastomosis of PTA ( Fig. 1 ). Histology confirmed true aneurysm of PTA with mural thrombus attached to the intima of the vessel. The arterial wall did not show evidence of connective tissue disorders. Pathology specimen was examined using hematoxylin and eosin (H&E) stain. The specimens were assessed for inflammatory cells associated with endothelium and/or apoptotic bodies, mononuclear cells, small vessel congestion, and lymphocytic endotheliitis, mainly T CD3 + . The bacteriology examination did not reveal any organisms nor grown any organisms in the culture media. He recovered well postoperatively and was discharged on forth postoperative day. At follow up at one year, he did not develop any complications and color Doppler revealed patent PTA. DISCUSSION Aneurysms are more common in the proximal arteries such as femoral and popliteal arteries compared to distal small vessels. It is a rare pathology of the infra-popliteal region, where false aneurysms are more common and are usually associated with trauma. 
2 The true aneurysms of infrapopliteal region are extremely rare and most of them have been reported to be associated with trauma, collagen vascular pathology, fibromuscular dysplasia, inflammation, infection, and atherosclerosis. 2 False aneurysms are more common in comparison to the true aneurysms even in infrapatellar blood vessels. The precise etiological factors are not identified, but trauma is suggested as possible causality. 2 Italy was the first western country to be stricken by the coronavirus pandemic. The most common COVID-19 symptoms of the disease at onset were fever, fatigue, dry cough, dyspnea, runny nose, or other upper respiratory tract symptoms. Ageusia and anosmia were also found to be characteristic symptoms, albeit with more rare presentation, while gastrointestinal symptoms, focal neurological deficits as a result of strokes arising from thrombosis and thromboembolism, account for a minority of cases. Cutaneous manifestations with an array of morphologies have also been documented. 3 The demonstration of endothelial cell injury across vascular beds of different organs gives light to unexplained symptoms and clinical courses described in early reports of the COVID-19 pandemic. In particular, histological analysis revealed that the presence of the virus within endothelial cells was associated with clusters of inflammatory cells. This finding suggests that SARS-CoV-2 infection initiates endothelial inflammation throughout the entire human organism, as well as apoptosis, something that explains the systemic macro and microcirculatory involvement in different vascular beds and their clinical sequelae in patients with COVID-19. 4 Moreover, evidence of viral endothelial injury helps to explain why patients with pre-existing cardiovascular disease are particularly associated with adverse outcomes in COVID-19. Since the initial reports, an increase in circulating D-dimer levels has been reported. The addition of systemic proinflammatory cytokines release as a consequence of endothelial inflammation, as well as the expression of the ACE2 receptors for SARS-CoV-2 on the membrane of the vascular muscle and endothelial cells, may help to explain why COVID-19 patients are also susceptible to arterial thrombosis, even in young non-arteriosclerotic individuals. 4 Vascular biomarkers confirm that COVID-19, a disease initially thought to be exclusively an interstitial pneumonia with varying degrees of severity, can also be considered a vascular disease. 4 In case reports published of PTA aneurysms, false and true, etiology suggested was trauma, infections or unknown. Degenerative changes lesions secondary to mycotic infection, polyarteritis nodosa are also described as responsible for true aneurysm. In our patient, we could not find any etiological factor, such as trauma, atherosclerosis or connective disorders 2 , 5 The only recent inflammatory event in our patient was the SARS-CoV-2 infection. We are living an historical era due to the ongoing global pandemic, and we are all inclined to correlate situations without a certain causality to SARS-CoV-2. This patient is one of those cases in which we find no other etiological correlation with the recent onset of the PTA pseudoaneurysm. On the other hand, however, the presence of lymphocytic infiltrate in the wall of the PTA, already highlighted as a histological finding in certain cases of vascular damage in SARS-CoV-2, and not reported in other cases of pseudoaneurysms in literature, can confirm our doubt. 
6 Additionally, the negative bacteriology is important in determining this diagnosis, since mycotic aneurysms, which is on the differential, tend to be positive for bacteria in most cases. The management options vary from conservative approach to surgical excision followed by reconstitution of PTA. Due to very limited number of published cases, a standard treatment has not been defined. Although ligation of posterior tibial artery may be performed, especially in emergency settings, surgical excision with posterior tibial artery reconstitution either by primary repair or by interposition vein graft is the preferred treatment. Endovascular embolization and percutaneous occlusion of aneurysm with various modalities are more commonly used in pseudo-aneurysms and are associated with risk of limb ischemia. 2 , 5 In this case, patient underwent surgical excision followed by end-to-end anastomosis. The anterior tibial and pedidis arteries were intact and one might question the need for operation in this report, however we believe that those aneurysms should be treated irrespective of symptomatology due to the risk of embolization, thrombosis, and rupture leading to potential ischemia and amputation. The vascular community should be aware of this new complication in critically ill patients with COVID-19. The finding in this patient is likely an infectious angiitis due to COVID 19, and surgery showed to be a valuable treatment option.
Spirituality and mood pathology in severe skin conditions: a prospective observational study Although the association between spirituality and parameters of psychological health and disease has been investigated extensively, little evidence is available for its potential role in dermatology. In a single-centre observational prospective study, 149 outpatients (107 women) with systemic sclerosis (SSc; n = 44), lupus erythematosus (LE; n = 48), or early stage malignant melanoma (MM; n = 57) were investigated using the multidimensional inventory for religious/spiritual well-being together with the Brief Symptom Inventory for psychiatric symptoms (BSI-18). SSc patients reported the highest amount of Somatization in comparison with LE and MM patients (p < 0.05). Furthermore, in line with the previous research, spiritual dimensions, such as Hope for a better future (p < 0.01) or Hope for a better afterlife (p < 0.01), proved to be especially negatively predictive for the global amount of psychiatric symptom burden in these dermatological patient groups. Our findings suggest that greater attention should be given to spiritual issues, such as encouraging patients, imbuing them with optimism, and offering interventions that address spiritual well-being. Introduction In recent years, there has been a growing interest in finding the link between religious/spiritual issues and improved quality of life in severe illness and their expected salutogenic effect on well-being in this setting [15]. Accordingly, it has been suggested that the bio-psycho-social model of health and disease might be fruitfully augmented by considering a religious/spiritual component in handling patients [11]. This notion, however, has also been considered speculative or its role exaggerated [17]. In any case, the influence of the spiritual dimension in skin disease, as being related to different parameters of mental health, is still poorly investigated, although this area of research seems to have great potential [8]. Religion as being related to various parameters of mental health and illness was prominently described as ''the search for significance in ways related to the sacred'' (p. 11) [13]. Consequently, spirituality, as one of the key elements of religion, was overwhelmingly confirmed to be negatively related with mood pathology [12]. Mood disorders as being detrimental to quality of life, in turn, were reported in approximately 30 % of dermatology patients [1,7]. More recently, it was reported in a large multicentre study performed in 13 European countries that of the 3635 dermatological patients who participated in the study, depression was present in 10.1 %, anxiety in 17.2 %, and suicidal ideation in 12.7 % of the cases [5]. Given this data, the purpose of the present study was to do pioneering work in addressing the anti-depressive potential of spirituality with regard to dermatologic patients. We were particularly interested in finding out which of the various dimensions of religion and spirituality, as examined by the multidimensional inventory for religious/spiritual well-being, were related to mood pathology and psychiatric symptoms. Study design and participants With the expertise of authors specialising in dermatology, we conducted a prospective single-centre observational study with consecutive patients from the outpatient clinic for autoimmune disease, oncology, and the day clinic at the department of dermatology and venereology, Medical University of Graz, between March and October 2013. 
Ethics approval was obtained from the ethics committee of the Medical University of Graz, Austria (25-280 ex12/ 13). Patients with systemic sclerosis (SSc) and lupus erythematosus (LE) (both chronic and potentially lifethreatening diseases), and those with early stages of malignant melanoma (MM) without metastasis but with an unknown disease course were administered questionnaires by two members (ML and SS) of our research group and were asked to complete them after written consent was obtained immediately after their doctor's appointment. The participants received no remuneration or reward for participation. SSc was classified according to Le Roy's criteria [10] and LE according to the Duesseldorfer classification and the American College of Rheumatology criteria (ACR) [9,20]. MM patients had stage I-II melanoma diagnosed by histologic criteria and sentinel node biopsy. Psychometric measures The Austrian-German standardized and normed multidimensional inventory of religious/spiritual well-being (MI-RSWB) [23] measures spiritual well-being which is a subjective phenomenon, consisting in equal measures of existential well-being (EWB) for the immanent area of perception, such as Hope immanent (HI), forgiveness (FO) and experiences of sense and meaning (SM), and religious well-being (RWB) for the transcendent area, such as general religiosity (GR), connectedness (CO), and hope transcendent (HT). In addition, marker items are given as examples to illustrate the meaning of the different dimensions: general religiosity: ''My faith gives me a feeling of security''; connectedness: ''I have experienced the feeling of being absorbed into something greater''; forgiveness: ''There are things which I cannot forgive'' (coded reversely); experiences of sense and meaning: ''I have experienced true (authentic) feelings''; hope immanent: ''I view the future with optimism''; hope transcendent: ''I often think about the fact that I will have to leave behind my loved ones'' (coded reversely). The total amount of all six sub-scales, Religious/Spiritual Well-Being (RSWB), has been defined as ''the ability to experience and integrate meaning and purpose in existence through a connectedness with self, others, or a power greater than oneself'' (p. 117) [21]. Each sub-scale consists of six items (thus a total score of 48 items) and has to be answered on a six-point Likert scale ranging from 1 to 6. In the previous research, Cronbach's a was found to be at least 0.68 for all the sub-scales and 0.89 for the RSWB total score [23]. A full item list for the English version of the scale together with a short manual can be retrieved from Unterrainer et al. [22]. The Brief Symptom Inventory-18 (BSI-18) is a short version of the highly established Symptom Checklist SCL-90-R [6]. The amount of psychiatric burden in three dimensions of psychiatric symptoms (somatization, depression, and anxiety) for the preceding 7 days is assessed by means of 18 items (six items for each subscale). The BSI-18 includes a five-point rating form ranging from 1 (absolutely not) to 5 (very strong). It is also possible to collate the 18 items into a total score: the Global Severity Index (GSI) of psychiatric symptoms. In the previous research, Cronbach's a was observed to be at least 0.79 for all the sub-dimensions and 0.91 for total Global Severity Index score (GSI) [6]. Statistical methods Univariate and multivariate general linear models were conducted to investigate differences in mood pathology between the different dermatological groups. 
The correlation between the RSWB dimensions and mood pathology was tested by Pearson's correlation statistics. For in-depth analysis, linear regression modelling was used for those RSWB dimensions predicting the global psychiatric symptom burden (GSI). Due to the exploratory nature of the study, the alpha level of significance was set to 0.05. The subtypes of SSc were limited (59 %), diffuse (16 %), undifferentiated (14 %), and primary Raynaud's phenomenon (11 %). Of LE patients, 71 % had cutaneous LE and 29 % had systemic LE. Sentinel node biopsy was performed on 62 % of MM patients, and no metastases were detected. More detailed patient data were published in a recent paper [14]. As shown in Table 1, we did not observe any differences in global psychiatric symptom burden as determined by the BSI-18 among the patient groups; however, SSc patients exhibited a higher amount of Somatization compared to MM patients (p < 0.05). Compared to a group of inpatients undergoing psychotherapy [6], our group of skin disease patients showed, in general, a substantially lower amount of mood pathology (p < 0.001). As revealed by Table 2, hope for a better future (HI; p < 0.01) was observed to be the strongest negative predictor of global psychiatric symptom burden (GSI). General religiosity (GR; p < 0.01) and hope transcendent (HT; p < 0.01) showed a minor but still significant impact on patients' condition. Notably, there was a positive correlation between the Connectedness dimension and mood pathology (p < 0.01). Overall, the assumption of a negative correlation between RSWB and the GSI was confirmed; however, this is mainly due to the contribution of the more existentially oriented dimensions of well-being (EWB; p < 0.01), while the more religiously oriented parameters (RWB) did not show any significant effect (p > 0.05). Discussion In this study, we focused on the role of spirituality and its function in the mood stabilization of different patient groups with severe skin diseases. Our main finding was that all parameters of mood pathology were negatively associated with existential and, to a minor extent, with religious well-being. Most significantly, hope immanent (for a better future) was confirmed as the strongest negative correlate of mood pathology. More unexpectedly, hope transcendent (for a better afterlife) also turned out to be a distinct predictor of more adequate adjustment to a chronic/life-threatening skin disease. In line with previous findings [1], we observed an increased amount of somatization in SSc compared to MM patients. This could also be the cause of the reduced physical well-being published recently [24]. Interestingly, we found that the dimension of connectedness (feeling connected with the universe) was positively related to all parameters of psychiatric symptom burden. This is in contrast to recent research, which has mostly demonstrated the salutogenic effect of the feeling of "being connected" in anxious/depressive inpatients, assessed by means of different parameters such as sense of coherence or more adequate coping strategies. However, in conjunction with previous work, we assumed that feeling connected to the universe might mirror feelings of alienation and isolation from the real world in this specific patient group [23]. In more recent studies, we could show that patients with a skin disease exhibited a substantially lower level of experiences of sense and meaning (p < 0.001) in comparison with the general population.
Patients with LE had a lower level of religious/spiritual well-being (p < 0.001), while patients with LE and MM had a lower level of general religiosity (p < 0.001) and religious well-being (p < 0.001). By contrast, MM patients exhibited a higher level of forgiveness (p < 0.01) and hope transcendent (p < 0.01) and a lower level of connectedness (p < 0.001). No differences were found for hope immanent and the amount of existential well-being in skin disease patients compared to the general population [24]. In line with previous research [23], we conclude that our initial results, concerning different groups of patients with severe skin conditions, confirm the hypothesis that the spiritual dimension has a high therapeutic potential in clinical treatment. However, these results still have to be confirmed in larger samples incorporating centres in other countries and different dermatologic patient groups, such as metastasizing melanoma, psoriasis, or atopic dermatitis. Further studies might also consider the fact that other variables, such as personality traits, could mediate or moderate the correlations, in addition to poor QoL, which might also have influenced the outcome of the current study in SSc and LE patients [3]. Therefore, more research is needed to make a more general statement about the role of spirituality in skin diseases. The dimension of hope, especially, seems to be of central relevance for the mood stability of patients dealing with these severe skin diseases. Further research might now focus on the development of spiritually integrated therapeutic interventions, such as supplying patients with hope or discussing existential questions, as a potential treatment for dermatological patients [2]. In fact, there is already a substantial literature on how to integrate spiritual dimensions, such as hope or sense of meaning, most effectively into patient treatment [19]. Hope therapy [18] has been reported to increase some psychological strengths and reduce some symptoms of psychopathology [4]. Furthermore, the core dimension of existential therapy [16], finding a meaning in life as a basis for psychological well-being, is prominently discussed in the literature [25].
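As a reading aid, the scoring and regression analysis described in the Psychometric measures and Statistical methods sections could be reproduced along the following lines. This is only a minimal sketch: the paper publishes neither code nor raw data, so the file name, column labels, item keying, and the use of item sums for the sub-scale scores are assumptions made for illustration.

```python
# Illustrative sketch only: column names (e.g. "HI_1" ... "HI_6", "BSI_1")
# and the CSV layout are assumptions, not the study's actual data files.
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

df = pd.read_csv("rswb_bsi18.csv")  # hypothetical per-patient item responses

# MI-RSWB: six sub-scales x six items, 6-point Likert scale (1-6);
# reverse-keyed items (e.g. in FO and HT) are recoded as 7 - x.
def subscale(frame, items, reverse=()):
    vals = frame[items].copy()
    for col in reverse:
        vals[col] = 7 - vals[col]
    return vals.sum(axis=1)

df["HI"] = subscale(df, [f"HI_{i}" for i in range(1, 7)])
df["HT"] = subscale(df, [f"HT_{i}" for i in range(1, 7)],
                    reverse=[f"HT_{i}" for i in range(1, 7)])  # assumed keying
# ... GR, CO, FO, SM analogously; RSWB total = sum of the six sub-scales.

# BSI-18: Global Severity Index as the sum of all 18 items (1-5 scale).
df["GSI"] = df[[f"BSI_{i}" for i in range(1, 19)]].sum(axis=1)

# Pearson correlations between RSWB dimensions and the GSI.
for dim in ["HI", "HT"]:
    r, p = pearsonr(df[dim], df["GSI"])
    print(f"{dim} vs GSI: r = {r:.2f}, p = {p:.3f}")

# Linear regression of the GSI on RSWB dimensions (alpha = 0.05).
fit = smf.ols("GSI ~ HI + HT", data=df).fit()
print(fit.summary())
```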
Universal Fermi liquid crossover and quantum criticality in a mesoscopic device Quantum critical systems derive their finite temperature properties from the influence of a zero temperature quantum phase transition. The paradigm is essential for understanding unconventional high-Tc superconductors and the non-Fermi liquid properties of heavy fermion compounds. However, the microscopic origins of quantum phase transitions in complex materials are often debated. Here we demonstrate experimentally, with support from numerical renormalization group calculations, a universal crossover from quantum critical non-Fermi liquid behavior to distinct Fermi liquid ground states in a highly controllable quantum dot device. Our device realizes the non-Fermi liquid two-channel Kondo state, based on a spin-1/2 impurity exchange-coupled equally to two independent electronic reservoirs. Arbitrarily small detuning of the exchange couplings results in conventional screening of the spin by the more strongly coupled channel for energies below a Fermi liquid scale T*. We extract a quadratic dependence of T* on gate voltage close to criticality and validate an asymptotically exact description of the universal crossover between strongly correlated non-Fermi liquid and Fermi liquid states. Some materials, notably CeCu 6−x Au x and YbRh 2 Si 2 , defy easy description in this scheme, and their quantum critical behavior instead appears to be related to the breakdown of Kondo screening. 7 Distinctive non-Fermi liquid behaviors appear above a so-called Fermi liquid (FL) scale that vanishes at the quantum critical point (QCP); away from the QCP, a crossover from non-FL to FL behavior is observed at low energies. A diverging effective mass m* at the QCP, seen in both materials, signifies the absence of quasiparticles at the Fermi surface. 8 In many heavy fermion materials and in high-Tc superconductors, the relevant degrees of freedom and the effective Hamiltonian can be controversial. We aim to understand quantitatively a second-order QPT outside the usual order-parameter-fluctuation description. Quantum dots provide an experimental framework for realizing known quantum impurity Hamiltonians that can feature tunable second-order QPTs. 9, 10 However, QCPs are challenging to reach even in engineered systems, since perturbations that steer away from quantum criticality may be inherently uncontrolled, as in two-impurity Kondo experiments to date. 11-13 At the QCP of a two-channel Kondo (2CK) system, a single overscreened spin yields a non-FL state with no quasiparticles (i.e. only collective excitations) at the Fermi surface. An order parameter is typically not invoked; rather, the critical behavior is due to the single spin. A FL scale T* results from several relevant perturbations: Zeeman splitting, difference in exchange couplings, and charge transfer between the two channels. Requiring that all these perturbations be small would seem to diminish prospects for observing the QCP in bulk systems. Nonetheless, two-channel Kondo physics has been invoked to explain experiments on heavy fermion materials 14-16 and two-level tunneling centers. 17-19 A 2CK state has been predicted 2 and observed 3 in a quantum dot tunnel-coupled to a "metallic grain," an electron reservoir big enough to have a small level spacing ∆ ≪ kT but small enough to retain a charging energy E C ≫ kT, at temperatures of interest.
The metallic grain provides an independent screening channel, as the grain's charging energy strongly suppresses inter-channel charge transfer. Non-FL behavior was observed, as were the FL single-channel Kondo states far from the QCP, but the crossover to those FL states was not explored. The universal crossover functions were however calculated by NRG. 20 Recently, a description of the crossover has been found using Abelian bosonization and conformal field theoretical methods, yielding asymptotically exact predictions for conductance in the regime where V, T, T * T K . 5,6 In this work, we show how fine control over the 2CK state in a mesoscopic device allows direct comparison to exact results in the crossover regime, yielding T * as a function of gate detuning away from the QCP. The device (Fig. 1a) is fabricated by lithographically patterning gate electrodes on a GaAs/Al 0.3 Ga 0.7 As heterostructure hosting a two-dimensional electron gas. The device is abstracted in Fig. 1b. Despite the number of gates, the device is conceptually simple (Fig. 1c): a metallic grain (red) and two leads (blue) are each tunnelcoupled to a quantum dot (green) at rates Γ G and Γ, respectively. The charging energy is U (E C ) for the dot (grain) (full Hamiltonian in supp. info). In this experiment, two-terminal conductance G = dI/dV sd is measured between the pair of leads (Methods sec. 1). We use V BWT (V LP ) to tune the grain level φ (dot level ε). (b) Schematic of the device with labeled gate electrodes. Gates BWT, BP, and BWB define the grain (red) along with LBT and LBB; the last two also control the dot-grain coupling. Gates LWT, LP, and LWB define the dot (green), along with LBT and LBB. Gates BR are used to isolate the dot measurement circuit. Other gates are held at a fixed voltage throughout the experiment. Conductance is measured between source and drain leads (blue). The four gray stars indicate additional ohmic contacts which are floated during measurement. (c) Model of the system used for the NRG calculations. Γ g is the dot-grain coupling, Γ the total dot-lead coupling (sum of couplings to source and drain leads). The source and drain leads together act as one channel in the spin 2CK regime, and the Coulomb-blockaded grain acts as an independent channel. Full Hamiltonian in supp. info. We first identify the set of QCPs in the (−ε/U, −φ/E C ) plane for fixed Γ, Γ G . For our model Hamiltonian, quantum critical "2CK lines" periodic in the grain charge are expected instead of isolated QCPs. 2,21,22 Figure 2a shows the 2CK lines overlaid on numerical renormalization group (NRG) calculations of G(−ε/U, −φ/E C ) using realistic device parameters. We focus on the spin 2CK regime, though charge fluctuations may be important elsewhere. 23 To directly compare to the experimentally measured conductance data of Fig. 2c, Fig. 2b adjusts the NRG calculations of Fig. 2a to account for the cross-capacitance between V LP -379 Fig. 3d. d) NRG calculations of the equilibrium spectral functions A(ω, T ) for ε, Φ as marked in b). The black trace is the spectral function A 2CK (ω, T, δ P ) from CFT (δ P = −0.029π, T K = 19 µeV). e) Measured G(V sd , T ) for V LP , V BWT as marked in c). The black trace is Y 2CK (ω/T, δ P )/ √ T K , rescaled based on an estimate of source-drain coupling asymmetry (δ P = −0.016π, T K = 50 µeV). The range in (eV sd /kT ) decreases as temperature increases because we measure a fixed range in V sd . and the grain. 
To identify transport signatures of quantum criticality along the 2CK line, we look for the characteristic square-root scaling of G(V sd , T ) derived from the CFT of Affleck and Ludwig. 24 The CFT yields temperature-dependent spectral functions A 2CK (ω, T, δ P ), where δ P is a phase shift from potential scattering. These are closely related to G(V sd , T ) for ω → −eV sd (Methods sec. 3). A scaling collapse of G(V sd , T ) is expected: where T K is a scale below which the 2CK physics is observed and Y 2CK (−eV sd /kT, δ P ) a universal function closely related to A 2CK (ω, T, δ P ) (Methods sec. 4). Having identified the 2CK lines in Fig. 2, we consider how to perturb the quantum critical state. In the 2CK model, a single FL scale T * suffices to describe any combination of symmetry-breaking perturbations. 5 The limit ω, T, T * T K permits an exact expression for the scattering T -matrix in the low-temperature 2CK crossover, found by Sela, Mitchell, and Fritz. 5,6 In our experimental configuration the T -matrix is diagonal: with the universal complex-valued function G ω T * , T T * encoding the crossover physics. These diagonal elements relate to A(ω, T ) and thus to experimental G = dI/dV sd for highly asymmetric source-drain coupling. ν is the bare density of states per spin in the leads, σ is the spin index, and α = 1 (-1) labels electrons in the leads (grain). The S-matrix gives a (spin and channel dependent) scattering phase shift that is a function of the relative strengths of any perturbations present. Negligible charge transfer between channels and zero magnetic field yields S σα,σα = ±α, with +(−) indicating the dot is more strongly exchange-coupled to the grain (leads). The factor e 2iδ P accounts for additional spin-independent phase shifts from potential scattering. We fix S σα,σα = α and let δ P jump by π/2 to account for sign changes. To observe the FL crossover experimentally, we fix V LP = −236 mV (dashed line in Fig. 2c) and detune the exchange couplings using V BWT . Moving slightly away from the QCP so that T * ∼ T e , we still measure a √ T scaling collapse for T > 50 mK (Fig. 3a). These high T data are fit nicely using the Affleck-Ludwig CFT result with small δ P (black line). The clear scaling behavior at high-T can only be observed for V BW T in a small neighborhood around the QCP. Below 50 mK, prominent deviations from 2CK scaling develop, which we attribute to a crossover into a FL state where the grain screens the dot spin. Near zero bias these low-T traces are fit by the crossover theory with similar, small δ P (Fig. 3b). We stress this is a non-trivial regime since T * ∼ T e ; asymptotics of the FL fixed point are insufficient to describe the observed behavior. For larger |eV sd /kT | 1/2 , the |eV sd | 1/2 dependence of G(V sd ), appearing linear on these axes, heralds a return to 2CK behavior. Generically, T * should depend quadratically on the strength of symmetry-breaking perturbations near the QCP 5, 6 ( Fig. 3c). Measured G(V sd ,V BWT ) reveals periodic zero bias dips that transition sharply to zero bias peaks as V BWT is increased (Fig. 3d, top). The zero bias dip (peak) corresponds to a T = 0 ground state where the grain (lead) screens the dot spin; these are separated by a QCP. In Fig. 3d (middle), T * depends quadratically on V BWT away from the QCP, although the curvature differs between the two sides of the QCP, which have different ground states. 
This quadratic behavior holds over a larger range of V BWT than we might have expected considering that generically the exchange couplings do not depend linearly on gate voltage. The phase shift δ P ∼ 0 on one side of the QCP, and appears to approach π/2 on the other (Fig. 3d, bottom). Between QCPs, δ P varies smoothly. T * and δ P are not plotted directly to the right of each QCP, reflecting the ambiguity of fitting a small crossover peak on top of the 2CK peak. Both T * and δ P are insensitive to small changes in the range of V sd used for fitting (supplemental info). Many features of these observations are corroborated by fitting the crossover theory to spectral functions from NRG, which yield conductance via Eq. 3 (Methods). The NRG conductance (Fig. 3e, top) shows zero bias dips transitioning into peaks, as well as the shift of the peak toward positive −ω, as in transport spectroscopy (Fig. 3d). The φ-dependence of T * (Fig. 3e, middle) shows asymmetric parabolas like in the measurements. The extracted δ P (Fig. 3e, bottom) reproduces the rapid π/2 phase shift across a QCP, with an otherwise smooth φ-dependence. The π/2 shift reflects a sign flip in S σα,σα between distinct FL ground b) Same data in a). G(V sd ) at low energies is fit to thermally broadened spectral functions from the crossover theory (top: 20 mK, bottom: 40 mK; δ P = −0.045π, T * = 0.5 µeV). Fitting details in Methods. c) Quantum criticality occurs for energies above the Fermi liquid scale T * (gray paraboloid), which should depend quadratically on the coupling asymmetry J 1 − J 2 between the two channels as well as on the Zeeman splitting E Z . We vary T * by tuning J 1 − J 2 (cut along red parabola). d) Extraction of T * and δ P from measurements. The triangle denotes V BWT for a) and b). Top: G(V sd , T = 20 mK). Middle: T * from crossover theory fits to experimental G(V sd , T ). Red traces are parabolas with T * = 0 at the QCP and unequal scale factors on either side of the QCP. The largest T * values may not be much less than T K , so the crossover theory is not strictly valid for all V BWT . Labels indicate approximate QCP locations. Bottom: δ P from the crossover theory fits. Error bars reflect 1 s.d. confidence intervals from the fits. e) Extraction of T * and δ P from NRG calculations. Parameters as in Fig. 2. Top: G(−ω), rescaled to match maximum G of d). Middle: T * . Bottom: δ P . states, where either the grain or leads screen the dot spin. 25 A perfect correspondence between experiment and NRG should not be expected, since only U and E C may be extracted directly from measurements. Yet both experiment and NRG are well described by the crossover theory, and key experimental features are reproduced in the NRG calculations. The experimental and numerical corroboration of analytical results in the vicinity of a QCP is a milestone in our understanding of correlated electron systems, with implications for high-T c superconductivity and heavy fermions. An essentially identical universal crossover The biased source lead in any source-drain bias spectroscopy was determined to be weakly coupled to the dot: At zero bias, we pinch off the source lead's coupling to the dot Γ s (e.g. using V LWT ) and observe a decrease in the overall conductance scale, without appreciable changes in the conductance features after accounting for capacitive shifts from gating. This implies that the unbiased drain lead's coupling to the dot Γ d largely determines the total dot-lead coupling rate, since Γ = Γ s + Γ d ≈ Γ d , i.e. 
the dot was nearly in equilibrium. In comparing Fig. 2a and Fig. 2c, the ratio of maximum conductances is 0.464. If these numerical and experimental data are assumed to be directly comparable, then the asymmetry It is well known that applying source-drain bias will cause unintentional gating as a secondary effect. This would be deleterious to observing quantum critical behavior, which depends sensitively on the dot and grain levels. We compensate for shifts in the grain level by compensating changes in V sd with changes in V BWT . This compensation can be determined easily in the regime ε/U > 0 or ε/U < −1. We expect the grain level to be much more sensitive than the dot level for the same change in energy since E C U . Fitting range When fitting the crossover theory to experimental data, we fit G(V sd , T ) only in a small window of V sd of ±6 µV around zero, regardless of temperature. A priori, T * is unknown and it only makes sense to fit V sd < a few T * . Additionally, thermal broadening of high energy features can in principle spoil the scaling of the low energy features, even for otherwise sensible ranges of V sd . At minimum the 20 and 40 mK traces are used for fitting, but sometimes also the 52 mK and possibly the 70 mK traces, provided T T * (the fitting process is somewhat iterative in this respect). Once the temperatures to be used in fitting are decided for a given value of V BWT , the fitting considers data from all of those temperatures simultaneously. Fitting the crossover theory to NRG calculations is done analogously (window of ω of ±6 µeV about zero). 3 Relationship of G(V sd , T ) to spectral functions The differential conductance G = dI/dV sd measured from source to drain lead through the small dot is a function of source-drain bias and temperature and can be compared directly to NRG calculations in case of a strongly asymmetrical source-drain coupling. In the case of weak coupling to the biased source electrode (Γ s Γ d ), the differential conductance can be related to the equilibrium spectral function as The asymmetry prefactor is a function of the source and drain couplings, Γ s and Γ d , and is assumed to be much smaller than one. Either lead may assume the role of source or drain. The derivative of the Fermi-Dirac distribution f (ω, T ) is convolved with a spectral function A(ω, T ) from the 2CK or crossover descriptions. The spectral function can be related to the T -matrix: where ν is the bare density of states in the leads, σ is a spin index, and α is a channel index (we fix α = −1 for the source and drain leads). The T -matrix represents the scattering between different states induced by the interaction part of the Hamiltonian and can be computed numerically exactly by NRG. It is related to the quasiparticle self-energy. 31 Fitting expressions for 2CK In equilibrium, the conduction electrons' scattering T -matrix is proportional to the selfenergy. In case of the quantum dot system considered here, the latter quantity translates to the Green's function of the d-level of the small dot. This allows us to use the exact S-matrix at the 2CK fixed point 24 and express the equilibrium spectral function of the small dot in the limit T * ω, T T K as where 2 F 1 (a, b; c, z) is the Gauss hypergeometric function, β is inverse temperature, and δ P is the scattering phase shift. We fix the dimensionless parameter λ = −0.09 so that the spectral function drops to half of its ω = 0 value at ω = T K in the limit T → 0. 
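A numerical sketch of the thermal-broadening step described in Methods sec. 3 above, convolving an equilibrium spectral function with the derivative of the Fermi-Dirac distribution to obtain G(V_sd, T), is given below. The toy spectral function, the prefactor, and the grids and units are placeholders for illustration; they are not the quantities used in the paper's fits.

```python
# Sketch of the thermal-broadening step used when comparing a spectral
# function A(omega, T) with measured G(V_sd, T) (cf. Methods sec. 3).
# The toy A(omega, T), prefactor and grids are illustrative placeholders.
import numpy as np

kB = 8.617e-2  # Boltzmann constant in meV/K (energies in meV below)

def dfermi_domega(omega, T):
    """-df/domega of the Fermi-Dirac distribution (width ~ 3.5 kT)."""
    x = omega / (2.0 * kB * T)
    return 1.0 / (4.0 * kB * T * np.cosh(x) ** 2)

def broadened_conductance(A, V_sd, T, prefactor=1.0, n=2001, span=30.0):
    """G(V_sd, T) ~ prefactor * integral domega (-df/domega)(omega - eV_sd) A(omega, T)."""
    omega = np.linspace(-span * kB * T, span * kB * T, n) + V_sd
    kernel = dfermi_domega(omega - V_sd, T)
    return prefactor * np.trapz(kernel * A(omega, T), omega)

# Example with a toy Lorentzian spectral function standing in for A_2CK.
A_toy = lambda w, T: 1.0 / (1.0 + (w / 0.05) ** 2)
print(broadened_conductance(A_toy, V_sd=0.01, T=0.02))  # ~20 mK, 10 ueV bias
```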
20, 32 Equation (5) immediately implies that (A 2CK (0, T, δ P ) − A 2CK (ω, T, δ P )) T K /T is a universal function of ω/T , which when convolved with a Fermi function gives the function Y 2CK (−eV sd /kT, δ P ) of equation (1). We stress that this ω/T scaling is a special property of the 2CK fixed point. When fitting the experimental data, we shall assume an asymmetrical coupling to the leads (see Methods sec. 3). Fitting expressions for crossover At frequencies and temperatures far below the two-channel Kondo temperature T K , we can use the crossover form of the T -matrix derived in Refs. 5, 6 to express the d-level's equilibrium spectral function. Here we obtain the following expression: where δ P ≈ 0 (δ P ≈ π/2) in case the dot is coupled more strongly to the grain (leads), and is a universal function of rescaled energyω = ω/T * and temperatureT = T /T * . For equation (7) only, Γ is the gamma function, not a tunnel rate. Again, when fitting to experimental data, the spectral function must be thermally broadened (see Methods sec. 3). Competing financial interests The authors declare no competing financial interests. S6 Sensitivity of T * and δ P to fitting range 9 Figure S1: SEM micrograph of a nominally identical device (5 kV acceleration voltage). The device is tilted 40 • with respect to normal incidence. S1 Device The 2DEG is 50 nm deep and has an electron density n = 3.3 × 10 11 cm −2 and mobility µ = 1.2 × 10 6 cm 2 /Vs. Figure 1a shows a top-down SEM micrograph of the device. This view is appropriate for labeling the gate electrodes and explaining the function of each gate, but the air bridges are hard to see. In Fig. S1 we show a view of the device at a 40 • tilt with respect to normal incidence. The five air bridges clearly rise above the gate electrodes underneath. The device is rotated approximately 180 • with respect to the orientation of Fig. 1a. As initially fabricated, the bridges did not make good electrical contact to the gates. This problem was remedied with an in-situ platinum deposition procedure to be described in a forthcoming publication. 1 S2 Hamiltonian In our numerical calculations the dot-grain system is modeled by the following Hamiltonian: where describes the dot, with ε the on-site energy, d † σ the creation operator of an electron with spin σ andn σ = d † σ d σ representing the occupation number operator. U is the correlation energy between the two electrons residing in the dot. The grain is described by The creation operator of a spin-σ electron with momentum p and energy ε p in the grain is denoted by a † pσ , E C is the charging energy of the grain, while −φ plays the role of a gate voltage.n g is the electron number operator of the grain,n g = p,σ : a † pσ a pσ :, and N 0 denotes the number of excess electrons in the electrically neutral grain (φ = 0). The non-interacting quasiparticles in the leads are described by: where c † αkσ is the creation operator of a spin-σ electron with momentum k and energy ε αk in the upper (α = U ) or lower (α = L) lead. The last term in (1) is the tunneling Hamiltonian, which is given by The tunnel matrix elements between the leads (grain) and the dot are denoted by t α (t G ) and are assumed to be independent of momentum. The strengths of the couplings are given by Γ G = πν G |t G | 2 and Γ α = πν α |t α | 2 , respectively, where ν α (ν G ) is the density of states for lead α (grain). 
In the numerical renormalization group (NRG) calculations the energy spectrum of the grain is assumed to be continuous and the densities of states for leads and grain are taken to be constant and equal: ν α = ν G = ν = 1/(2D), with D ≡ 1 being the band halfwidth used as the energy unit in NRG calculations. In Hamiltonian (1) we have neglected the dot-grain capacitive coupling, which can give rise to a term of the form, U dg (n g − N 0 )n d , wheren d =n ↑ +n ↓ . An estimate for U dg is extracted experimentally in Section S4, and is believed to play no role for the present analysis. S3 Summary of the NRG calculations S3.1 NRG calculations To solve the Hamiltonian (1) we use the numerical renormalization group method. 2, 3 First, we introduce the collective charge operators (bosonic operators) for the grain, 4, 5 Strictly speaking, the identity,N =n g , must be fulfilled, but within the NRG approach this constraint can be relaxed by treatingN as an independent quantity. This is possible as the spectral properties of the system are not sensitive to the exact number of conduction electrons present in the grain in the limit of infinitely small level spacing. To extract the finite size spectrum and determine the location of the two-channel Kondo (2CK) lines, however, a projection to the physical subspace was necessary. In our calculations we took into account seven charges in the grain. Using the above charge operators, the grain part of the Hamiltonian (3) can be rewritten as TheN ± operators capture the charging transitions of the grain and enter explicitly in the tunneling Hamiltonian (5), which now reads The first term in (8) describes the dot-grain tunneling, while the second term accounts for the dot-leads coupling. This second term is obtained by performing an orthogonal transformation 6 from the two-lead basis to an effective single lead with resultant coupling Γ = Γ L + Γ U . The resulting Hamiltonian consists then of two conduction bands coupled to a complex impurity composed of the grain (N ) and dot degrees of freedom. The core of the NRG procedure is the logarithmic discretization of the conduction band with discretization parameter Λ and mapping of the conduction band onto a semi-infinite chain with exponentially decreasing hoppings. The Hamiltonian can then be diagonalized in an iterative fashion. In our calculations we used discretization parameter Λ = 2 and kept 4000 states at each iteration. We also exploited the SU (2) symmetry of the total spin and two U (1) symmetries forN 1 =n d +n cb +n g andN 2 =n g −N , wheren cb is the electron number operator in the first conduction channel (leads coupled to the dot) andn g is the electron number operator in the second channel (the grain). We performed the full density-matrix numerical renormalization group calculations (fDM-NRG), 2, 7-9 employing the Budapest Flexible DM-NRG code, 3 to compute the normalized dimensionless spectral is the spectral function for the d † σ operators that describe the dot level. The linear conductance through the small dot can be then determined with the equation where f (ω, T ) is the Fermi-Dirac distribution function. Fig. 2b In Fig. 2b we incorporate a linear ε-dependent shift into φ to obtain agreement between NRG calculations and experiment (Fig. 2c). The agreement is obtained by first rescaling the NRG calculations so that the maximum value of G is the same. The global scaling takes into account the source-drain coupling asymmetry, as explained in Methods. 
Then, the sharp features in the cut taken at V LP = -260 mV are compared with NRG calculations to establish V LP = −260 mV ∼ −ε/U = 0.55. Finally, another cut for fixed V LP is taken to establish a linear relationship between V LP and −ε/U . The two points give a linear relationship between V LP and −ε/U . One global offset in −φ/E C then suffices to give good agreement everywhere. S3.2 Shifting of NRG calculations in Using this method we find −Φ/E C = −φ/E C −3.1(−ε/U −1.5), where −Φ/E C is the vertical axis of Fig. 2b. Physically, the linear dependence of Φ on −ε/U can be understood as a consequence of the indirect capacitive coupling between V LP and the grain. S4 Extracting device parameters From measurements in the Coulomb blockade regime we are able to determine the dot charging energy U and the grain charging energy E C . In determining E C we justify treatment of the grain as a continuum. We also determine bounds on any dot-grain charging energy U dg neglected in the model. The dot charging energy U = 2.9 meV is determined from source-drain bias spectroscopy of the dot (Fig. S2). In previous cooldowns U has varied between 1 and 3 meV, perhaps owing to how U depends sensitively on the number of electrons in the few electron regime. We use U = 2 meV as the model parameter in NRG calculations, and note that the calculations should be relatively insensitive when U > D = 1 meV, the electronic half bandwidth used in calculations. This value of D corresponds roughly to the internal level spacing on the small dot, providing a high energy cut-off. The grain charging energy E C = e 2 /C = 160 µeV is measured by source-drain bias spectroscopy of the grain (Fig. S3). We compare this measurement to geometric estimates. A common rule of thumb is that upon gate depletion, the extent of the depletion region extends as far from the gate laterally as the 2DEG is deep. This means that the area of the grain should be the area outlined by the gates, less some area in a ∼ 50 nm dead region GaAs and the effective r = 0.62 µm. This gives an expected E C = 280 µeV, which is within a factor of two of the measurement. In a previous cooldown of the same device we measured E C = 150 µeV, which we use as the model parameter in NRG calculations. In designing the device we aimed for as large an E C as possible while still being able to imagine a near continuum of states in the grain. The level spacing may be estimated by considering a particle in a 2D box. The level spacing ∆ = 2 π 2 /2mA, where A is the area of the box and m = 0.067m e is the effective mass in GaAs. Using the design area A = 1.2 µm 2 we find ∆ = 4.6 µeV = 2.6kT e , where T e = 20 mK. If we instead take A = 0.93 µm 2 inferred from measurement of E C and the approximation for the capacitance, we find ∆ = 6.0 µeV= 3.4kT e . In either case, ∆ is no more than factors of a few times T e , keeping in mind that the width of the Fermi-Dirac distribution is approximately 3.5kT e . This implies that the grain is indeed acting as a metallic grain at all measured temperatures. In Fig. S3, it appears that the typical level spacing (spacing in V sd between diagonal lines) is larger than anticipated. The peak conductance can differ significantly for each level, which reflects a distribution in source-drain coupling asymmetry from level to level. Some levels may not be visible if their source-drain coupling asymmetry is strong. Measurements of G(V BWT , V LP ) yield U dg = 21 µeV (Fig. S4). 
This analysis considers the dot-grain system as a capacitively-coupled double quantum dot. The U dg we extract should be thought of as an upper bound-the gate voltages are set to a very different regime where Γ G is negligible, unlike in the paper. When tuning between this regime and the regime where Γ G ∼ Γ, it appears as if the splitting of the lines in Fig. S4 goes to zero long before Γ G becomes a significant fraction of Γ, perhaps implying that U dg → 0. S5 Scaling along the quantum critical lines In Fig. 2e we demonstrate that measured G(V sd , T ) falls onto a scaling curve derived from the conformal field theory (CFT) results of Affleck and Ludwig. 10 In Fig. S5 we demonstrate the scaling at other points along the 2CK lines. In most examples, the 2CK scaling behavior is faithfully reproduced by the data except perhaps at T = 20 mK. A priori, the deviations could be attributed to finite T * , perhaps because data were not taken finely enough. However, it seems that the deviations typically appear most pronounced on the positive-V sd side. Another possibility could be that true zero V sd drifted slightly over the course of the measurement. Typically some small applied V sd is required to compensate for an offset voltage at the current amplifier input, but at base temperature the quality of the scaling collapse is sensitive to errors of just 1 µV in identification of true zero V sd . S6 Sensitivity of T * and δ P to fitting range In Fig. 3 we use the crossover CFT to fit experimental data and thereby extract the Fermi liquid scale T * and the scattering phase shift δ P . The fitting procedure uses a limited range of V sd (±6 µV) and it is argued that this is a conservative approach. In Fig. S6 we show that the fitting is insensitive to small changes in the fitting range. At each value of V BWT , we try and extract T * and δ P for nine different ranges of bias voltage, which we obtain by starting with (-6, +6 µV) and adding or subtracting a point on either end, e.g.: (-7.5, +6), (-6, +6), (-6, +7.5), (-4.5, +6), etc. It is important not to add so many points that data outside the validity of the theory are included. However, subtracting too many points may degrade the fit quality. After finding T * and the error in T * reported by the fits, we consider all nine fitting ranges to give independent estimates of T * , and find the weighted mean T * , weighted by the Figure S6: Sensitivity of T * and δ P to fitting range. Top panel: source-drain bias spectroscopy at T = 20 mK exactly as in Fig. 3. Middle and bottom panels: The black points and red curves in the T * and δ P panels are exactly as in Fig. 3. The blue points correspond to the weighted mean of extracted T * and δ P for an ensemble of fitting ranges. The error bars on the blue points correspond to the standard deviation of the weighted mean. errors from each fit. The error bars show the standard deviation of the weighted mean, and indicate the spread of T * values returned by the fits. We do the same for δ P . Varying the fitting range by small amounts does not seem to contribute significantly to the uncertainty in T * and δ P .
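The fitting-range robustness check of section S6 amounts to an inverse-variance weighted average of the T* estimates returned by the individual fits. A minimal sketch is shown below; the numbers are invented stand-ins for fit outputs, and the particular weighted-standard-deviation convention is an assumption, since the text does not spell out the formula.

```python
# Sketch of the S6 robustness check: combine T* estimates from several
# V_sd fitting windows as an inverse-variance weighted mean. The fit
# results below are made-up numbers standing in for the CFT-fit outputs.
import numpy as np

# (T*, one-sigma fit error) from, e.g., nine different fitting ranges (ueV)
fits = np.array([
    (0.52, 0.06), (0.48, 0.05), (0.55, 0.08),
    (0.50, 0.05), (0.47, 0.07), (0.53, 0.06),
    (0.49, 0.05), (0.51, 0.06), (0.54, 0.09),
])

values, errors = fits[:, 0], fits[:, 1]
weights = 1.0 / errors ** 2

t_star = np.sum(weights * values) / np.sum(weights)   # weighted mean
# One common convention for the spread of the weighted mean (assumed here):
sigma = np.sqrt(np.sum(weights * (values - t_star) ** 2) / np.sum(weights))

print(f"T* = {t_star:.3f} +/- {sigma:.3f} ueV")
```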
Identification, Characterization and Quantification of Elastomeric/Plastomeric Waste for Sustainable Waste Minimization The present study focuses on the importance of characterization of waste/post consumer polymers. In recent years sophisticated instrumental techniques played an important role in identification and characterization of polymers. Advances in computer techniques have been combined with analytical instruments to give analytical speed, resolution and minimal sample requirements unimagined a few years ago. The present study deals with the development of modified binder formulations from plastomer and elastomer type waste with an aim to minimize nonbiodegradable the post consumer polymer waste as well as environmental hazard, to meet this objective ten different samples have been picked up from several kinds of waste so as to cover different categories of polymeric waste from the domestic, industrial as well as medical waste. The samples were characterized using thermal characterization techniques like DSC (differential scanning calorimetry) and TGA (thermo gravimetric analysis). The melting and oxidative degradation behavior of polymer waste helped in sustainable waste management through developing the various modified bitumen formulations of commercial importance for highway industry. Modified binder formulations were initially characterized as per the relevant standards (code of practice) to ascertaining their suitability for above said application. The physical properties of modified binders are within the specified limits. Marshall stability, indirect tensile strength and creep modulus behaviour have been evaluated and discussed in this study to prove their dual benefits like waste minimization and suitability of such binders to be used for durable roads. INTRODUCTION Developing countries such as India are undergoing a massive migration of their population from rural to urban centres. India will have more than 40 per cent, i.e. over 400 million people, clustered in cities over the next thirty years [1] . Modern urban living brings on the problem of waste, which increases in quantity and changes in composition with each passing day [2] . The creation of non-decaying waste materials of municipal solid waste has resulted in a waste disposal crisis. India is already facing plastic disposal problems of the kind faced in the developed world, which is fast running out of space for landfills to control nonbiodegradable waste. One conventional solution to this crisis lies in recycling waste in to useful products. According to manufacturers, almost all these types of waste can be recycled up to four or five times. However, the quality of the recyclate deteriorates and additives or virgin material are added to give it strength. Consumption, linked to per capita income, has a strong relationship with waste generation. As per capita income rises, more savings are spent on goods and services. India will probably see a rise in waste generation from less than 40,000 metric tones per year to over 125,000 metric tones by the year 2030. The new Municipal Solid Waste Management Rules 2000, which came into effect from January 2004, fail even to manage waste in a cyclic process. First, it does not address mechanisms which will be needed for promoting recycling, or waste minimization. Secondly, there is no provision for any public participation [3] . Waste management still is a linear system of collection and disposal, creating health and environmental hazards [4] . 
However, new and expensive technologies are being pushed to deal with our urban waste problem, ignoring their environmental and social implications. For example, the United States has not been able to install a new incinerator for the past five years, while costs for burning garbage have escalated astronomically with rising environmental standards in Europe [5] . Waste minimization includes the series of steps from identification, recycling, reduce, reuse elimination etc. Thus, identification and characterization achieve the first rank among waste minimization steps. The environmental and the socio-economic impacts of waste management can be significant and wide-ranging; thus waste management is central to the sustainable development agenda. The following were the techniques adopted for identification and characterization of polymeric waste. Thermal analysis comprises a group of instrumental analytical techniques which are particularly powerful tools for the identification and characterization of polymer materials. Essentially a thermal analytical method is one which follows the change in property in a sample whilst that sample is subjected to a predetermined temperature regime. MATERIALS AND METHODS Two types of waste materials were considered for sustainable waste management. 1) Plastomeric type e.g. polythene bags from packaging industry & used glucose bottles segregated from bio-medical waste. 2)Elastomeric type e.g. waste rubber powder from automobile industry. The sieve analysis results of the powder obtained from these elastomers indicated 98-100% passing through 600micron sieve and 5% passing through 150 micron. The binder used to accommodate these wastes is 80/100 grade bitumen conforming to Bureau of Indian standards specifications with the penetration 86 and softening point 45ºC . Plastic wastes were characterized using thermal techniques prior to incorporating them with the road binder. The sample weights taken to run the test for glucose bottles, polyethylene bags, waste tire rubber and gasket were1.0, 3.0, 9.6 and 15.0 mg, respectively and the heating rate was kept 5ºC min 1 . Thermo gravimetric analysis (TGA) was run using the inert atmosphere and a heating rate of 20ºC min 1 . A Sample weight, 6.412 mg (glucose bottle), 1.117 mg (polyethylene bags), 5.848 mg (waste tire rubber) and 8.320 mg (gasket) was taken to run the test. RESULTS AND DISCUSSION The results of DSC are given in Table 1. The results of TGA are given in Table 2. Various formulations of commercial importance were prepared by adding varying quantity of individual plastomer (5%) or elastomer (10%) to 80/100 penetration grade bitumen under specified mixing time and temperature using a high speed stirrer. For example the blending temperature was kept 20-30ºC higher than the melting temperature of plastomers (as obtained from DSC results of different plastomeric waste samples) and minimum 50ºC lower than the initial decomposition temperature (as obtained from thermogravimetric analysis). Elastomeric wastes were blended with bitumen at 160ºC well below their initial decomposition temperatures which were 210 and 220ºC, respectively for tire rubber and gasket powder. The designation of modified binders and relevant blending parameters are given in Table 3. Formulations A&B contain 5% modifier to the weight of bitumen. Formulation C contains 10% rubber & gasket powder to the weight of bitumen. Formulation D contains 10% mother dairy and parag milk pouches to the weight of bitumen. 
Formulation E contains 5% of hard glass to the weight of bitumen. Physical properties of the modified bitumen formulations are given below which met the limits as specified in IS: 15462:2004 [6] (Table 4A and B). The physical properties of Delhi quartzite aggregates used in this study are given in Table 5. The Marshall method of the mix design as per ASTM D1559 was used for determination of the optimum binder content. Aggregate gradation as indicated in Fig. 1 was used. Marshall Specimens were also cast at optimum binder content (5.6% by wt. of aggregates) using modified binder formulations. Their properties are given in Table 6. The test results of indirect tensile strength obtained as per ASTM 4123 are also given in Table 6. CONCLUSION Based on the characterization data of modifiers and modified formulations the following conclusions can be drawn: 1. The data obtained through characterization technique is helpful in selecting the optimum blending conditions so that dispersion of waste is uniform and no overheating is done. For example the blending temperature is generally kept 20-30ºC higher than its melting point and 50ºC lower than its initial decomposition temperatures. 3. Therefore disposing them in to bituminous binder will result in dual benefit: (a) Minimization of waste in environment. (b) Development of modified bitumen of commercial importance. 4. Bituminous mixes containing 5% of plastomer modified binder and 10% of elastomer modified binder were found to have improved stability and indirect tensile strength. 5. Significant improvement in Marshall stability has been found in case where plastomers e.g. polythene bags, milk pouches and mother dairy pouches and elastomers e.g. tire rubber powder, gasket powder were used. However in case of glucose bottles and hard glasses (made of polypropylene) stability was marginally improved. 6. Indirect tensile strength of bituminous mixes was significantly improved in case of modified binder containing 5% of glucose bottles or 5% of hard glasses made of polypropylene. A two fold improvement was obtained in indirect tensile strength of modified bituminous mixes containing gasket/tire rubber powder or polythene bags in comparison to the conventional mixes.
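The blending rule stated in the methods and in conclusion 1, keeping the blending temperature 20-30ºC above the modifier's DSC melting point and at least 50ºC below its TGA initial decomposition temperature, can be expressed as a simple check. The sketch below is illustrative only; the example temperatures are assumed values, not the data of Tables 1 and 2.

```python
# Sketch of the blending-window rule described above: blend plastomers
# 20-30 degC above their DSC melting peak and at least 50 degC below their
# TGA initial decomposition temperature. The temperatures in the example
# are illustrative placeholders, not the values from Tables 1 and 2.
def blending_window(melting_c, decomposition_onset_c,
                    above_melt=(20.0, 30.0), safety_margin=50.0):
    low = melting_c + above_melt[0]
    high = min(melting_c + above_melt[1], decomposition_onset_c - safety_margin)
    if high < low:
        raise ValueError("No safe blending window for this modifier")
    return low, high

# Hypothetical polythene-bag modifier: melting near 130 degC, decomposition
# onset near 300 degC (values assumed for illustration only).
print(blending_window(130.0, 300.0))   # -> (150.0, 160.0)
```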
LiftPose3D, a deep learning-based approach for transforming 2D to 3D pose in laboratory animals Markerless 3D pose estimation has become an indispensable tool for kinematic studies of laboratory animals. Most current methods recover 3D pose by multi-view triangulation of deep network-based 2D pose estimates. However, triangulation requires multiple, synchronized cameras and elaborate calibration protocols that hinder its widespread adoption in laboratory studies. Here we describe LiftPose3D, a deep network-based method that overcomes these barriers by reconstructing 3D poses from a single 2D camera view. We illustrate LiftPose3D’s versatility by applying it to multiple experimental systems using flies, mice, rats, and macaque monkeys and in circumstances where 3D triangulation is impractical or impossible. Our framework achieves accurate lifting for stereotyped and non-stereotyped behaviors from different camera angles. Thus, LiftPose3D permits high-quality 3D pose estimation in the absence of complex camera arrays, tedious calibration procedures, and despite occluded body parts in freely behaving animals. Introduction without knowing the camera's orientation. First, we ensured that the output of LiftPose3D was translation invariant by predicting the keypoints of the respective legs relative to six "root" immobile thorax-coxa joints (green circles, Figure 1B). Second, to avoid the network having to learn perspective distortion, we assumed that the focal length (intrinsic matrix) of the camera and the animal-to-camera distance were known, or that one of them is large enough to assume weak perspective effects. In the latter case, we normalized 2D input poses by their Frobenius norm during both training and testing. Third, to facilitate lifting from any angle, we assumed that camera extrinsic matrices, which could be obtained by calibration, might also be unknown. Instead, we parametrized them by Euler angles ψ z , ψ y , ψ x representing ordered rotations around the z, y and x axes of a coordinate system centered around the fly ( Figure 1D). During training, we took as inputs 2D poses (from 3D poses randomly projected to virtual camera planes, rather than 2D pose estimates), and as outputs 3D poses triangulated from three cameras. To measure lifting accuracy, we tested the network on software-annotated 2D poses ( Figure 1B) from two independent animals and computed the mean absolute error (MAE), e j te , for each joint j as well as the MAE across all joints e te = (1/n)∑ j e j te relative to triangulated 3D poses. We found that LiftPose3D could predict 3D poses using only one camera per side ( Figure 1C). When the virtual projections during training were performed using known intrinsic and extrinsic matrices, the network's accuracy was at least as good as triangulation using two cameras per keypoint ( Figure 1E, white). Surprisingly, the accuracy did not suffer when the network was trained (i) using virtual 2D projections around an approximate camera location ( Figure 1E, green, narrow range) rather than with known instrinsic matrices, and (ii) using normalized 2D poses rather than with known intrinsic matrices. Accuracy remained excellent when virtual projections extended to all possible angles around the meridian ( Figure 1E, red, wide-range). Lifting could be performed for optogenetically-induced backward walking (Video 1), antennal grooming (Video 2), and spontaneous, irregular limb movements (Video 3). 
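For readers who want to reproduce the preprocessing and the error metric described above, a minimal sketch is given below. The array shapes, the choice of root joints, and the exact per-joint error definition are assumptions for illustration and do not follow the released LiftPose3D code.

```python
# Minimal sketch of the preprocessing and error metric described above:
# root-relative coordinates, Frobenius normalisation of 2D inputs, and a
# per-joint mean absolute error e^j_te. Shapes and names are assumed for
# illustration and are not taken from the published LiftPose3D code.
import numpy as np

def root_relative(pose, root_idx):
    """Express keypoints relative to 'root' joints (e.g. thorax-coxa)."""
    return pose - pose[root_idx].mean(axis=0, keepdims=True)

def normalize_2d(pose2d):
    """Frobenius-norm normalisation used when camera intrinsics are unknown."""
    return pose2d / np.linalg.norm(pose2d)

def per_joint_mae(pred3d, true3d):
    """One plausible e^j_te: |pred - true| averaged over frames and x,y,z."""
    return np.abs(pred3d - true3d).mean(axis=(0, 2))

# Toy example: 100 frames, 15 keypoints, joints 0-2 treated as roots.
rng = np.random.default_rng(0)
gt = rng.normal(size=(100, 15, 3))
pred = gt + 0.05 * rng.normal(size=gt.shape)
centred = root_relative(gt[0], root_idx=[0, 1, 2])
inp2d = normalize_2d(centred[:, :2])
print(per_joint_mae(pred, gt).round(3))
```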
Because the network predicts joint coordinates with respect to thoracic root joints, the MAE was larger for distal joints that move within a larger kinematic volume. By contrast, the error for triangulation depended only on the accuracy of 2D annotations because it treats each keypoint independently. We also assessed camera-angle dependence for our wide angle-range network by lifting virtual 2D poses projected onto the meridian of the unit sphere, or 2D poses captured from each of the six cameras ( Figure 1F). The test MAE was low (< 0.05 mm) and had no camera-angle dependence. Because we make no assumptions about camera placement when training our angle-invariant networks, these pretrained networks might also be used to predict accurate 3D poses for tethered Drosophila recorded in other laboratories. We next explored how the similarity between animal behaviors used for training and testing might influence lifting accuracy. Our tethered Drosophila dataset contained optogeneticallyinduced antennal grooming (aDN), and backward walking (MDN), as well as spontaneous behaviors like forward walking (control). We trained a network using poses from only one behavior (not including rest frames) and evaluated it on all three behaviors while keeping the amount of training data fixed (2.5 × 10 4 poses). As expected, the MAE was higher when test data included untrained behaviors than when test data included trained behaviors ( Figure 1G). Furthermore, training on all three behaviors led to comparable or lower MAE ( Figure 1E, orange) than training and testing on one single behavior ( Figure 1G). Thus, higher training data diversity improves lifting accuracy. To illustrate the advantage of using lifted 3D poses versus 2D poses in downstream analyses, we derived joint angles during forward walking from lifted 3D poses and from 2D poses projected from 3D poses in the ventral plane (Extended Data Figure 1, green). Joint angles derived from lifted and triangulated 3D poses were in close agreement. On the other hand, we found spurious dynamics in the distal joints when viewed from a projected plane, likely due to rotations upstream in the kinematic chain (proximal joints) that cause movements of the whole leg. Thus, 3D poses predicted by LiftPose3D can help to decouple underlying physical degrees-of-freedom. We also tested LiftPose3D in freely behaving animals where the effective camera angle dynamically changes, and in animals without exoskeletons whose neighboring keypoints are less constrained. Specifically, we considered freely behaving macaque monkeys [4] where 3D poses were triangulated using 2D poses from 62 synchronized cameras ( Figure 1H). After training LiftPose3D with only 6'571 3D poses, we could lift 3D poses from test images with diverse animal poses (Video 4), acquired from any camera ( Figure 1I), and with relatively low body length-normalized MAE ( Figure 1J). Taken together, these results demonstrate that, using simple data preprocessing and a relatively small but diverse training dataset, LiftPose3D can reduce the number of cameras required to perform accurate 3D pose estimation. Predicting 3D pose despite occluded keypoints In freely behaving animals, keypoints are often missing from certain camera angles due to self-occlusions and, therefore, only partial 3D ground truth can be obtained by triangulation. 
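As an aside on the accuracy measure used in the preceding section, the per-joint error e_j^te and its average over joints can be computed directly from predicted and triangulated 3D poses. In the sketch below the per-joint error is taken as the Euclidean distance between prediction and ground truth averaged over frames, which is one plausible reading of the MAE used in the text; variable names are assumptions.

```python
import numpy as np

def mae_per_joint(pred_3d, gt_3d):
    """Per-joint error and overall error, in the units of the 3D poses (e.g., mm).

    pred_3d, gt_3d : (n_frames, n_joints, 3) predicted and triangulated 3D poses.
    Returns (e_joint, e_all): per-joint errors of shape (n_joints,) and their mean.
    """
    err = np.linalg.norm(pred_3d - gt_3d, axis=-1)   # (n_frames, n_joints) Euclidean errors
    e_joint = err.mean(axis=0)                       # average over frames -> per-joint error
    return e_joint, e_joint.mean()                   # overall error = mean over joints

# Toy example: 100 frames, 24 joints.
rng = np.random.default_rng(0)
gt = rng.normal(size=(100, 24, 3))
pred = gt + rng.normal(scale=0.02, size=gt.shape)
e_j, e_all = mae_per_joint(pred, gt)
print(e_j.shape, float(e_all))
```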
We asked how the global nature of lifting-all keypoints are lifted simultaneously-might be leveraged to reconstruct information lost by occlusions, allowing one to predict full 3D poses. To address this question, we built an experimental system similar to others used for flies and mice [14,35,36] that consisted of a transparent enclosure coupled to a right-angle prism mirror and with a camera beneath to record ventral and side views of a freely behaving fly ( Figure 2A). Due to the right-angle prism and the long focal length camera (i.e., negligible perspective effects), the ventral and side views are orthographic projections of the true 3D pose. Triangulation thus consisted of estimating the z-axis depth of keypoints from the side view. Although keypoints closer to the prism were simultaneously visible in both views and could be triangulated, other joints had only ventral 2D information. We therefore aligned flies in the same reference frame in the ventral view ( Figure 2B), turning lifting into a regression problem similar to that for tethered animals. During training we took ventral view 2D poses as inputs, but penalized only those keypoints with complete 3D information. By also aligning these data, we found that the network could implicitly augment unseen coordinates by learning geometric relationships between keypoints. The network could predict 3D positions for every joint at test time, including those occluded in the side view ( Figure 2D and Video 5). Notably, owing to the high spatial resolution of this setup, the accuracy, based on available triangulation-derived 3D positions ( Figure 2E), was better than that obtained for tethered flies triangulated using four cameras ( Figure 1E). Thus, LiftPose3D can estimate 3D poses from 2D images in cases where keypoints are occluded and cannot be triangulated. These results suggested an opportunity to apply lifting to potentially correct inaccurate 3D poses obtained using other tracking approaches. To test this, we used a dataset consisting of freely behaving mice traversing a narrow corridor [14] and tracked using the LocoMouse software from ventral and side views [14]. We triangulated and aligned incomplete 3D ground truth poses as we did for Drosophila and then trained a LiftPose3D network using ventral 2D poses as inputs. Predictions were in good agreement with the LocoMouse's side view tracking ( Figure 2E and Video 6) and could recover expected cycloid-like kinematics between strides ( Figure 2F). Remarkably, LiftPose3D predictions could also correct poorly labeled or missing side-view poses ( Figure 2F, bottom, white arrowheads). However, lifting accuracy depended on the fidelity of input 2D poses: incorrect ventral 2D poses generated false side view predictions ( Figure 2F, bottom, white asterisks). These errors were always localized to the joint-of-interest and were relatively infrequent. Overall, LiftPose3D and LocoMouse performed similarly compared with manual human annotation ( Figure 2G) demonstrating that LiftPose3D can be used to test the consistency of ground truth datasets. To assess how well spatial relationships learned by LiftPose3D could generalize to animals with more complex behaviors and larger variations in body proportions, we next considered the CAPTURE dataset of six cameras recording freely behaving rats within a circular arena [37] ( Figure 2H, left). Animal poses were intermittently self-occluded during a variety of complex behaviors ( Figure 2I). 
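Returning to the training strategy used for the prism-mirror data above, where only keypoints with complete 3D ground truth are penalized, a visibility-masked loss could be written as follows in PyTorch. The tensor layout and names are assumptions for illustration, not the published implementation.

```python
import torch

def masked_mse_loss(pred_3d, gt_3d, visible):
    """Mean squared error over visible (triangulated) keypoints only.

    pred_3d : (batch, n_joints, 3) network predictions.
    gt_3d   : (batch, n_joints, 3) triangulated 3D ground truth
              (arbitrary values where unavailable).
    visible : (batch, n_joints) boolean mask; True where 3D ground truth exists.
    """
    sq_err = ((pred_3d - gt_3d) ** 2).sum(dim=-1)        # (batch, n_joints)
    mask = visible.float()
    # Occluded keypoints are zeroed out, so they contribute no gradient.
    return (sq_err * mask).sum() / mask.sum().clamp(min=1.0)

# Toy usage: the second joint of the first sample has no 3D ground truth.
pred = torch.randn(2, 5, 3, requires_grad=True)
gt = torch.randn(2, 5, 3)
vis = torch.ones(2, 5, dtype=torch.bool)
vis[0, 1] = False
loss = masked_mse_loss(pred, gt, vis)
loss.backward()
```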
Therefore, to allow the network to learn the skeletal geometry, we aligned animals in the camera-coordinate frame and replaced missing input data with zeros. Furthermore, to make the network robust to bone length variability within and across animals ( Figure 2J) we assumed that bone lengths were normally distributed and generated, for each triangulated 3D pose, rescaled 3D poses by sampling from bone-length distributions while preserving joint angles. Then, we obtained corresponding 2D poses via a virtual projection within the Euler angle range of ±10° with respect to the known camera locations (to augment the range of camera-to-animal angles). Finally, we normalized 2D poses by their Frobenius norm, as before, assuming a large enough camera-to-animal distance. To show that the network generalizes across new experimental setups, we used two experiments from this dataset (i.e., two animals and two camera arrangements) for training and tested with a third experiment (a different animal, camera focal length, and animal-tocamera distance). By replacing low confidence or missing coordinates with zeros, LiftPose3D could accurately predict the nonzero coordinates ( Figure 2H, K and Video 7). Thus, this is a viable way to correct for erroneous input keypoints and makes our network directly applicable to other rat movement studies. Lifting diverse experimental data without 3D ground truth Although our angle-invariant networks for lifting 3D poses in tethered flies ( Figure 1D-F) and freely behaving rats ( Figure 2H-K) can already be used in similar experimental systems without the need for additional training data, small variations resulting from camera distortions or postural differences may limit the accuracy of lifted poses. Therefore, we explored how domain adaptation might enable pretrained networks to lift poses in new experimental systems despite small postural variations. We assessed the possibility of domain adaptation by training a network in domain Atethered flies-and predicting 3D poses in domain B-freely-moving flies ( Figure 3A). To do so, we identified two linear transformations d 2 and d 3 . To demonstrate the full potential of linear domain adaptation, we next lifted Drosophila 2D poses from a single ventral camera. This experimental system is common due to its simplicity, low cost, and increased throughput and has been used to study C. elegans [38], larval zebrafish [39], larval Drosophila [40], adult Drosophila [41], and mice [42]. Because depth sensors [43,44] cannot resolve small laboratory animals, 3D pose estimation from a single 2D view remains unsolved, but has the potential to enrich behavioral datasets and improve downstream analysis. We developed an experimental system with a square-shaped arena in which multiple freelybehaving flies were recorded ventrally using a single camera ( Figure 3F, left) at four-fold lower spatial resolution (26 px mm −1 ) than in our prism-mirror system. We pretrained a network using prism-mirror training data for keypoints present in both datasets and then augmented these data using a Gaussian noise term with standard deviation of ~ 4. We adapted annotated 2D poses into the network's domain before lifting ( Figure 3B). We found that the network could predict physiologically realistic 3D poses in this new dataset using only ventral 2D poses ( Figure 3G and Video 8). This is remarkable because ventrally-viewed swing and stance phases are difficult to distinguish, particularly at lower resolution. 
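Returning to the bone-length augmentation used for the CAPTURE rat data above, one way to implement it is to rescale each bone vector along the kinematic chain by a length drawn from a per-bone normal distribution, which changes segment lengths while preserving joint angles. The sketch below assumes a simple chain skeleton given as (parent, child) pairs; the skeleton definition and distribution parameters are placeholders.

```python
import numpy as np

def rescale_bone_lengths(pose_3d, bones, length_mean, length_std, rng):
    """Resample bone lengths of a 3D pose while preserving joint angles.

    pose_3d     : (n_joints, 3) triangulated 3D pose; joint 0 is the root.
    bones       : list of (parent, child) index pairs ordered root-to-tip.
    length_mean : (n_bones,) mean bone lengths estimated from the data.
    length_std  : (n_bones,) standard deviations of the bone lengths.
    """
    new_pose = pose_3d.copy()
    for b, (parent, child) in enumerate(bones):
        direction = pose_3d[child] - pose_3d[parent]
        direction = direction / np.linalg.norm(direction)   # keep the bone's orientation
        new_length = rng.normal(length_mean[b], length_std[b])
        # Place the child at the resampled distance from the (already updated) parent,
        # so orientations, and hence joint angles, are unchanged.
        new_pose[child] = new_pose[parent] + new_length * direction
    return new_pose

rng = np.random.default_rng(1)
pose = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.5, 0.0]])  # toy 3-joint chain
bones = [(0, 1), (1, 2)]
aug = rescale_bone_lengths(pose, bones, np.array([1.0, 1.2]), np.array([0.05, 0.05]), rng)
```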
During walking, 2D tracking of the tarsal claws traced out stereotypical trajectories in the x-y plane ( Figure 3H, top) [45] and circular movements in the unmeasured x-z plane ( Figure 3H, bottom) whose amplitudes were consistent with real kinematic measurements during forward walking [46]. Another exciting possibility offered by LiftPose3D is to 'resurrect' previously published 2D pose data for new 3D kinematic analyses. We applied our network that was trained on prismmirror data to lift published video data of a fly walking through a capsule-shaped arena [16] ( Figure 3I). Using a similar processing pipeline as before ( Figure 3B,F,G), including registration and domain adaptation but not noise perturbations (the target data were of similarly high resolution as the training data), LiftPose3D could predict 3D poses from this dataset ( Figure 3J). We again observed physiologically realistic cyclical movements of the pretarsi during forward walking ( Figure 3K, bottom; Video 9). These data illustrate that linear domain-adaptation and LiftPose3D can be combined to lift 3D poses from previously published 2D video data for which 3D triangulation would be otherwise impossible. Drosophila LiftPose3D station These domain adaptation results suggested that one could make 3D pose acquisition cheaper and more accessible by designing a "Drosophila LiftPose3D station"-an inexpensive ($ 150) open-source hardware system including a 3D printed rig supporting a rectangular arena (Extended Data Figure 3, Supplementary Note 1). A common hardware solution like this overcomes potential variability across different experimental systems that arise from camera distortions and perspective effects. Using pre-trained DeepLabCut and LiftPose3D networks we found that one can effectively lift Drosophila 3D poses with this system (Video 10). We envision that a similar low-cost approach might, in the future, also be taken to facilitate cross-laboratory 3D lifting of mouse 2D poses from a single camera. Discussion Here we have introduced LiftPose3D, a deep learning-based tool that dramatically simplifies 3D pose estimation across a wide variety of laboratory contexts. LiftPose3D can take as inputs 2D poses from any of a variety of annotation softwares [2,3]. Through input data preprocessing, training augmentation, and domain adaptation one can train a lifting network [30] with several orders of magnitude less data as well as incomplete or innacurate ground truth poses. LiftPose3D is invariant to camera hardware and positioning, making it possible to use the same networks across laboratories and experimental systems. We provide an intuitive Python notebook that serves as an interface for data preprocessing, network training, 3D predictions, and data visualization. Several factors must be considered when optimizing LiftPose3D for new experimental systems. First, because predicting depth from a 2D projection depends on comparing the lengths of body parts, input poses must be sufficiently well-resolved to discriminate between 3D poses with similar 2D projections. Second, prediction accuracy depends on training data diversity: previously untrained behaviors may not be as accurately lifted. Further work may improve LiftPose3D by constraining 3D poses using body priors [47][48][49][50][51] and temporal information [31]. 
Using our domain adaptation methodology, networks with the largest and most diverse training data, like those for the tethered fly, may be sufficiently robust to accurately lift 2D to 3D pose in other laboratories. In the future, similarly robust lifting networks might be generated for other animals through a cross-laboratory aggregation of diverse 3D pose ground truth datasets. In summary, LiftPose3D can accelerate 3D pose estimation in laboratory research by reducing the need for complex and expensive synchronized multicamera systems, and arduous calibration procedures. This, enables the acquisition of rich behavioral data and can accelerate our understanding of the neuromechanical control of behavior. Theoretical basis for LiftPose3D LiftPose3D aims to estimate the 3D pose X = (X 1 , …, X n ), i.e., an ensemble of keypoints, by learning a nonlinear mapping between triangulated ground truth 3D poses and corresponding 2D poses x c = (x c,1 , …, x c,n ). Formally, this operation is encoded in a lifting function f mapping a 2D pose from any camera c to their corresponding 3D pose in cameracentered coordinates, Y c = f(x c ), and a camera transformation ϕ c , encoding a rotation and translation operation (see Eq. (2)), mapping from camera-centered coordinates to world The lifting function f can be approximated using a deep neural network F(x c ; Θ), where Θ represents the network weights controlling the behavior of F. In a specific application, Θ are trained by minimizing the discrepancy between 3D poses predicted by lifting from any camera and ground truth 3D poses, where χ V c ( j) is an indicator function of the set V c of visible points from camera c. For F(x c ; Θ), we adapted a network architecture from [57] composed of fully connected layers regularized by batch-norm and dropout [58] and linked with skip connections ( Figure 1B). This network was developed to perform human-pose estimation following training on approximately 10 6 fully annotated 2D-3D human pose pairs for many different behaviors. We demonstrate that training augmentation methods allow this network to (i) work with a vastly smaller training dataset (between 10 3 -10 4 poses acquired automatically using 2D pose estimation approaches [52,59]), (ii) predict 3D poses from a single camera view at arbitrary angles, (iii) be trained with only partially annotated ground truth 3D poses suffering from occlusions, and (iv) generalize a single pretrained network across experimental systems and domains by linear domain adaptation. Note that our approach implicitly assumes that the network learns two operations: lifting the 2D pose x c to camera-centered 3D coordinates Y c by predicting the depth component of the pose, and learning perspective effects encoded in the animal-to-camera distance and the intrinsic camera matrix (see Eqs. (2)-(5)). Notably, the intrinsic camera matrix is cameraspecific, suggesting that a trained network can only lift poses from cameras used during training and that application to new settings with strong perspective effects (short focal lengths) may require camera calibration. We show that this is not necessarily the case and that one can generalize pre-trained networks to new settings by weakening perspective effects. This can be accomplished by either using a large focal length camera, or by increasing the animal-to-camera distance and normalizing the scale of 2D poses. 
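The display equations of this theoretical section did not survive extraction. Under the definitions given above (lifting function f, camera transformation ϕ_c, network F(x_c; Θ), visibility indicator χ_{V_c}), the decomposition and training objective plausibly take the following form; this is a hedged reconstruction consistent with the surrounding text, not a verbatim copy of the published equations.

```latex
% World-coordinate pose recovered from the camera-centred lifted pose:
%   X = \phi_c^{-1}(Y_c) = \phi_c^{-1}\bigl(f(x_c)\bigr).
%
% Network weights obtained by minimising the visibility-masked discrepancy
% between lifted and triangulated 3D poses over all cameras c and joints j:
\begin{equation*}
  \hat{\Theta} \;=\; \operatorname*{arg\,min}_{\Theta}\;
  \sum_{c}\sum_{j}\chi_{V_c}(j)\,
  \bigl\lVert F(x_c;\Theta)_j - Y_{c,j}\bigr\rVert_2^2 ,
\end{equation*}
% where Y_{c,j} denotes the ground-truth 3D position of joint j expressed in
% the coordinate frame of camera c, and \chi_{V_c}(j) = 1 if and only if
% joint j is visible from camera c.
```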
We demonstrate that a weak perspective assumption can, in many practical scenarios, enable lifting 2D poses from different cameras without calibration. These contributions enable 3D pose estimation in otherwise inaccessible experimental scenarios.

Obtaining 3D pose ground truth data by triangulation

Triangulated 3D positions served as ground truth data for assessing the accuracy of LiftPose3D. If a keypoint j of interest is visible from at least two cameras, with corresponding 2D coordinates x_{c,j} ∈ ℝ² in camera c and known camera parameters (extrinsic and intrinsic matrices), then its 3D coordinates X_j ∈ ℝ³ in a global world reference frame can be obtained by triangulation. Let us express X_j = (x_j, y_j, z_j) in homogeneous coordinates as

$$\tilde{X}_j = (x_j, y_j, z_j, 1)^\top.$$

The projection from the 3D points in the global coordinate system to 2D points in a local coordinate system centered on camera c is performed by the function

$$x_{c,j} = \pi_c(\tilde{X}_j).$$

This function can be expressed as a composition π_c = proj_{1,2} ∘ ϕ_c of an affine transformation ϕ_c : ℝ⁴ → ℝ⁴ from global coordinates to camera-centered coordinates and a projection proj_{1,2} : ℝ⁴ → ℝ³ to the first two coordinates. Both functions can be parametrized using the pinhole camera model [61]. On the one hand, we have

$$\phi_c(\tilde{X}) = C_c\,\tilde{X}, \qquad (2)$$

where C_c is the extrinsic camera matrix corresponding to ϕ_c and can be written as

$$C_c = \begin{pmatrix} R_c & T_c \\ \mathbf{0}^\top & 1 \end{pmatrix}, \qquad (3)$$

where R_c ∈ ℝ^{3×3} is a matrix corresponding to rotation around the origin and T_c ∈ ℝ³ is a translation vector representing the distance of the origin of the world coordinate system to the camera center. Likewise, the projection function can be expressed as

$$\mathrm{proj}_{1,2}(Y) = K\,\begin{pmatrix} I_3 & \mathbf{0} \end{pmatrix} Y, \qquad (4)$$

with the intrinsic matrix

$$K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}, \qquad (5)$$

where f_x, f_y denote the focal lengths and c_x, c_y denote the image center. The coordinates projected to the camera plane are obtained by converting back to Euclidean coordinates, that is, by dividing the first two components by the third.

Triangulation of the coordinate X_j of joint j with respect to π_c is obtained by minimizing the reprojection error, that is, the discrepancy between the 2D camera coordinate, x_{c,j}, and the 3D coordinate projected to the camera frame, π_c(X_j). Let V_c be the set of visible joints from camera c. The reprojection error for joint j is taken to be

$$e_j = \sum_{c} \chi_{V_c}(j)\,\lVert x_{c,j} - \pi_c(X_j) \rVert_2^2,$$

where χ_{V_c}(·) is the indicator function of the set V_c of visible keypoints from camera c. The camera projection functions π_c are initially unknown. To avoid having to use a calibration grid, we jointly minimize with respect to the 3D location of all joints and to the camera parameters, a procedure known as bundle adjustment [61]. Given a set of 2D observations, we seek

$$\{\hat{X}_j\},\{\hat{\pi}_c\} = \operatorname*{arg\,min}_{\{X_j\},\{\pi_c\}} \sum_{j}\sum_{c} \chi_{V_c}(j)\,\lVert x_{c,j} - \pi_c(X_j) \rVert_2^2$$

using a second-order optimization method. For further details, we refer the interested reader to [3].

LiftPose3D network architecture and optimization

The core LiftPose3D network architecture is similar to that of [57] and is depicted in Figure 1B. Its main module includes two linear layers of dimension 1024 with rectified linear units (ReLU [62]), dropout [58] and residual connections [63]. The inputs and outputs of each block are connected during each forward pass using a skip connection. The model contains 4 × 10⁶ trainable parameters, which are optimized by stochastic gradient descent using the Adam optimizer [64]. We also perform batch normalization [65]. In all cases, the parameters were set using Kaiming initialization [63] and the optimizer was run until convergence, typically within 30 epochs, with the following training hyperparameters: a batch size of 64 and an initial learning rate of 10⁻³ that was dropped by 4% every 5000 steps.
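A network matching the description above (fully connected layers of width 1024, ReLU, batch normalization, dropout and residual/skip connections, trained with Adam) could be sketched in PyTorch as follows. The number of blocks, dropout rate and output dimensionality are illustrative assumptions and not the exact published configuration.

```python
import torch
import torch.nn as nn

class ResidualFCBlock(nn.Module):
    """Two 1024-wide linear layers with batch norm, ReLU and dropout,
    wrapped by a residual (skip) connection, as described in the text."""
    def __init__(self, width=1024, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(width, width), nn.BatchNorm1d(width), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(width, width), nn.BatchNorm1d(width), nn.ReLU(), nn.Dropout(p_drop),
        )

    def forward(self, x):
        return x + self.net(x)            # skip connection from block input to output

class Lifter(nn.Module):
    """Maps a flattened 2D pose (2 * n_joints) to a 3D pose (3 * n_joints)."""
    def __init__(self, n_joints=24, width=1024, n_blocks=2):
        super().__init__()
        self.inp = nn.Linear(2 * n_joints, width)
        self.blocks = nn.Sequential(*[ResidualFCBlock(width) for _ in range(n_blocks)])
        self.out = nn.Linear(width, 3 * n_joints)

    def forward(self, pose_2d_flat):
        return self.out(self.blocks(self.inp(pose_2d_flat)))

model = Lifter(n_joints=24)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam with initial lr 10^-3, as in the text
pred = model(torch.randn(64, 48))                           # batch size 64, 24 joints x 2 coordinates
```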
We implemented our network in PyTorch on a desktop workstation running on an Intel Core i9-7900X CPU with 32 GB of DDR4 RAM, and a GeForce RTX 2080 Ti Dual O11G GPU. Training time was less than 10 minutes for all cases studied.

Camera-angle augmentation

The object-to-camera orientation is encoded by the extrinsic matrix C_c of Eq. (3). When it is unavailable, one can still use our framework by taking 3D poses from the ground-truth library and, during training, performing virtual 2D projections around the approximate camera location or for all possible angles. To this end, we assume that the rotation matrix R is unknown, but that the intrinsic matrix K and the object-to-camera distance d are known, such that we may take T = (0, 0, d)^T. When K or d are also unknown, or dynamically changing, one can make the weak-perspective assumption as described in the next section. Then, instead of training the LiftPose3D network with pairs of 3D poses and 2D poses at fixed angles, we perform random 2D projections of the 3D pose to obtain virtual camera planes whose centers lie on the sphere of radius d. To define the projections we require a parametric representation of the rotations. Rotating a point in 3D space can be achieved using three consecutive rotations around the three Cartesian coordinate axes x, y, z, commonly referred to as Euler angles and denoted by ψ_x, ψ_y, and ψ_z. The rotation matrix can then be written as

$$R = R_x(\psi_x)\, R_y(\psi_y)\, R_z(\psi_z),$$

where R_x, R_y and R_z are the elementary rotation matrices about the x, y and z axes, respectively (a schematic code example of this virtual-projection augmentation is given after this passage).

Weak perspective augmentation

To project 2D poses from 3D poses, one needs to know the camera transformation ϕ_c (Eq. (2)), encoded by the extrinsic matrix C_c (Eq. (3)), and the projection function proj_{1,2} (Eq. (4)), encoded by the intrinsic matrix K (Eq. (5)). In the previous section, we described how to deal with the case when C_c is unknown. In addition, K may also be unknown a priori at test time. Alternatively, one may want to use one of our pre-trained networks on a novel dataset without having to match the camera positioning (focal length, camera-to-animal distance) used to collect the training data. In this case, one may still be able to predict the 3D pose in a fixed camera-centered coordinate frame by assuming that either the camera-to-animal distance or the focal length is large enough to neglect perspective effects and by normalizing the scale of the 2D poses. Following Ref. [60], we chose the Frobenius norm to perform normalization on the input 2D poses, x_c/‖x_c‖_F, which corresponds to the diagonal distance of the smallest bounding box around the 2D pose. Note that if the 2D poses are obtained via projections, one may use the unit intrinsic matrix of Eq. (5), with f_x = f_y and c_x = c_y = 0, before performing normalization. Here, using c_x = c_y = 0 assumes that the 2D poses are centered, which in each of our examples is achieved by considering coordinates relative to root joints placed at the origin. Importantly, the 2D poses must be normalized both at training and test time.

Linear domain adaptation

Here we describe the process of adapting a network trained on data from experiment A to lift 2D poses in experiment B. Domain adaptation is also useful if the camera parameters or the distance from the camera are not known and the weak-perspective assumption cannot be invoked. Before performing domain adaptation, we first estimate 2D poses from ventral images in domain B, as before. This allowed us to circumvent difficulties arising from differences in imaging conditions between the two experimental systems. We now describe how to obtain the functions d_2 and d_3, which we denote collectively as d.
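As noted under 'Camera-angle augmentation' above, the virtual projections can be generated by drawing Euler angles in a chosen range, building the rotation matrix, translating by the object-to-camera distance d along the optical axis, and projecting with the intrinsic matrix K. The sketch below is a schematic of that idea; the angle ranges and unit intrinsics are placeholders, and the derivation of the domain-adaptation map d continues in the next paragraph.

```python
import numpy as np

def euler_rotation(psi_x, psi_y, psi_z):
    """Rotation matrix from ordered rotations about the z, y and x axes."""
    cx, sx = np.cos(psi_x), np.sin(psi_x)
    cy, sy = np.cos(psi_y), np.sin(psi_y)
    cz, sz = np.cos(psi_z), np.sin(psi_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def virtual_projection(pose_3d, K, d, angle_ranges, rng):
    """Project a 3D pose onto a virtual camera at a random orientation.

    pose_3d      : (n_joints, 3) pose in world coordinates.
    K            : (3, 3) intrinsic matrix.
    d            : object-to-camera distance, giving T = (0, 0, d).
    angle_ranges : dict of (low, high) ranges in radians for psi_x, psi_y, psi_z.
    """
    psi = {k: rng.uniform(*v) for k, v in angle_ranges.items()}
    R = euler_rotation(psi["psi_x"], psi["psi_y"], psi["psi_z"])
    cam = pose_3d @ R.T + np.array([0.0, 0.0, d])   # rotate, then translate along the optical axis
    hom = cam @ K.T                                 # pinhole projection to homogeneous image coords
    return hom[:, :2] / hom[:, 2:3]                 # back to Euclidean 2D coordinates

rng = np.random.default_rng(0)
K = np.eye(3)                                       # unit intrinsics (weak-perspective case)
ranges = {"psi_x": (-0.09, 0.09), "psi_y": (-0.09, 0.09), "psi_z": (-np.pi, np.pi)}
pose_2d = virtual_projection(rng.normal(size=(24, 3)), K, d=10.0, angle_ranges=ranges, rng=rng)
```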
To find d, we assume that poses in domain B can be obtained by small perturbations of poses in domain A. This allows us to set up a matching between the two domains by finding nearest neighbor 2D poses in domain A for each 2D pose in domain B, We use 2D rather than 3D poses to find a match because 3D poses may not always be available in domain B. Moreover, the nearest poses in 3D space will necessarily be among the nearest poses in 2D space. Specifically, for each x i B , we find a set of k nearest poses in Transposing this linear equation yields the linear problem (x tr T . Given that the p training poses are different, x tr B has linearly independent columns and this problem is overdetermined as long as kp > dn. Thus, by least-squares minimization, we obtain Experimental systems and conditions All adult Drosophila melanogaster experiments were performed on female flies raised at 25°C on a 12 h light/dark cycle at 2-3 days post-eclosion (dpe). Before each experiment, wild-type (PR) animals were anaesthetized using CO 2 or in ice-cooled vials and left to acclimate for 10 min. DeepFly3D tethered fly data were taken from [52]. OpenMonkeyStudio macaque data were taken from [53]. LocoMouse mouse data were taken from [54]. CAPTURE rat data were taken from [55]. FlyLimbTracker freely-behaving fly data were taken from [56]. See these publications for detailed experimental procedures. For more information on the datasets including the number of keypoints, poses, animals, resolution, framerate we refer the reader to Table 1. Freely behaving Drosophila recorded from two high-resolution views using one camera and a right-angle prism mirror-We constructed a transparent arena coupled to a right-angle prism mirror [35,69]. The enclosed arena consists of three vertically stacked layers of 1/16" thick acrylic sheets laser-cut to be 15 mm long, 3 mm wide, and 1.6 mm high. The arena ceiling and walls were coated with Sigmacote (Sigma-Aldrich, Merck, Darmstadt, Germany) to discourage animals from climbing onto the walls and ceilings. One side of the enclosure was physically coupled to a right-angled prism (Thorlabs PS915). The arena and prism were placed on a kinematic mounting platform (Thorlabs KM100B/M), permitting their 3D adjustment with respect to a camera (Basler acA1920-150um) outfitted with a lens (Computar MLM3X-MP, Cary, NC USA). Data were acquired using the Basler Pylon software (pylon Application 1.2.0.8206, pylon Viewer 6.2.0.8206). The camera was oriented vertically upwards below the arena to provide two views of the fly: a direct ventral view, and an indirect, prism mirror-reflected side view. The arena was illuminated by four Infrared LEDs (Thorlabs, fibre-coupled LED M850F2 with driver LEDD1B T-Cube and collimator F810SMA-780): two from above and two from below. To elicit locomotor activity, the platform was acoustically and mechanically stimulated using a mobile phone speaker. Flies were then allowed to behave freely, without optogenetic stimulation. Freely behaving Drosophila recorded from one ventral view at lowresolution- We constructed a square arena consisting of three vertically stacked layers of 1/16" thick acrylic sheets laser-cut to be 30 mm long, 30 mm wide, and 1.6 mm high. This arena can house multiple flies at once, increasing throughput at the expense of spatial resolution (26 px mm -1 ). Before each experiment the arena ceiling was coated with 10 uL Sigmacote (Sigma-Aldrich, Merck, Darmstadt, Germany) to discourage animals from climbing onto the ceiling. 
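Returning to the linear domain-adaptation map d introduced just above, a schematic implementation matches each domain-B 2D pose to its k nearest 2D poses in domain A and then solves a least-squares problem for a linear map between the matched sets. This is only a sketch of the idea described in the text; matrix shapes and the plain least-squares solver are assumptions.

```python
import numpy as np

def fit_linear_domain_map(poses_B, poses_A, k=5):
    """Fit a linear map from domain B poses to domain A poses.

    poses_B : (p, 2*n_joints) flattened 2D poses from the new experiment (domain B).
    poses_A : (m, 2*n_joints) flattened 2D poses from the training experiment (domain A).
    Each B pose is paired with its k nearest neighbours in A, then we solve
    min_W || X_B @ W - X_A || by least squares.
    """
    # Pairwise squared distances between B and A poses.
    d2 = ((poses_B[:, None, :] - poses_A[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]            # (p, k) nearest-neighbour indices in A
    X_B = np.repeat(poses_B, k, axis=0)            # each B pose repeated once per neighbour
    X_A = poses_A[idx.reshape(-1)]
    # Overdetermined whenever k * p exceeds the pose dimensionality.
    W, *_ = np.linalg.lstsq(X_B, X_A, rcond=None)
    return W

# Usage: map a new domain-B pose into domain A before feeding it to the pretrained lifter.
# adapted_pose = new_pose_B @ W
```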
A camera (pco.panda 4.2 M-USB-PCO, Gloor Instruments, Switzerland, with a Milvus 2/100M ZF.2 lens, Zeiss, Switzerland) was oriented with respect to a 45 ° mirror below the arena to capture a ventral view of the fly. An 850 nm infrared LED ring light (CCS Inc. LDR2-74IR2-850-LA) was placed above the arena to provide illumination. Although the experiment contained optogenetically elicited behaviors interspersed with periods of spontaneous behavior, here we focused only on spontaneously generated forward walking. The positions and orientations of individual flies were tracked using custom software including a modified version of Tracktor [70]. Using these data, a 138 × 138 px image was cropped around each fly and registered for subsequent analyses. 2D pose estimation DeepFly3D 2D poses were taken from [52]. OpenMonkeyStudio 2D poses were taken from [53]. CAPTURE 2D poses were taken from [55]. LocoMouse 2D poses were taken from [54]. See these publications for detailed 2D pose estimation procedures. In the prism-mirror setup, we split the data acquired from a single camera into ventral and side view images. We hand-annotated the location of all 30 leg joints (five joints per leg) on 640 images from the ventral view and up to 15 visible unilateral joints on 640 images of the side view. We used these manual annotations to train two separate DeepLabCut [59] 2D pose estimation networks (root-mean-squared errors for training and testing were 0.02 mm and 0.04 mm for ventral and side views, respectively). We ignored frames in which flies were climbing the enclosure walls (thus exhibiting large yaw and roll orientation angles). We also removed keypoints with < 0.95 DeepLabCut confidence and higher than 10 px mismatch along the xcoordinate of ventral and side views. FlyLimbTracker data [56] was manually annotated. Images acquired in the new low-resolution ventral view setup were annotated using DeepLabCut [59] trained on 160 hand-annotated images. Due to the low resolution of images, the coxa-femur joints were not distinguishable. Therefore, we treated the thoraxcoxa and coxa-femur joints as a single entity. Training the LiftPose3D network An important step in constructing LiftPose3D training data is to choose r root joints (see the specific use cases below for how these root joints were selected), and a target set corresponding to each root joint. The location of joints in the target set are predicted relative to the root joint to ensure translation invariance of the 2D poses. The training dataset consisted of input-output pose pairs (x c tr , X tr ) with dimensionality equal to the number of keypoints visible from a given camera c minus the number of root joints r, . Then, the training data was standardized with respect to the mean and standard deviation of a given keypoint across all poses. Tethered Drosophila melanogaster -Of the 38 original keypoints in Ref. [52], here we focused on the 30 leg joints. Specifically, for each leg we estimated 3D position for the thorax-coxa, coxa-femur, femur-tibia, and tibia-tarsus joints and the tarsal tips (claws). Thus, the training data consisted of input-output coordinate pairs for 24 joints (30 minus six thorax-coxa root joints) from all cameras. The training convergence is shown on Extended Data Figure 2A). Freely behaving macaque monkeys-The OpenMonkeyStudio dataset [53] consists of images of freely behaving macaque monkeys inside a 2.45 × 2.45 × 2.75 m arena in which 62 cameras are equidistant horizontally at two heights along the arena perimeter. 
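The per-keypoint standardization mentioned under 'Training the LiftPose3D network' above, z-scoring each keypoint coordinate across all training poses, could look like the sketch below. The statistics must be stored so that test poses are transformed identically and predictions can be un-standardized; function names are illustrative.

```python
import numpy as np

def fit_standardizer(train_poses):
    """Per-coordinate mean and standard deviation over all training poses.

    train_poses : (n_poses, n_coords) flattened 2D input or 3D output poses.
    """
    mean = train_poses.mean(axis=0)
    std = train_poses.std(axis=0)
    std[std < 1e-8] = 1.0                 # guard against constant coordinates (e.g., root joints)
    return mean, std

def standardize(poses, mean, std):
    return (poses - mean) / std

def unstandardize(poses, mean, std):
    return poses * std + mean

train = np.random.randn(1000, 48) * 3.0 + 1.0
mu, sigma = fit_standardizer(train)
z = standardize(train, mu, sigma)         # used for network inputs and targets
```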
We extracted all five available experiments (7, 9, 9a, 9b and 11) for training and testing. Since 2D pose annotations were not available for all cameras, we augmented this dataset during training by projecting triangulated 3D poses onto cameras lacking 2D annotation using the provided camera matrix. We removed fisheye lens-related distortions of 2D poses using the provided radial distortion parameters. We normalized each 2D pose to unit length, by dividing it by its Euclidean norm as well as the 3D pose with respect to bone lengths to reduce the large scale variability of the OpenMonkeyStudio annotations (animals ranged between 5.5 and 12 kg). We set the neck as the root joint during training. We compare our absolute errors to the total body length, calculated as the sum of the mean lengths of the nose-neck, neck-hip, hip-knee, knee-foot joints pairs. Over multiple epochs, we observed rapid convergence of our trained network (Extended Data Figure 2B). Freely behaving mice and Drosophila recorded from two views using a right-angle mirror-Freely behaving mouse data [54] consisted of recordings of animals traversing a 66.5 cm long, 4.5 cm wide, and 20 cm high glass corridor. A 45° mirror was used to obtain both ventral and side views with a single camera beneath the corridor. 2D keypoint positions were previously tracked using the LocoMouse software [54]. We considered six major keypoints-the four paws, the proximal tail, and the nose. Keypoint positions were taken relative to a virtual root keypoint placed on the ground midway between the nose and the tail. The networks were trained on partial ground truth data following pose alignment, as described in the main text. The networks for Drosophila and mouse training data converged within 30 and 10 training epochs (Extended Data Figure 2C,D). Freely behaving rat in a naturalistic enclosure- The CAPTURE dataset contains recordings of freely behaving rats in a 2-foot diameter cylindrical enclosure video recorded using six cameras. Motion capture markers on the animal were tracked using a commercial motion capture acquisition program [55] to obtain 2D poses. Out of 20 possible joints, we limited our scope to the 15 joints that were not redundant and provided most of the information about the animal's pose. The dataset includes 4 experiments recording 3 rats from two different camera setups. Before using LiftPose3D, we removed the distortion from 2D poses using radial distortion parameters provided by the authors. The CAPTURE dataset has many missing 3D pose instances which we handle by not computing the loss corresponding to these keypoints during back-propagation. We selected the neck joint as the single root joint and predicted all of the other joints with respect to this root joint. We observed that LiftPose3D converged within 15 training epochs (Extended Data Figure 2E). Freely behaving adult Drosophila melanogaster recorded from one ventral camera view-For both the newly acquired low-resolution and previously published high-resolution [56] images of freely behaving flies taken using one ventral view camera, we trained a LiftPose3D network on partial ground truth data acquired from the prism mirror system. For the high-resolution data, we considered the thorax-coxa joints as roots. For the low resolution data, the coxa-femur joints were imperceptible. Therefore, the thorax-coxa joints were selected as roots. The training dataset consisted of coordinate pairs Gaussian noise term with a joint-independent standard deviation of 4 px. 
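The Gaussian noise term mentioned at the end of the passage above, whose role is explained next, amounts to perturbing each training 2D keypoint by independent noise with a standard deviation of about 4 px; a minimal sketch is given below. Where exactly in the pipeline the noise is injected is an assumption.

```python
import numpy as np

def add_keypoint_noise(poses_2d, std_px=4.0, rng=None):
    """Perturb 2D keypoints with joint-independent Gaussian noise.

    poses_2d : (n_poses, n_joints, 2) training poses in pixel coordinates.
    std_px   : noise standard deviation in pixels (set to 0 to disable,
               as for the high-resolution dataset).
    """
    rng = rng or np.random.default_rng()
    if std_px == 0:
        return poses_2d
    return poses_2d + rng.normal(scale=std_px, size=poses_2d.shape)

noisy = add_keypoint_noise(np.zeros((100, 30, 2)), std_px=4.0)
```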
The role of this noise term was to account for the keypoint position degeneracy inherent in the transformation from high-resolution prism training data to lower-resolution testing data. For the high resolution dataset this noise term was set to zero. Comparing joint angles derived from lifted 3D and 2D poses To illustrate the benefits of using lifted 3D coordinates versus 2D coordinates for kinematic analyis, we derived the joint angles obtained from 3D coordinates along with projected 2D coordinates. Consider the (2D or 3D) coordinates of three consecutive joints in the kinematic chain of one leg with coordinates u, v, w. Then, vectors s 1 = u -v and s 2 = u -w describe adjacent bones. Their enclosed angle is found by the cosine rule, cos -1 (s 1 · s 2 /(‖s 1 ‖ ‖s 2 ‖)). Due to the uncertainty of 2D and 3D pose estimation, we assumed that keypoint coordinates are Gaussian distributed around the estimated coordinate. As a proxy for the variance we took the variation of bone lengths ||s 1 || and ||s 2 || because they are expected to remain approximately constant owing to the low mechanical compliance of the fly's exoskeleton (with the exception of the flexible tarsal segments). This allowed us to predict 3D joint angles by Monte Carlo sampling (using 5 × 10 3 samples), drawing one sample from each of three distributions and then computing the corresponding joint angle by the cosine rule. The joint angles derived from lifted and triangulated 3D poses were in close agreement (Extended Data Figure 1, red and blue). The errors were low when comparing angle estimate variances to the amount of joint rotation during locomotor cycles. This shows that that our network learned and preserved body proportions-a remarkable fact given the absence of any skeletal constraints, or temporal information. Furthermore, when comparing the joint angles derived from 3D and 2D poses, we found that the predicted coxa-femur 3D joint angles, β, in the front and hindlegs were of larger amplitude than β', derived from projected 2D poses. This is expected since the action of these joints has a large out-of-plane component relative to the x-y plane during walking. In the front leg, the predicted tibiatarsus 3D joint angles, ω, were of smaller amplitude than ω'. Indeed, rotations upstream in the kinematic chain (proximal joints) cause the movement of the whole leg, introducing spurious dynamics in the distal joints when viewed from a projected plane. These results illustrate that 3D poses predicted by LiftPose3D can decouple the underlying physical degrees-of-freedom and avoid spurious correlations introduced by 2D projected joint angles. Extended Data Extended data figure 1. Joint angles resulting from lifting compared with 3D triangulated ground truth and 2D projections. (1), mid (2), or hind (3) positions. D LiftPose3D can be trained using virtual camera projections of 3D poses to lift from cameras within the angles ψ z , ψ y , ψ x (representing ordered yaw, roll, pitch rotations). E Error of 3D poses relative to triangulation using three cameras per keypoint. We compare triangulation error using 2 cameras per keypoint (white), test error for a network trained with known camera parameters (orange) and two angle-invariant networks with narrow (green, ψ z = ±10°, ψ y = ±5°, ψ x = ±5° with respect to a known camera orientation), or wide ranges (red, ψ z = ±180°, ψ y = ±5°, ψ x = ±5°). 
Figure 1 caption, continued (panels F-J): F Error of lifted 3D poses at different virtual camera orientations of the wide-range lifter network and a network with known camera parameters. Blue dots represent lifting errors for a given projected 2D pose. Orange circles represent averages over the test dataset for a given camera. G Error of estimated 3D poses for a network trained and tested on different combinations of behavioral data, including optogenetically-induced backward walking (MDN, left), antennal grooming (aDN, middle), or spontaneous, unstimulated behaviors (control, right). H Two representative images from the OpenMonkeyStudio dataset. 2D poses are superimposed (black). I 3D poses obtained by triangulating up to 62 cameras (red lines), or using a single camera and LiftPose3D (dashed black lines). J Absolute errors for different body parts with respect to total body length. Violin plots represent Gaussian kernel density estimates with bandwidth 0.5, truncated at the 99th percentile and superimposed with the median (gray dot), 25th, and 50th percentiles (black line).
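To make the joint-angle procedure described under 'Comparing joint angles derived from lifted 3D and 2D poses' concrete, the sketch below computes the angle at a joint by the cosine rule and propagates keypoint uncertainty by Monte Carlo sampling with 5 × 10³ draws, as in the text. The isotropic Gaussian noise model and function names are illustrative assumptions.

```python
import numpy as np

def joint_angle(u, v, w):
    """Angle (radians) enclosed at joint u between bones u-v and u-w, by the cosine rule."""
    s1, s2 = u - v, u - w
    cos_a = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def joint_angle_mc(u, v, w, sigma, n_samples=5000, rng=None):
    """Monte Carlo estimate of a joint angle under Gaussian keypoint uncertainty.

    u, v, w : (3,) coordinates of three consecutive joints in the kinematic chain.
    sigma   : standard deviation of the keypoint positions, e.g., taken from the
              observed variation of the adjacent bone lengths.
    Returns the mean and standard deviation of the sampled angles.
    """
    rng = rng or np.random.default_rng()
    angles = np.empty(n_samples)
    for i in range(n_samples):
        angles[i] = joint_angle(u + rng.normal(0, sigma, 3),
                                v + rng.normal(0, sigma, 3),
                                w + rng.normal(0, sigma, 3))
    return angles.mean(), angles.std()

mean_rad, std_rad = joint_angle_mc(np.array([0.0, 0.0, 0.0]),
                                   np.array([1.0, 0.0, 0.0]),
                                   np.array([0.0, 1.0, 0.0]), sigma=0.02)
print(round(np.degrees(mean_rad), 1), round(np.degrees(std_rad), 2))  # ~90 degrees, small spread
```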
Ultrastructural Exploration on the Histopathological Change in Phenacoccus fraxinus Infected with Lecanicillium lecanii The histopathological changes of the second instar nymph of the mealybug Phenacoccus fraxinus infected with Lecanicillium lecanii strain 3.4505 were investigated using light, scanning and transmission electron microscopy. The results demonstrated that L. lecanii 3.4505 could infect P. fraxinus in a short period. At 24 h post-inoculation, the conidia of L. lecanii 3.4505 adhered to the indented gloves or intersegmental folds of the insect body surface. Subsequently, the germinated conidia produced germ-tubes, appressoria and extended hyphae, which tightly adhered to the cuticle. Penetration of cuticle could be achieved either by peg form appressoria or directly by hyphae. Also, the conidia and hyphae could secrete massive mucilages causing visible damage to the host cuticle. After 48 h, the body wall, tissues and organs, including cuticle, trachea, fat body, muscle, Malpighian tubules and nerve ganglion, were destroyed by ramification of hyphae as a result of infection. The endoplasmic reticulum hypertrophied and formed obvious fingerprint agglomerates, and the mitochondria swelled and deformed in the haemocytes. Finally, the mycelium fully occupied the entire haemocoel. The entire bodies were wrapped in a white mycelium, with the mycelium extending radically outward. Introduction The mealybug Phenacoccus fraxinus Tang (Hemiptera: Pseudococcidae) is an important pest on the ash tree Fraxinus spp., causing rampant harm in several provinces and cities in China and seriously affecting the growth of trees and city landscapes [1]. The control effect of chemical pesticides is restricted because this insect secretes abundant protective wax. Furthermore, pesticides cause environmental pollution and injure the natural enemies of pests. Biological control is a potentially important means, and the application of entomopathogenic fungi would be highly desirable in the wider natural ecosystems. Lecanicillium lecanii is a well-know entomopathogenic fungus [2]. It was originally isolated from Lecanii coffeae Walker, in Ceylon by Nivter in 1861. In recent years, most studies concerning this pathogen have focused on its use for the control of aphids, whiteflies, and mites. However, L. lecanii has not been reported in the biological control of P. fraxinus. Moreover, the infection mechanism of L. lecanii on P. fraxinus is not known. In the present study, we investigated the infection process and histopathological changes of P. fraxinus infected by L. lecanii. The results of this research will provide support for this fungal application as a potential biological control agent. Ethics Statement The collection of the second instar nymphs of P. fraxinus was permitted by the Bureau of Parks and Woods of Taiyuan County, Shanxi Province, China. Entomopathogenic fungus and test insects The second instar nymphs of P. fraxinus were collected from the ash trees Fraxinus chinensis Roxburgh (Oleaceae) at Taiyuan city (E112˚53ˊ, N37˚87ˊ) in Shanxi Province in China. One hundred individuals of the living nymphs were used for the experiment. The entomopathogenic fungus L. lecanii strain 3.4505 originally isolated from a species of scale insect was purchased from China General Microbiological Culture Collection Center, and was selected for the trial. The suspension concentration was 5 × 10 7 spores/mL. 
The methods of fungal cultivation, conidial harvest and of insect inoculation were previously described by Liu [3]. Observation of external symptoms The external symptoms of the scale insects infected with the fungus were directly observed at 24, 48, 72, and 96 h post inoculation using a stereomicroscope (OLYMPUS SZ-ST). The infection characteristics and the number of infected insects were recorded, and the photographs were taken using an Olympus C5050Z digital camera (OLYMPUS OPTICAL Co., Ltd). Histopathological observations Sample fixation. At 24, 48, 72, and 96 h post inoculation, approximately 20 scale insects for each observation period were collected and immersed in 4% (v/v) glutaraldehyde (pH 7.2, 0.2 M phosphate buffer) for 48 h at 4°C, respectively. After rinsing thrice with 0.2 M phosphate buffer, the samples were ready for further processing for light microscopy (LM), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). SEM observation. The rinsed samples were dehydrated in a gradient of acetone solutions (10% to 100%) for 10 min at each level. The samples dried using supercritical drying apparatus EMS 850 were fixed on microscope slides and sputter-coated with gold about 20 μm and observed using a SEM (JSM-840 model JEOL Ltd., Japan) operated at 15 kV. Micrographs were taken using a Canon EOS 350D digital camera. TEM observation. The samples for TEM were post-fixed in 1% (v/v) osmium tetroxide (in phosphate buffer) for 3 h at 4°C, dehydrated in an ethanol series (10% to 100%), and embedded in Epon 812. Semithin sections (1 μm) were mounted on glass slides and stained with 1% (v/v) toluidine blue and observed using a LM (Olympus BX-51). Ultrathin sections (0.08 μm) were cut using a Reichert Jung ultramicrotome, collected on copper grids, and counterstained with uranyl acetate and lead citrate. The ultrathin sections were observed using a TEM (JEM-1200EX, accelerating voltage 80 kV). Micrographs were taken on Lucky film and they were scanned using a N-TEK Nuscan 700 scanner. Results and Discussion External characteristics of hyphae on the surface of the host P. fraxinus can generate and excrete various wax substances on its body surface [4]. The observation showed that, despite the wax protection, L. lecanii strain 3.4505 could successfully infect the insects. There were some hyphae on the body surface, particularly around the body marginal regions at 24 h post-inoculation (Fig. 1A). The hyphae grew so rapidly that the white mycelium completely covered the insect body at 96 h post-inoculation, and the insect bodies were shriveled (Fig. 1B). Observation under scanning electron microscope revealed that the hyphae could easily pass through the waxy filaments (Fig. 1C) and even through the wet waxy (Fig. 1D). The intersegmental folds were also easily invaded, with many conidia adhering and producing hyphae (Fig. 1E). In a magnified view, many spores scattered among the mycelia (Fig. 1F). Compared to Ceroplastes japonicus, the hyphae appeared earlier and grew more rapidly on the nymphs of P. fraxinus [3]. For example, the nymphs of P. fraxinus were completely covered with thick hyphae at 96 h post-inoculation while 144 h post-inoculation were necessary in C. japonicus. This difference could be due to the marked characteristics and ultrastructural difference of the wax filaments of P. fraxinus compared with those thick wax textures of C. japonicus [4,5]. 
Additionally, the inhibition of the wax filaments to hyphae invasion in the former was less than that of the thick wax texture of the latter. Fungal adhesion and infection on the host surface Under scanning electron microscopy, at 24 h post-inoculation, few conidia were observed on the intersegmental folds ( Fig. 2A) and in grooves of the body surface (Fig. 2B). The conidia germinated within 24 h after inoculation and subsequently produced germ-tubes and appressoria. In Fig. 2C, one germinated conidium showed a comparatively short germ-tube, and another conidia directly produced an appressorium-like structure, which tightly adhere onto the cuticle. Some of the germ-tube continued growing and developed into hyphae. During the growth period, a specialized cone-shaped appressorium was produced at the end of the hyphae (Fig. 2D). The elongated hypha directly penetrated the cuticle of the scale insect (Fig. 2E). With the action of hyphae during the infection, some cracks were produced on the cuticle. As shown in Fig. 2F, a hypha directly penetrated into the cuticle via a crack. On the leg of the scale insect, massive mucilage was secreted by the hyphae, which in turn might strengthen the adhesion and penetration due to the cuticle degrading enzymes they contain (Fig. 2G-H). The mode of fungal infection of a host is primarily via cuticle invasion [6], which is in agreement with the present study. Similar observations were also made by [3] on L. lecanii against C. japonicus. Additionally, the characteristics of the conidia adhesion, germination, hyphae growth and penetration are similar to the ones observed on C. japonicus infected by L. lecanii [3]. However, in the present study, our investigation clearly showed that the hyphae spread on the insect host cuticle often coated with massive dense mucilage, which might contain extracellular cuticle degrading enzymes, e.g., proteinase, chitinase and lipase that may help the entomopathogen to dissolve and penetrate the cuticle. Some other entomopathogens, i.e., Metarhizium anisopliae, M. flavoviridae and Beauveria bassiana, have been shown to secret an array of similar lytic enzymes [7,8]. Our previous study showed that the activities of extracellular protease and chitinase from L. lecanii significantly increased when the body materials of scale insects were added into PDA medium, and their activities had a linear correlation with the mortalities of the scale insects [9]. Another study indicated that the protease secreted by L. lecanii played important roles in decomposing the cuticle protein and in exposing chitin during the infection against the scale insect [10]. Integument penetration Under transmission electron microscopy, hyphae aggregated in the intersegmental folds and colonized inside the integument (Fig. 3A). In a magnified view, the procuticle structure was destroyed and disappeared near the hyphae (Fig. 3B). In addition, we first observed many hyphae entering the insect trachea in the cuticle, which resulted in tracheal wall damage (Fig. 3C). In Fig. 3D, one hypha is penetrating through the tracheal wall. The foreside of this hypha entered trachea, whereas the middle part was constricted to make it sharper and easier to penetrate the tracheal wall. Therefore, the pathogenic fungus L. lecanii could attack the tracheal system of the scale insect by hyphae penetrating the tracheal wall. The muscle tissue was also heavily destroyed. The hyphae penetrated into the muscle and made its cell membranes break into pieces. 
The cell nuclei were damaged, with part of the nuclear membrane disappearing and with the nuclear chromatin slightly condensing and escaping (Fig. 4D). In a magnified view, the myofibrils were loose and irregular; parts of myofilaments were broken and mitochondria in myocytes seriously damaged (Fig. 4E). Under light microscopy, fat body cells were abundant in the haemocoel of 2nd-instar P. fraxinus. The normal fat bodies had clear outlines, whereas the infected fat bodies near the cuticle had dim margins or blurred outlines (Fig. 5A). The nerve ganglion was severely damaged in the neural lamella endo-perineurium sheath, in the nerve cell bodies around the inner edge, and in the nerve fibers in the center (Fig. 5B). Fungal infection in fat bodies and nerve ganglion was observed in P. fraxinus. In our study, the normal intestinal cells arranged in order, whereas the damaged intestinal cells were deformed, broken, and even separated from the intestinal muscle layers (Fig. 5C). The basal membrane of the Malpighian tubules separated from the cells, and the compact cell layer was loose and irregular (Fig. 5D). The digestive and excretory systems were destroyed whereas no hyphae were observed inside the intestines. Nevertheless, eventually the pathogen completely colonized all the organs and tissues in the infested host. Conclusions Through the above observations and analysis, the L. lecanii strain 3.4505 could successfully infect and cause death to the second-instar nymph of P. fraxinus. The infection process was similar to that observed on C. japonicus; however, the results indicated that L. lecanii 3.4505 could easily infect P. fraxinus in a short time. Tests on the pathogenicity of L. lecanii against P. fraxinus in the field need to be carried in the future to support data obtained by ultrastructural observations. This new evidence is meaningful for better understanding the fungal infection of scale insects and for using this fungus as a potential biological control agent.
Media blind spot over West Papua Indonesia is trying to build an international reputation as a nascent democracy and is proud of having been re-elected in 2007 to the United Nations Human Rights Council for a three-year term. But the problems in West Papua make this democratic reform story questionable. While Indonesia keeps this troubled province off limits to foreign journalists and human rights investigators, Indonesia's human rights credibility should be critically examined. I NDONESIA is trying to build an international reputation as a nascent democracy and is proud of having been re-elected in 2007 to the United Nations Human Rights Council for a three-year term.But the problems in West Papua 1 make this democratic reform story questionable.While human rights investigators, Indonesia's human rights credibility should be critically examined. Indonesia's incorporation of West Papua has been contested ever since it took control in 1963.West Papua's fate was sealed by a 1969 'Act of Free Choice' which is known as the 'Act of No Choice' by the Papuans, since it was carried out under extreme duress and only 1022 men were allowed to vote (Saltford, 2003).The province remains heavily militarised and opposition to Indonesia's rule persists.Grave human rights abuses have been exposed, especially in the post-Suharto years when Papuan nationalists have begun to work with an international solidarity movement to publicise their problems.Some academics consider the survival of the West Papuan people is under threat because of cumulative impacts including poverty, HIV/AIDs, loss of life-sustaining forests, and uncontrolled migration (Wing & King, 2005). Papua.While the West Papua story stays out of international headlines, public realisation and concern remains low.There is little awareness that this Melanesian territory shares the island of New Guinea with its Pacific neighbour Papua New Guinea. Solidarity activists have a struggle to make the issue better known outside the human rights community.As a New Zealand activist, I have found that when we host a West Papuan human rights leader, it is hard to Commentary time to interview our guest.For example, when Reverend Socrates Sofyan Yoman, head of the Papuan Baptist Churches, toured New Zealand in 2006 no metropolitan daily published an interview or quoted his views.Radio New Zealand international is an exception to this pattern.It frequently interviews West Papuan representatives with varying political views both those who are on the ground as well as those on exile. In a departure from previous practice, Indonesia allowed two United Nations rapporteurs to visit West Papua in 2007.Hina Jilani, the UN Special Representative to the Secretary-General on Human Rights Defenders, and Manfred Nowak, UN Special Rapporteur on Torture.A third rapporteur, Philip Jilani (United Nations, 2007) highlighted her concerns about the military and police harassment and intimidation of human rights activists.She referred to 'credible reports' of arbitrary detention, torture, and harassment of those who sought to investigate human rights violations.Some of the people she met with reported being targeted with death threats and intimidation after her departure.These reports were not covered by mainstream media in New Zealand. 
In September 2007, Lucy Williamson, a BBC Jakarta correspondent, gained a rare permit to visit the Central Highlands to report on the opening of an independent radio network, Newsroom 68H, and an associated hydro-electric dam.She also gave a graphic account of underdevelopment and extreme poverty in the area and reported on allegations of human rights abuses. Some West Papuans believe that access may have tightened up again since the BBC visit.Access did not seem to be a problem for Jakarta Post.She was hosted by the giant gold and copper mine Freeport McMoran and her articles were human interest-small Papuan woman drives an enormous earth moving truck-rather than dealing with political or environmental issues (The Jakarta Post, 25 November 2007). However, when West Papuan academic Father Neles Tebay wrote an opinion article entitled 'Papuan peace lovers want troops to leave' and calling for greater respect for Papuan rights and demilitarisation (The Jakarta Post, (known by its acronym TNI) wrote that Tebay's op-ed 'harmed the institution of the TNI and negated Indonesian integrity'.The TNI spokesperson, Vice Marshall Sagom Tamboen, sounded an intimidating note when he accused Tebay of supporting 'those wishing to see Indonesia's disintegration' (The Jakarta Post, 1 December, 2007). One of the strongest international advocates for West Papuan selfdetermination in recent years has been United States Congressman Eni Faleomavaega, a Samoan.He has successfully rallied support from Congressional colleagues, particularly those in the Black Caucus and he has promoted the international campaign for the UN to review its role in the 'Act of Free Choice'.He currently holds the post of Chairman of the Foreign so when he was refused permission to visit West Papua in July, the snub was highly publicised.But Congressman Faleomavaega himself adopted a conciliatory approach and in November the Indonesian authorities facilitated a brief tour for him and the US Ambassador. Brief is the operative word.In the course of two days the US guests were rushed in and out of Timika-where Freeport McMoran mine is located-Biak and Manokwari.The itinerary did not include the capital, Jayapura, and it certainly did not include any interviews with waiting at the places they believed he would visit.However, they were outmaneuvered by police who managed to ensure the US guests had time only for formal meetings with the governor and a handful of other political leaders.In Manokwari the authorities were so concerned to 'protect' their guests that they transported the US pair along a badly potholed back road from the airport into town.According to an emailed report from West Papua, a few determined Papuans managed to garland him as he was ushered through the airport tarmac with banners as the plane was taking off (Warinussy, 2007).story remains untold. New Zealand foreign policy and West Papua There is another interlinked story which is also not being told-about New Zealand's role in West Papua and New Zealand's ongoing support for Indonesia.It is interesting to note that New Zealand diplomats do not get the run around when they request permission to visit West Papua.New Zealand puts high value on its good bilateral relationship with Indonesia and has worked to put the relationship on a good footing again after a period of instability following East Timor's liberation and the horrendous Indonesian military and militia violence which preceded it.New Zealand policing for the West Papuan police (Peters, 2007). 
The New Zealand government has chosen a strategy of 'engagement' government is careful to signal its continuing support for Indonesia's 'territorial integrity'.The military ties which were suspended in 1999 in response to the cataclysmic violence in East Timor were quietly resumed in 2006 (Peters, 2006), and the government has said little on the issue of impunity despite the fact that many of those responsible for crimes against humanity in East Timor Australian academic and writer on West Papua, Dr Peter King (2006), describes both the Indonesian military and police as 'quasi states' to illustrate their deep involvement in corruption and lack of accountability to the civilian government.In West Papua he notes that of the 137 cases brought by the police in a logging scandal in 2005, not one has resulted in conviction. In 2000 there was a tantalising window on New Zealand's West Papua diplomacy when the government seemed prepared to risk the possibility of Indonesian disapproval.In October that year, the Indonesia Human Rights Committee hosted the international representative of the Free West Papua Movement (OPM), John Ondawame.NZ Foreign Minister Phil Goff agreed arrived at Parliament on October 18, Green Party Foreign Affairs spokesperson Keith Locke staged a colourful welcome as he and several MPs-including Goff said he was also willing to meet privately with the Papua Presidium's charismatic leader Chief Theys Eluay (MFAT, 2000b), although this visit never eventuated before Theys Eluay's death at the hands of the Indonesia military in November 2001. At the UN Millennium Summit in New York the leaders of Nauru and Vanuatu had both spoken in support of self-determination for their fellow Melanesians in West Papua.(Sope, 2000;and Dowiyogo, 2000) and were to the concern of Indonesia. For a few months New Zealand was the focus of a diplomatic offensive from both Indonesia and West Papuan representatives.Indonesia sent a suggests that New Zealand was contemplating taking a more robust stand.In his discussions with the Indonesian delegation, the then Minister referred to recent violence in Papua and stressed that while New Zealand wished to see a 'stable, democratic, prosperous and united Indonesia.Indonesia's unity was dependent on how Jakarta sought to resolve separatist tensions, rather than external statements about Indonesia's territorial integrity' (MFAT, 2000c). Presidium called on Foreign Minister Goff to ask New Zealand to play a mediation role as a 'neutral third party'.While the delegation was told this was unlikely to happen 'given Indonesia's views on outside involvement in matters of territorial integrity' (MFAT, 2000d), New Zealand's 'offer' to help with dialogue was discussed in newspaper articles in Australia and New Zealand (West Papua requires cautious approach, 2000; Indonesia plans regional summit, 2000). By the end of 2000.the Papuan 'spring' was over-key independence leaders, including Theys Eluay, had been detained-and Brimob police had coming to an end (New Zealand Herald, 8 October 2002).New Zealand warmly welcomed the 2001 Special Autonomy legislation for Papua, and ever in Papua' is the full implementation of the special autonomy package (Goff, 2003).Not only has New Zealand chosen to work with the wrong people, there seems any group or person connected to the self-determination cause. A heavily censored copy of the December 2006 report by the Second Secretary of the New Zealand Embassy has been released to IHRC under the . 
the full implementation of the Special Autonomy Law on Papua (OTSUS) in the context of our commitment to the territorial integrity of the Republic of Indonesia (NKRI) and to do what we can to encourage stronger adherence to basic human rights standards in the province'. distanced itself from 'some' NGOs.'We fully appreciate that in a free society New Zealand NGOs have every right to make their views known and that some will continue to support the Papuan separatist cause and inevitably irritate the Indonesian authorities.2007). In recent years, the West Papuan movement has united around the call for peaceful dialogue with Indonesia, and a recent leaders' summit formed a new alliance, the West Papua Coalition for National Liberation (WPCNL) (Makabory, 2007).The new umbrella organisation includes the Free Papua Movement (OPM) and is calling for the involvement of an internationally recognised mediator.Vanuatu is proposing to call on the United Nations Decolonisation Committee to re-inscribe West Papua on its list of nonself-governing territories.New Zealand is not supporting these initiatives, and the Green Party is the only Parliamentary voice which consistently takes a stand for West Papua's right to self-determination and in support of Melanesian initiatives. 2 During the Indonesian occupation of East Timor, New Zealand played a key role in the United Nations supporting the Indonesian position, and now the same scenario is playing out over West Papua.There are exceptions, including the work of Scoop writer Joseph Barratt (2007), but the New Zealand media is essentially ignoring this important story. Notes 1 . In 2001, Indonesian President Abdurrahman Wahid conferred the name 'Papua' on the province which had previously been known as Irian Jaya.However, Papuan nationalists and their supporters use 'West Papua', the name chosen in 1961 by the At the same time, care is needed to ensure that there to NGO activities that support Papuan separatism and thus undermine New Zealand government policy'(NZ Embassy, 2007).NewZealandnow has privileged and rare access to West Papua.This the controversial Freeport McMoran Mine (NZ Embassy, 2006) and have witnessed illegal logging inside Wasur National Park' (Van der Vloodt,
2019-05-08T13:27:02.030Z
2008-04-01T00:00:00.000
{ "year": 2008, "sha1": "d7ed314304737cb52bc10233ac9a89a6a6bf7e0d", "oa_license": "CCBYNC", "oa_url": "https://ojs.aut.ac.nz/pacific-journalism-review/article/download/932/1131", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "d7ed314304737cb52bc10233ac9a89a6a6bf7e0d", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Sociology" ] }
255328245
pes2o/s2orc
v3-fos-license
Common ABCB1 SNP, C3435T could affect systemic exposure of dapagliflozin in healthy subject P-glycoprotein (P-gp) is a transporter that plays an excretory role in epithelial cells. It is encoded by ABCB1, and single nucleotide polymorphisms (SNPs) in this gene can affect systemic drug exposure. Dapagliflozin and sitagliptin, used in type 2 diabetes treatment, are P-gp substrates. Here, we aimed to investigate whether ABCB1 polymorphisms affect dapagliflozin and sitagliptin pharmacokinetics (PK) in healthy Korean subjects. The study population consisted of 100 healthy Korean subjects (94 men and 6 women) who participated in four different clinical trials and received dapagliflozin and sitagliptin doses of 10 and 100 mg, respectively. We determined ABCB1 genotypes for the C3435T, C1236T, and G2677T/A SNPs. The relationship between the genotypes and dapagliflozin PKs was examined. Dapagliflozin and sitagliptin PK parameters were not statistically significantly affected by ABCB1 SNP genotypes. However, homozygous 3435TT subjects showed higher dapagliflozin PK parameters than CT and CC subjects. In subjects with the 3435TT and those with 3435CC and 3435CT genotypes, mean Cmax, AUCinf, and AUC0-1 values of dapagliflozin were 223.06 ng/mL and 194.81 ng /mL (p = 0.2767), 673.58 ng*h/mL and 573.96 ng*h/mL (p = 0.0492), and 128.53 ng*h/mL and 104.61 ng*h/mL (p = 0.2678), respectively. In summary, dapagliflozin and sitagliptin PK parameters were not significantly different between individuals with C1236T and C2677T/A ABCB1 genetic polymorphisms. Dapagliflozin exhibited higher systemic exposure in 3435TT subjects than in CC/CT subjects. INTRODUCTION The concentrations and clinical effects of most drugs on the market differ between individuals due to several factors. To provide more effective treatment and prevention strategies against diseases, the concept of personalized drug therapy i.e., precision medicine, has emerged. To achieve the goals of precision medicine, it is essential to identify diseaseand drug metabolism-related genetic or genomic factors [1,2]. According to the US FDA early phase clinical trial guidelines, genetic differences between individuals can affect factors associated with several disease and its treatment, including the rate of disease occurrence and the risk of disease progression or recurrence. Therefore, blood and/or urine DNA sample collection from healthy subjects, as well as the collection of information on metabolic or transporter gene polymorphisms that influence drug exposure and response, are recommended [3]. ICH E17 guidelines suggest that the impact of genetic differences, such as polymorphisms in drug-metabolizing enzymes or drug molecular targets, on the effects of treatments, could be explored in an early phase clinical trials. Furthermore, the definition of terms used in pharmacogenomics, as well as sample handling methods, are described in the E15 and E18 guidelines [4,5]. Sitagliptin was the first dipeptidyl peptidase 4 (DPP4) inhibitor to be approved by US FDA. DPP4 inhibitors prolong the action of incretin hormones, such as glucagon-like peptide 1 and glucose-dependent insulinotropic pepttide, which stimulate insulin secretion and suppress glucagon secretion, respectively [6,7]. Thus, DPP4 inhibitors are widely used as monotherapies or in combination with other antihyperglycemic drugs for type 2 diabetes mellitus (T2DM) treatment [8]. 
Dapagliflozin is a potent and selective sodium-glucose co-transporter 2 (SGLT2) inhibitor which improves glycemic control in T2DM patients [9,10]. As dapagliflozin has been shown to lower the risk of cardiovascular disease that is a major T2DM complication, and improves hyperglycemia independently of insulin, it is an attractive option for T2DM treatment in combination with DPP4 inhibitors [11]. Until present, several studies have demonstrated the effectiveness of the combined use of SGLT2 and DPP4 inhibitors for T2DM treatment [12,13]. Dapagliflozin is mainly metabolized via direct UGT1A9-mediated glucuronide conjugation; however, sitagliptin is barely metabolized and over 80% of the drug is eliminated as unchanged compound in the urine [14,15]. Therefore, there is no PK interaction between dapagliflozin and sitagliptin [16]. Despite differences in metabolism, both these drugs are P-glycoprotein (P-gp) substrates [14,17]. P-gp, also called ATP-binding cassette transporter B1 (ABCB1) or multidrug resistance protein 1, is a transporter protein expressed in the membranes of several tissues, including colonic, small intestinal, pancreatic and bile ductile, blood-brain barrier, and kidney proximal tubular tissues. On the apical surfaces of intestinal or bile duct enterocytes and renal tubular cells, P-gp plays an excretory role [18,19]. Single-nucleotide polymorphisms (SNPs) of ABCB1, which encodes P-gp, have been widely studied as they affect the systemic concentrations of some drugs. Among these SNPs, the most well-studied include C1236T (rs1128503) on exon 12, G2677T/A (rs2032582) on exon 21, and C3435T (rs1045642) on exon 26. G2677T/A is a non-synonymous SNP that directly influences amino acid substitutions. Although C1236T and C3435T are considered silent SNPs, some studies have reported that mutations in their sequences are associated with changes in the characteristics of their corresponding proteins [20][21][22]. This study therefore aimed to investigate the relationship between dapagliflozin/sitagliptin pharmacokinetics (PK) and ABCB1 SNPs in healthy Korean subjects who received single oral dapagliflozin and sitagliptin doses. https://tcpharm.org Subjects The study population consisted of 100 healthy Korean subjects (94 men and 6 women) who participated in 4 different clinical trials performed at Clinical Trial Center of Chungbuk National University Hospital. All studies were approved by the Korean Ministry of Food and Drug Safety, as well as the Institutional Review Board of Chungbuk National University Hospital (IRB No.: H2020-07-016, H2021-06-012, H2021-09-031, H2022-04,010). In addition, the studies were conducted in accordance with the Declaration of Helsinki and within the tenets of Korean Good Clinical Practices. Participants provided written informed consent during the screening period before any study-related procedures were performed. Included in this study were subjects aged 19 years or more, with body mass indices (BMI) ranging from 18.0-30.0 kg/m 2 . Subjects were excluded from the study if they had a clinically significant history or were suffering from any disease that could affect dapagliflozin and sitagliptin safety or PK. In addition, subjects with estimated glomerular filtration rates less than 60 mL/min/1.73 m 2 , as determined using the CKD-EPI equation, as well as those with AST, ALT, GGT, or total bilirubin levels greater than 2 times the upper normal limit were excluded from the study. 
Study design All 4 trials had a randomized, single-dose, two-sequence, two-period, crossover design. Subjects were orally administered 10 mg of dapagliflozin (Forxiga Tab 10 mg) and 100 mg of sitagliptin (Januvia Tab 100 mg) with 150 mL of water at a period determined based on the randomization method. Washout periods were at least 7 days. PK evaluation Plasma dapagliflozin concentrations were determined before drug administration and at 10, 20, 30, and 45 minutes, and 1, 1.5, 2, 3, 4, 6, 8, 12, 24, and 48 hours following drug administration. In two of the clinical trials, blood was collected at 15-min intervals rather than 20-min intervals, and additional samples were collected 5 and 7 hours following drug administration. Plasma sitagliptin concentrations were evaluated from blood samples collected before drug administration and at 15, 30, and 45 minutes, and 1, 1.5, 2, 3, 4, 5, 6, 8, 12, 24, and 48 hours after administration. In two of the clinical trials, additional blood samples were collected 10 minutes and 72 hours following drug administration. At each blood sampling point, 5-8 mL of blood was collected in a sodium heparin tube and centrifuged at 3,000 rpm for 10 minutes at 4°C. Then, the supernatant was collected in Eppendorf tubes and stored at −70°C until analysis. In each analysis in a separate study, the concentration of the plasma dapagliflozin and sitagliptin was measured using high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS) in negative-ion and positive-ion electrospray mode, respectively. The entire analytical procedure was validated in terms of linearity, accuracy, precision, limit of detection and limit of quantification, interference, carryover, selectivity and matrix effect on its performance, recovery, dilution integrity, and stability. Single-dose dapagliflozin (10 mg) and sitagliptin (100 mg) PK parameters were determined through a non-compartmental method using Phoenix ® WinNonlin ® 8.1 (Certara, L.P., St. Louis, MO, USA). The PK parameters evaluated included the maximum plasma concentration (C max ), which was directly determined from the individual drug plasma concentration-time profiles. The areas under the concentration-time curve from time 0 extrapolated to infinite, https://tcpharm.org from time 0 to 1, and from time 0 to 2 (AUC inf , AUC 0-1 , and AUC 0-2 ) were calculated using linear-up log-down trapezoidal rules. Genotyping Blood samples (3 mL) were collected from each subject before drug administration. The DNA for ABCB1 genotyping was isolated from 100 μL of peripheral whole blood using a Maxwell ® CSC Blood DNA Kit and Maxwell ® CSC Instrument (Promega, Madison, WI, USA). The genotyping was conducted using TaqMan allelic discrimination assays on a real time-polymerase chain reaction (PCR) System (Applied Biosystems ® , Foster City, CA, USA). The PCR reaction mixture comprised 5 μL of 2X TaqMan Universal Master Mix II, 0.5 μL of 20X Drug Metabolism Genotyping Assay Mix, 1 μL of DNA, and 3.5 μL of DNase-free water. The genotyping for the ABCB1 SNPs rs1128503 (1236C>T, assay ID: C___7586662_10), rs2032582 (2677G>T/A, assay ID: C__11711720C_30, C__11711720D_40), and rs1045642 (3435C>T, assay ID: C___7586657_20) were performed with validated TaqMan Genotyping Assays purchased from Applied Biosystems. The PCR reactions were carried out as follows: initial denaturation at 95°C for 10 minutes, 40 cycles for denaturation at 95°C for 15 seconds, and anneal/extension at 60°C for 1 minute. 
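The non-compartmental parameters described in the PK evaluation subsection above (Cmax read directly from each concentration-time profile, AUC by the linear-up log-down trapezoidal rule, and AUCinf by extrapolating with the terminal slope) were computed in the study with Phoenix WinNonlin 8.1. As a rough, non-authoritative sketch of what such a calculation involves, the short Python function below re-implements it for a single profile; the function name, the choice of three terminal points for the lambda-z fit, and the toy concentration values are illustrative assumptions, not study data and not the WinNonlin algorithm.

import numpy as np

def nca_parameters(t, c, n_terminal=3):
    """Illustrative non-compartmental PK summary for one subject.

    t : sampling times (h), c : plasma concentrations (ng/mL).
    Uses the linear-up / log-down trapezoidal rule for AUC and a
    log-linear fit of the last `n_terminal` points for lambda_z.
    """
    t, c = np.asarray(t, float), np.asarray(c, float)
    cmax = c.max()
    tmax = t[c.argmax()]

    # AUC from time 0 to the last sampling time
    auc_last = 0.0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        c0, c1 = c[i - 1], c[i]
        if c1 >= c0 or c0 <= 0 or c1 <= 0:
            auc_last += dt * (c0 + c1) / 2.0               # linear trapezoid (rising or equal)
        else:
            auc_last += dt * (c0 - c1) / np.log(c0 / c1)   # log trapezoid (declining)

    # terminal slope lambda_z from a log-linear fit of the last points
    tt, cc = t[-n_terminal:], c[-n_terminal:]
    slope, _ = np.polyfit(tt, np.log(cc), 1)
    lambda_z = -slope

    auc_inf = auc_last + c[-1] / lambda_z if lambda_z > 0 else np.nan
    return {"Cmax": cmax, "Tmax": tmax, "AUClast": auc_last,
            "AUCinf": auc_inf, "lambda_z": lambda_z}

# toy example (made-up concentrations, not study data)
times = [0, 0.5, 1, 2, 4, 8, 12, 24, 48]
conc  = [0, 150, 210, 180, 120, 60, 30, 8, 1]
print(nca_parameters(times, conc))

Partial areas such as AUC0-1 and AUC0-2 follow from the same rule applied over the restricted time interval.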
After the amplification, the allelic discrimination results were obtained by performing an end-point read using the QuantStudio™ 3 Real-Time PCR System software version 1.5.2 (Applied Biosystems). Statistical analysis Statistical analyses were performed using the SAS software (version 9.4; SAS Institute, Inc., Cary, NC, USA). All descriptive data are presented as mean ± standard deviation (SD) for continuous variables and as frequencies and percentages for categorical variables. To compare the demographic and PK parameters across the ABCB1 genotypes, one-way analysis of variance (ANOVA) or the Kruskal-Wallis test was used, depending on normality, as determined using the Shapiro-Wilk and Kolmogorov-Smirnov tests. ABCB1 genotype distribution in Korean subjects The genotype and allele frequencies for the C1236T, G2677T/A, and C3435T SNPs in the 100 Korean subjects are shown in Table 1. No statistically significant differences in genotype distribution were observed (p = 0.4924, 0.4289, and 1.0000 for C1236T, G2677T/A, and C3435T, respectively), and the population distribution for all three SNPs was in Hardy-Weinberg equilibrium. The C1236T, G2677T, G2677A, and C3435T mutations had variant allele frequencies of 54.5%, 20.5%, and 30.0%, respectively. The distributions for each genotype were similar to those previously published in Japanese and Chinese populations [23,24]. The mean (SD) age, height, weight, and BMI of the subjects were 25.42 (6.90) years, 172.76 (5.96) cm, 71.83 (9.55) kg, and 24.02 (2.54) kg/m2, respectively, and none of these factors was found to affect the genotype distribution (Supplementary Table 1). Influence of individual SNPs on dapagliflozin PK Subjects were grouped according to SNP genotype, and the influence of SNPs on dapagliflozin PK was evaluated. Following the administration of a single 10-mg oral dose of dapagliflozin, there was a significant difference in AUCinf between subjects with the 3435CC and 3435CT genotypes (n = 91) and those with the 3435TT genotype (n = 9), with observed mean (SD) AUCinf values of 573.96 (154.80) ng*h/mL and 673.58 (118.21) ng*h/mL, respectively (p = 0.0492) (Fig. 1, Table 2). Although both Cmax and AUCinf of plasma dapagliflozin were higher for the 3435TT genotype than for the 3435CC and 3435CT genotypes, the difference in mean Cmax (223.06 vs. 194.81 ng/mL) did not reach statistical significance (p = 0.2767). Influence of individual SNPs on sitagliptin PK Subjects were grouped by SNP genotype, and the effects of the three SNPs on sitagliptin PK were evaluated. The 2677AA genotype was detected but not included in the investigation due to its low frequency. No significant differences in sitagliptin PK were observed between subjects with mutations at positions 1236, 2677, and 3435, or in their dominant/recessive models (Tables 2 and 3, Fig. 4). However, the mean (SD) Cmax of sitagliptin in subjects with the 1236CT and 1236TT genotypes was 446.62 (111.28) ng/mL and 473.77 (144.98) ng/mL, respectively, and these values were lower than that in subjects with the 1236CC genotype, in whom a mean (SD) Cmax of 506.87 (117.07) ng/mL was observed (p = 0.1762) (Table 3, Fig. 5). For the C3435T polymorphism, no trends were observed for mean Cmax, AUCinf, and AUC0-2. DISCUSSION This is the first study to explore the effect of ABCB1 SNPs on the PK of dapagliflozin. Since dapagliflozin and sitagliptin are P-gp substrates, we expected ABCB1 polymorphisms to influence their PK. 
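The statistical workflow described in the statistical analysis subsection above (a normality screen followed by one-way ANOVA or the Kruskal-Wallis test, together with a Hardy-Weinberg equilibrium check on the genotype counts) can be sketched in a few lines. The study itself used SAS 9.4 and also applied the Kolmogorov-Smirnov test; the Python/scipy sketch below only illustrates the logic, and the group sizes, means, and function names are hypothetical.

import numpy as np
from scipy import stats

def hardy_weinberg_chi2(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test against Hardy-Weinberg proportions."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)                 # estimated frequency of allele A
    q = 1.0 - p
    expected = np.array([p * p, 2 * p * q, q * q]) * n
    observed = np.array([n_AA, n_Aa, n_aa])
    chi2 = ((observed - expected) ** 2 / expected).sum()
    return chi2, stats.chi2.sf(chi2, df=1)          # 1 df: 3 classes - 1 - 1 estimated allele freq

def compare_pk_by_genotype(groups, alpha=0.05):
    """ANOVA if every group passes Shapiro-Wilk normality, otherwise Kruskal-Wallis.

    groups : list of 1-D arrays of a PK parameter (e.g. AUCinf), one per genotype group.
    """
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if normal:
        return "ANOVA", stats.f_oneway(*groups).pvalue
    return "Kruskal-Wallis", stats.kruskal(*groups).pvalue

# toy usage with invented numbers (not the study data)
print(hardy_weinberg_chi2(45, 43, 12))
cc_ct = np.random.default_rng(0).normal(574, 155, 91)   # hypothetical AUCinf values, CC/CT group
tt    = np.random.default_rng(1).normal(674, 118, 9)    # hypothetical AUCinf values, TT group
print(compare_pk_by_genotype([cc_ct, tt]))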
However, we found no statistically significant relationship between ABCB1 SNPs and the PK parameters of sitagliptin and dapagliflozin. Nevertheless, higher dapagliflozin systemic exposure was observed in subjects with the 3435TT genotype as compared to those with the CC + CT genotypes. Based on the known mode of action of P-gp, diverse studies have investigated whether ABCB1 SNPs can induce a clinically significant change in drug systemic concentrations and effects. Several studies have investigated the functional significance of the C3435T SNP in the disposition of digoxin, a well-known P-gp substrate. In a study involving 34 Caucasian subjects, digoxin concentrations were found to be 20% higher in 3435TT homozygous subjects than in CC and CT subjects [25]. Similarly, using a population PK approach, digoxin clearance was found to be reduced by 26.6% in subjects with TT alleles as compared to CC and CT subjects [26]. However, Becquemont et al. found no relationship between the C3435T SNP and digoxin concentrations following the administration of a single digoxin dose [27]. Contrary to these findings, a Japanese study involving 114 subjects reported a 20% lower AUC for digoxin in subjects with the 3435TT genotype than in subjects with the CC and CT genotypes [28]. As seen in the case of digoxin, conflicting findings have been reported even for the same drug concerning the relationship between ABCB1 SNPs and its concentrations [29,30]. Therefore, the current state of knowledge does not permit reliable predictions concerning ABCB1 SNP-induced changes in systemic drug concentrations and effectiveness. Determining the effects of ABCB1 SNPs on drug PK is challenging. First, most drugs pass through multiple pathways during the disposition process. Dapagliflozin is mainly metabolized by UGT1A9, but other CYP enzymes and OAT3 also contribute to its excretion process [14]. Sitagliptin metabolism and excretion involve CYP2C8 and transporters such as OAT3 and OATP4C1 [15]. Therefore, these other pathways could be confounding factors that interfere with the analysis of the relationship between P-gp and drug PK [31]. In addition, a previous study found P-gp expression levels to be 1.5-2 times lower in subjects with the 3435TT genotype than in those with the CC and CT genotypes [32]. As only a single dose was administered in this study, it may have been difficult to observe a clear difference in PK due to an insufficient amount of substrate for P-gp saturation in the intestinal lumen or renal proximal tubule [33]. This study had some limitations. First, P-gp expression may vary depending on sex and race. In a study that involved the administration of cyclosporin to Caucasians, significant differences in both ABCB1 expression and blood cyclosporin concentrations were observed between male and female subjects [34]. In our study, sex-specific PK differences were not observed for dapagliflozin and sitagliptin (Supplementary Table 2). Since this study involved 94 men and only 6 women, the number of female subjects was not sufficient to verify possible sex-related differences. Second, the frequencies of the homozygous 2677AA and 3435TT genotypes, which are associated with differences in drug PK and efficacy, were relatively low. Therefore, it was difficult to evaluate differences in drug PK with respect to these SNPs. 
However, the participants of this study were considered to represent the standard population, as they satisfied the Hardy-Weinberg equilibrium requirement and the SNP frequency distribution in this population was not significantly different from that reported in other Asian populations [23,24]. Third, pharmacodynamic biomarkers, such as blood glucose, were not assessed in this study, and the relationship between ABCB1 SNPs and PD or clinical outcomes was not evaluated. Further prospective studies are necessary to address these problems. In summary, dapagliflozin showed higher systemic exposure in subjects with the 3435TT genotype as compared to those with the CC or CT genotypes. However, the C1236T and G2677T/A SNPs did not affect the PK parameters of dapagliflozin and sitagliptin. SUPPLEMENTARY MATERIALS Supplementary Table 1. Demographic characteristics of the study participants with respect to ABCB1 SNP genotype.
2023-01-01T16:18:25.306Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "8477a8bd302ed1e6fe593f312bdf7b3e434de10c", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.12793/tcp.2022.30.e23", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aa52a5b2ff4bbb4d3b528d4a9a7c4b8be250ede3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
271305091
pes2o/s2orc
v3-fos-license
Russia–Ukraine Propaganda on Social Media: A Bibliometric Analysis : This study presents a systematic review of the scholarly literature on Russia–Ukraine Propaganda on Social Media over the last ten years. This study performs a bibliometric analysis of articles published in the last ten years (2012–2022) and acquired from the Scopus database, followed by a brief content analysis of top articles from leading sources. Furthermore, the study aims to find gaps in the literature and identify the research area that could be developed in this context. VOSviewer application was used for data mining and data visualization from Microsoft Excel. Some interesting facts were found in the bibliometric analysis regarding research and other perspectives. Though the study was related to the propaganda of Russia and Ukraine, the USA is identified as the most attentive country in terms of research and publication on the topic. On the other hand, Russia published many articles regarding its own propaganda on social media. Introduction Social media refers to the processes by which individuals generate, share, and/or exchange information and ideas within virtual communities and networks.This interactive media technology promotes the creation and dissemination of information, ideas, and other forms of expression (Kietzmann et al. 2011;Obar and Wildman 2015).Nowadays, a large number of people are connected via social media, with 4.8 billion people, to be exact, in 2021 (Dean 2021).Life has become easier because of social media as it has become a medium of interaction with friends and families, sharing opinions and expressions, receiving information, posting media, doing business, and so on (Cleveland et al. 2023;Khaola et al. 2022).Yet, social media has become a double-edged sword as it also exacerbates divisions, helps spread conspiracy theories, allows people to exist in 'echo chambers ', etc. (D'souza et al. 2021).Media houses, independent media, and other sources of information are now exceedingly active in social media to spread real-time news.There has been huge competition in spreading the news as fast as possible to reach more users.For those who have the privilege of an internet connection, almost every sector is connected to social media.Not only certified media houses or agencies but also government officials and every sector of the internet-privileged world exist in different types of social media to spread information.Here comes the real challenge.Information is not in some specific organizations/agencies' hands.Anyone can share any information on social media within a second, whether it is verified or not.The considerable amount of unverified information creates misinformation or false information, which diverts the actual purpose of information (Allcott and Gentzkow 2017;Jabeen et al. 2023;Kim 2023;M. Wu and Pei 2022). 
False information can be categorized based on its intent.Disinformation is false information deliberately spread to deceive people, often with political motives, and is a form of propaganda.Misinformation, on the other hand, is false information spread without harmful intent (Bertolami 2022;Hasan 2023;Petratos 2021).Both disinformation and misinformation exist in the current world's social media.Social media is saturated with misinformation and disinformation.The use and availability are uneven though.Some countries in the world even specialize in it.Academics have recently paid great attention to the massive spread of disinformation through social media.Social media has facilitated the rapid dissemination of rumors and false information to a large audience (Wu et al. 2019). Besides social media disinformation, Russia and Ukraine are significant parts of the geopolitical politics of the world.Russia and Ukraine were part of the USSR (Union of Soviet Socialist Republics) before Russia changed its political and economic systems-Ukraine became independent in the 1990s.Russia and Ukraine have a close history related to their borders, economics, culture, and family ties (Masters 2022).The intention between Russia and Ukraine is central to world politics as there have been few conflicts between those countries in this 21st century.The biggest one was the armed conflict in eastern Ukraine, where Russia annexed the Crimea peninsula of Ukraine in 2014.Since Russia's invasion began on 24 February 2022, the deadliest war in Europe since World War II is going on between these two countries. The Kremlin's propaganda has a huge historical background to shape people's views.There has been a war that is already known as a social media war or the first TikTok war; the terms disinformation, propaganda, and misinformation on social media are being discussed everywhere (Liñán 2010;Scriver 2015), and there has been huge manipulation of information in social media by both parties.As modern research focuses on social media fake news or disinformation (Shu and Liu 2019), there is a lot of research related to propaganda on social media, with multiple studies focusing on the Russia and Ukraine context.This study aims to identify existing literature on Russia-Ukraine Propaganda on Social Media, analyze its origins, and find gaps for future research. Methods and Materials This systematic review adhered to PRISMA (Preferred Reporting Items for Systematic Review and Meta-Analysis) 2020 guidelines, and has been registered in the SSRN database (Page et al. 2021).This study systematically reviews the scholarly literature on Russia-Ukraine propaganda on social media.It identifies and describes the connection between Russian and Ukrainian propaganda appearing on social media.A systematic literature review is a scientific procedure that identifies gaps in the relevant literature and develops a potential research topic by identifying themes, trends, and weak points (Wright et al. 2007).Figure 1 also represents the data found in the Scopus database related to the research titled as PRISMA flow chart. 
All fundamental descriptions and details of the data used in this research are delineated in Table 1. The study comprised 456 documents published from January 2012 to October 2022. The author analyzed the research related to Russia-Ukraine and Social Media Propaganda over the last 10 years. Among them, a vast majority (83.33%) are articles and academic conference papers, which account for 56.8% and 26.53%, respectively. The remaining 16.67% of documents are book chapters (12.06%) and review papers (3.07%). There are a limited number of letters and editorials (see Table 2). A total of 758 authors made a scientific contribution to Russia-Ukraine-related propaganda appearing on social media. To be more precise, of those 456 documents, 211 documents are single authored. The remaining multi-authored documents involve 547 contributing authors. The average number of citations per document is relatively high, at 185.30. Figure 2 depicts the sample selection procedures used in this study in detail. This study gathered data from 512 documents in the Scopus database in Excel format. This study included publications from January 2012 to October 2022. The keywords used in Scopus searches were "Russia Ukraine", "Propaganda", and "Social Media". Only materials produced in the English language were included in the analysis (journals, articles, books, conference papers, letters, reports, reviews, and notes). VOSviewer software version 1.6.18 (www.vosviewer.com, accessed on 10 November 2022) was used for bibliometric analysis. VOSviewer is a tool for designing and visualizing bibliometric networks. These networks, which may include journals, researchers, or individual articles, can be constructed based on citation, bibliographic coupling, co-citation, or co-authorship links. VOSviewer also has text mining tools for creating and visualizing co-occurrence networks of key phrases collected from the scientific literature (VOSviewer 2022). Thematic analysis was performed using MS Excel. A manual selection of topics in VOSviewer was used to generate network and density visualizations for the purpose of analyzing a variety of data characteristics. Later in this study, these types of bibliometric analysis were performed: • Co-occurrence of keywords; • Most highly cited sources, authors, and documents; • Bibliographic coupling. In the next section, the study shows the results and discussion of that analysis. In the last part, the study concludes with the future research prospects and limitations of this study. 
Bibliometric Findings The research assessed the co-occurrence of all keywords used in the literature on Russia-Ukraine, Propaganda, and Social Media. In this study, the most frequently cited sources, authors, and documents were examined. In addition, bibliographic coupling was used to identify common sources between articles. In the following section, the bibliometric findings are presented. Co-Occurrence of Keywords Because the topic of Russia-Ukraine, Propaganda, and Social Media literature is diverse, there are 1376 keywords that appear throughout the literature. The study screened the keywords by assigning a minimum frequency of ten occurrences per key term, and 18 of the 1376 met the criterion. The size of the nodes in Table 3 and Figure 3 represents the frequency of the keyword. The data suggest that "Social Media" was the most frequently occurring keyword, with 71 occurrences. Other frequently used keywords, with the number of occurrences in parentheses, include "Russia" (68), "Ukraine" (52), "Disinformation" (44), "Propaganda" (36), "Russian Federation" (28), "Social Networking" (24), "Fake News" (23), "Information Warfare" (19), "Twitter" (17), "Foreign Policy" (13), "Misinformation" (12), "War" (12), "Media Role" (10), "Internet" (10), "Authoritarianism" (10), "Internet Relations" (10), and "Conflict" (10). The findings also suggested that the strongest link was found between the terms "Social Media" and "Disinformation." Additionally, "Social Media" was found to be closely related to "Russia", "Ukraine", "Disinformation", and "Propaganda." These data indicate that, in the Russia-Ukraine conflict, misinformation, propaganda, and fake news spread through social media are the top concern, as they are closely interconnected. Most Influential Documents In order to determine the most frequently cited publications in this field, the analysis was limited to publications cited at least 25 times in the Russia-Ukraine, Propaganda, and Social Media literature. A total of 36 of the 456 documents fulfill this criterion. Table 4 lists the 24 most cited publications in the Russia-Ukraine, Propaganda, and Social Media literature. Figure 4 depicts the network map based on the most frequently cited publications. The study found that the publication Tufekci (2017) is the most cited document; however, it is not connected to all the publications, as shown in Figure 4. Similarly, (Quandt 2018), (Bleiker 2018), (Ketchley 2017), (Repnikova 2017), (Roberts 2017), and (Grimme et al. 2017) are also missing from the associated set, despite being among the 24 most cited studies. Most Influential Sources This section discusses the most frequently cited sources in the Russia-Ukraine, Propaganda, and Social Media literature, as well as the number of publications that use them. The analysis used two documents and twenty-five citations as the minimum number of documents and citations for a source, respectively, to sort the data. After filtering, only 19 of the 314 sources met the threshold. Table 5 provides an overview of the essential and impactful sources. In the analysis of Propaganda and Social Media regarding the Russia-Ukraine conflict, certain sources are cited more frequently than others, illustrating key nodes of influence and dissemination (see Figure 5). 
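The keyword analysis summarised above (occurrence counts, the minimum threshold of ten occurrences, and the link strengths between co-occurring terms) was produced with VOSviewer. A minimal sketch of the underlying counting, assuming the keywords have already been parsed from the Scopus export into one list per document, is given below; the function name, the input format, and the toy records are assumptions made for illustration and are not the VOSviewer implementation.

from collections import Counter
from itertools import combinations

def keyword_cooccurrence(records, min_occurrences=10):
    """Count keyword occurrences and pairwise co-occurrence (link) strengths.

    records : list of keyword lists, one per document (e.g. parsed from the
    Scopus 'Author Keywords' export column -- an assumed input format).
    """
    occurrences = Counter()
    links = Counter()
    for keywords in records:
        kws = sorted({k.strip().lower() for k in keywords if k.strip()})
        occurrences.update(kws)
        links.update(combinations(kws, 2))        # each unordered pair counted once per document

    kept = {k for k, n in occurrences.items() if n >= min_occurrences}
    kept_links = {pair: n for pair, n in links.items()
                  if pair[0] in kept and pair[1] in kept}
    return {k: occurrences[k] for k in kept}, kept_links

# toy example (invented records, not the 456 Scopus documents)
docs = [["Social Media", "Disinformation", "Russia"],
        ["Ukraine", "Propaganda", "Social Media"],
        ["Russia", "Social Media", "Fake News"]]
nodes, edges = keyword_cooccurrence(docs, min_occurrences=2)
print(nodes)   # keyword -> number of occurrences
print(edges)   # (keyword, keyword) -> co-occurrence count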
Most Documented Countries on Russia-Ukraine, Propaganda and Social Media Literature It has been determined that the United States is the country most frequently mentioned in the literature on Russia-Ukraine, Propaganda, and Social Media, with 128 documents (see Table 6). The United Kingdom and the Russian Federation are in second and third places with 74 and 33 documents, respectively. Germany, Sweden, and Australia are the other three nations that have produced more than 20 documents. Ukraine, by comparison, has 17 documents. Figure 6 depicts a density representation of the most documented countries in relation to Russia-Ukraine, Propaganda, and Social Media. Bibliographic Document Coupling Bibliographic coupling findings, according to the designers of VOSviewer, demonstrate the overlap of references between publications. The greater the number of common references, the stronger the relationship between two works. In this study, the analysis was filtered by specifying a minimum of 25 document citations, obtaining 36 papers. However, only 77 documents were found to be related to the large sample. Figure 7 depicts the bibliographic coupling of documents in the Russia-Ukraine, Propaganda, and Social Media categories. Discussion The discussion chapter of this bibliometric analysis on Russia-Ukraine Propaganda via Social Media delves into the complexities uncovered by the study, highlighting pronounced global engagement, particularly from the United States. This interest transcends academic circles, reflecting the wider international implications and the strategic significance of information warfare in contemporary geopolitics. The convergence of themes around disinformation and the dual role of social media, as both a facilitator and battleground for propaganda, aligns with existing literature, suggesting an evolving academic focus towards the dynamics of digital influence and statecraft. 
Our analysis identifies a foundational corpus of scholarly works that have significantly shaped discourse on digital propaganda, marking an intellectual trajectory towards understanding the complexities of information manipulation in the digital age.Key texts such as Tufekci (2017), Bennett and Livingston (2018), and Khaldarova and Pantti (2016) have been pivotal in defining the contours of this discourse.Tufekci's work on the power and fragility of networked protest highlights how social media platforms can be both a tool for grassroots mobilization and a conduit for misinformation.Bennett and Livingston discuss the broader implications of the disinformation order on democratic institutions, while Khaldarova and Pantti provide insights into the narrative battles over the Ukrainian conflict. The implications of these findings extend beyond academia, offering pivotal insights for policymakers and social media platforms grappling with the proliferation of misinformation.For instance, the integration of computational propaganda into the analysis underscores the role of digital tools and autonomous agents in shaping public opinion.Bolsover and Howard (2017), Woolley andHoward (2016), andBenkler et al. (2018) discuss how big data and computational propaganda are utilized in political communication, providing a critical lens through which to view the manipulation of information in the digital age.Audience participation in spreading propaganda through reposting and sharing is another critical aspect.Hyzen (2023) and Wanless and Berk (2020) explore how participatory propaganda amplifies the reach of disinformation, emphasizing the role of audiences in the propagation of false information.This highlights the importance of understanding the mechanisms of information dissemination and the active role played by users in the digital ecosystem. Integrating our study within the broader spectrum of related research highlights a shared concern over foreign influence operations and their implications for democratic societies.The active participation of the United States in researching and countering misinformation campaigns underscores a broader, policy-driven engagement with state-led information warfare.This aligns with scholarly and policy efforts to scrutinize and mitigate Russia's strategic use of social media for propaganda purposes within its geopolitical pursuits-an endeavor that has garnered increasing attention and action in Western nations. The thematic focus on disinformation, fake news, and strategic social media use by Russia and Ukraine is parallel to investigations into the broader impacts of digital misinformation on public opinion, electoral integrity, and international diplomacy.These studies underscore the need for enhanced digital literacy, regulatory interventions, and global cooperation to counteract the adverse effects of online propaganda.Furthermore, the call for more empirical research into the efficacy of propaganda and its countermeasures reflects a growing academic dialogue aiming to dissect not only the dissemination and content of misinformation but also its psychological and societal repercussions.This suggests a move towards a multidisciplinary approach, integrating insights from communications, psychology, political science, and information technology to address the complex nature of digital disinformation. 
Conclusions With half of the world's population having access to the internet and social media (Kemp 2022), concerns around misinformation and disinformation on these platforms are escalating. Social media has become a potent medium for disseminating disinformation and propaganda at a low cost. Government officials, individuals, interest groups, and organizations have seized this opportunity to spread misleading information to manipulate public opinion. Russia and Ukraine are notable examples of this trend, engaging in significant state-sponsored propaganda campaigns. This study reveals that, despite the specific focus on Russia and Ukraine, the issue commands global attention due to the considerable geopolitical history and implications these nations hold for the rest of the world. The pressing need for an informed and multifaceted response to this contemporary challenge is evident, underscoring the importance of continued scholarly attention and strategic policy-making to safeguard the integrity of public discourse in the face of evolving technological and geopolitical landscapes. Limitations and Future Research Prospects This review identifies a significant gap in empirical research concerning the role of propaganda on social media in the context of Russia and Ukraine. Through this study, the author has refined the methodology for determining the study's focus, presenting it as an initial exploration into a complex area that warrants further investigation. The author plans to extend the research through a content analysis that will delve into more detailed research areas within this sector. In particular, the forthcoming content analysis will highlight the nature of the content published in these studies and identify potential research gaps. However, the current study is not without its limitations. One of the primary constraints is the reliance on the Scopus database for document collection. Future research could benefit from incorporating data from the Web of Science and other databases to broaden the scope and depth of the analysis. Additionally, the selection of keywords such as "Russia Ukraine", "Social Media", and "Propaganda" was strategic for this study, yet it omitted other pertinent terms like "Misinformation", "Disinformation", "Fake news", "Kremlin Propaganda", and "Ukraine Russia". Future researchers are encouraged to consider these and other relevant keywords to enrich the investigation of propaganda on social media. This expanded approach will facilitate a more comprehensive understanding of the research landscape and contribute to the body of knowledge on social media propaganda.
[PRISMA flow chart exclusion criteria: errata documents; editorial material; papers not relevant or without an abstract.]
Figure 3. Co-occurrence network diagram of all keywords. Source: Author.
Figure 4. Network map of the highest cited documents on Russia-Ukraine, Propaganda, and Social Media. Source: Author.
Figure 5. Network map of the most frequently cited sources on Russia-Ukraine, Propaganda, and Social Media. Source: Author.
Figure 6. Density visualization of mostly documented countries. Source: Author.
Figure 7. Network visualization of bibliographic coupling of documents. Source: Author.
Table 1. Description and distribution of primary information.
Table 2. Information about document type and numbers.
Table 3. Frequency and link strength of keywords.
Table 4. Most frequently cited documents on Russia-Ukraine, Propaganda, and Social Media. Source: Author.
Table 5. Most cited sources and documents. Source: Author.
Table 6. Countries with the corresponding number of documents.
2024-07-21T15:09:13.008Z
2024-07-17T00:00:00.000
{ "year": 2024, "sha1": "ee9a34f4cc72d9aff9d04b4beef36391c59d7d49", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/journalmedia5030062", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "66c2e5bf63ab8a6fd1ddd3f49757733157adfa33", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
54013283
pes2o/s2orc
v3-fos-license
Hydroelastic slamming response in the evolution of a flip-through event during shallow-liquid sloshing The evolution of a flip-through event [6] upon a vertical, deformable wall during shallow-water sloshing in a 2D tank is analyzed, with specific focus on the role of hydroelasticity. An aluminium plate, whose dimensions are Froude-scaled in order to reproduce the first wet natural frequency associated with the typical structural panel of a Mark III containment system, is used. (Mark III Containment System is a membrane-type tank used in the Liquefied Natural Gas (LNG) carrier to contain the LNG. A typical structural panel is composed of two metallic membranes and two independent thermal insulation layers. The first membrane contains the LNG, the second one ensures redundancy in case of leakage.) Such a system is clamped to a fully rigid vertical wall of the tank at the vertical ends while being kept free on its lateral sides. Hence, in a 2D flow approximation the system can be suitably modelled as a double-clamped Euler beam, with the Euler beam theory. The hydroelastic effects are assessed by cross-analyzing the experimental data based both on the images recorded by a fast camera, and on the strain measurements along the deformable panel and on the pressure measurements on the rigid wall below the elastic plate. The same experiments are also carried out by substituting the deformable plate with a fully stiff panel. The pressure transducers are mounted at the same positions of the strain gauges used for the deformable plate. The comparison between the results of the rigid and elastic cases allows a better definition of the role of hydroelasticity. The analysis has identified three different regimes characterizing the hydroelastic evolution: a quasi-static deformation of the beam (regime I) precedes a strongly hydroelastic behavior (regime II), for which the added mass effects are relevant; finally, the free-vibration phase (regime III) occurs. A hybrid method, combining numerical modelling and experimental data from the tests with the fully rigid plate, is proposed to examine the hydroelastic effects. Within this approach, the measurements provide the experimental loads acting on the rigid plate, while the numerical solution enables a more detailed analysis, by giving additional information not available from the experimental tests. More specifically, an Euler beam equation is used to model numerically the plate with the added-mass contribution estimated in time. In this way the resulting hybrid method accounts for 
the variation of the added mass associated with the instantaneous wetted length of the beam, estimated from the experimental images.Moreover, the forcing hydrodynamic load is prescribed by using the experimental pressure distribution measured in the rigid case.The experimental data for the elastic beam are compared with the numerical results of the hybrid model and with those of the standard methods used at the design stage.The comparison against the experimental data shows an overall satisfactory prediction of the hybrid model.The maximum peak pressure predicted by the standard methods agrees with the result of the hybrid model only when the added mass effect is considered.However, the standard methods are not able to properly estimate the temporal evolution of the plate deformation.C 2014 Author(s).All article content, I. INTRODUCTION The wave impact phenomenon that may occur during the evolution of sloshing flows in a tank is an important issue for the safety of Liquefied Natural Gas (LNG) carriers.The violent free-surface motions in a sloshing tank generally occur when the wave-induced horizontal ship velocities, in roll and/or pitch, contain sufficient energy in the frequency band close to the lowest sloshing frequency of the tank.Then, slamming events may occur, which originate impulsively with accompanying large local loads that may compromise the integrity of the tank structure. The technological solutions consolidated for oil tankers to damp the sloshing phenomena are unsuitable for the membrane-type prismatic LNG tanks: the need of maintaining low temperature inside the tank, in order to keep the gas at the liquid state, implies that the side walls, designed to provide a good thermal insulation and adequate mechanical properties, are not capable of supporting the damping devices (e.g., vertical baffles). The full understanding of the physical phenomena and the accurate evaluation of the local loads in sloshing-induced slamming events is a challenge for research.Typically, LNG carriers can operate both in fully loaded and ballasted conditions: in both cases, sloshing phenomena matter.The filling height and the geometry of the tank influence the sloshing scenarios and the induced global and local loads on the walls of the tank.The sloshing motion of the liquid, in partially filled tanks with finite liquid depth, generates a standing wave that may cause the liquid impact against the roof of the tank; eventually with the formation of a large gas cavity entrapped. 2Conversely, in ballasted conditions, when low filling depth of the tank, i.e., shallow liquid, conditions exist, 14,15 the occurrence of travelling bore waves propagating with high velocity back and forth into the tank may cause large slamming loads.Depending on the impact angle, several and complex scenarios can occur.For example, when the impact angle between liquid and wall is small, gas entrapment may happen leading to gas compression and its interaction with the free surface. 4,5 n contrast, for an incipient breaking wave approaching a vertical wall, flip-through events 6 or flat impacts may occur causing localized and large loads without any gas-entrapment. 
In all these cases, when the typical temporal duration of the local load is comparable with a natural period of the structural mode contributing to large structural stresses, hydroelasticity matters7 and affects the integrity of the structure. As a consequence, the assessment of the structural strength of an LNG membrane tank exposed to the dynamic and impulsive sloshing loads requires the prediction of the hydroelastic response of the structure. However, because of the difficulty in solving the hydroelastic problem and in scaling the structural properties of the complex and composite material constituting an LNG tank, both the fully coupled hydroelastic calculations and the hydroelastic experiments at model scale are still unsolved challenges.

Simplified methods are used during the design stage. The classification rules10,11 suggest two calculation methods to assess the dynamic structural response to sloshing loads: (i) the Direct Dynamic Finite Element Analysis (FEA) uses the pressure loads measured during experiments carried out with a rigid model (properly scaled to prototype scale) as input of a dynamic FEA of the full-scale structure; (ii) the Indirect Dynamic FEA uses the results from a static FEA multiplied by a correction factor obtained through the Dynamic Amplification Factor (DAF) curve. The DAF is the ratio between the maximum dynamic response and the maximum static response for a considered sloshing pressure rise time.

The present investigation pursues the experimental study of the kinematic and dynamical features of a flip-through event occurring on a vertical wall of a 2D sloshing tank in shallow-water conditions. The previous paper by Lugni et al.6 has emphasized how the maximum pressure at a fixed point of the impact area is a poor indicator of the maximum load, because of the extremely local behavior of the impact phenomena. In the present paper, the strain distribution along a vertical deformable aluminium plate inserted in the rigid vertical wall of a sloshing tank has been measured in model tests to characterize the dynamical features of the local loads. Since the plate is clamped at the vertical ends, kept free at the lateral ones, and the impacting flow is almost 2D, the plate is modeled as a beam. The size, the thickness, and the structural properties of the deformable beam have been fixed in order to reproduce the lowest structural natural wet frequency of a prototype panel typically used in a Mark III containment system.7

Geometric scaling is respected using

λ = L_P / L_m,    (1)

where L_P and L_m indicate the length of the tank in prototype (P) and model (m) scale, respectively. Since the considered response (pressure, strain) is a function of the frequency of oscillation of the structure σ, of the length L of the tank, and of the gravity acceleration g, it follows from the Pi theorem that the non-dimensional response is a function of

σ √(L/g).    (2)

Since gravity is involved, this is in a general sense called Froude scaling. Although the complete scaling of the structural properties of a Mark III structural panel is far from the objective of the present work, a comprehensive hydroelastic model-scale experiment is carried out, which reproduces the lowest natural wet frequency of a prototype panel. In Sec. II, a theoretical model is described to study the hydroelastic problem. Such a model is preliminarily used to define the scaling of the experimental model, whose set-up, together with the dynamic response of the deformable plate, is detailed in Sec. III. Section IV contributes to understand the role of hydroelasticity during the evolution of the flip-through phenomenon. Finally, in Sec. V the results of the proposed hydroelastic model are compared against the experiments and the results of two simplified models typically used at the design stage to assess the role of the hydroelastic effects.

II. THEORETICAL MODEL AND DYNAMIC SCALING

Because of the complex physical phenomena connected to the sloshing flow and the subsequent hydroelastic slamming, here we propose a simplified theoretical model to estimate when hydroelastic effects matter. A hybrid numerical-experimental method, which uses both the information coming from a simplified numerical hydroelastic model and the data from experiments carried out using a fully rigid tank, is proposed to solve it. The hybrid model recovers, at the global level, the contribution coming from the time variation of the added mass.

In general, hydroelasticity may involve a strong or a weak coupling between the loading and the response. In the former case, the response influences the wetted area and the free-surface deformation, causing a time-varying added mass effect and a nonlinear variation of the kinematic and dynamic field. When a weak coupling is assumed, a quasi-static approach can be used and the hydrodynamic load on a fully rigid structure forces the structural response.

A. Definition of the problem

A 2D square tank with length L and height H (L = H), partially filled with water up to a height h, has been considered. An elastic beam of length l is placed at a vertical distance a from the tank bottom y = 0 (see Figure 1). It reproduces the structural behavior of a single Mark III structural panel fixed between the stiffeners of an LNG tank, hence double-clamped conditions hold. The geometric and structural properties of the beam are chosen to obey Froude scaling of the lowest natural frequency of the structural panel at full scale, that is, based on Eqs. (1) and (2):

σ_m = σ_P √(L_P / L_m) = σ_P √λ.    (3)

Here σ_P and σ_m indicate the lowest natural frequency of the structural panel at prototype and model scale, respectively. The Euler beam theory describes the deformation w(t, y) along the beam, that is, for a ≤ y ≤ a + l the deformed wall is located at x = L + w(t, y) and

M_B ∂²w/∂t² + μ ∂w/∂t + EI ∂⁴w/∂y⁴ = p(t, y, w),    (4)

with clamped boundary conditions at the beam ends and initial conditions on w and ∂w/∂t.
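For orientation, the two relations above can be checked numerically. The sketch below is illustrative only, not part of the paper's procedure: the Froude scaling of Eq. (3) is applied to an assumed prototype frequency and geometric scale, and the dry natural frequencies of a clamped-clamped Euler beam are evaluated from the standard eigenvalue roots; the material constants and dimensions are assumptions of the same order as those discussed later in Sec. II E.

```python
import numpy as np

# Froude scaling of a structural natural frequency (Eq. (3)):
# sigma_m = sigma_P * sqrt(L_P / L_m), with lambda = L_P / L_m the geometric scale.
def froude_scaled_frequency(sigma_prototype_hz: float, scale: float) -> float:
    return sigma_prototype_hz * np.sqrt(scale)

# Lowest dry natural frequencies of a clamped-clamped Euler beam (per unit breadth):
# f_n = (beta_n * l)^2 / (2*pi*l^2) * sqrt(E*I / m), with I = t^3/12 and m = rho*t.
def clamped_clamped_frequencies(E, rho, thickness, length, n_modes=2):
    beta_l = np.array([4.7300, 7.8532, 10.9956])[:n_modes]  # roots of cos(x)cosh(x) = 1
    I = thickness**3 / 12.0          # area moment of inertia per unit breadth
    m = rho * thickness              # mass per unit length per unit breadth
    return beta_l**2 / (2.0 * np.pi * length**2) * np.sqrt(E * I / m)

if __name__ == "__main__":
    # Assumed, illustrative values: 110 Hz lowest wet frequency at prototype scale,
    # geometric scale factor 30, aluminium plate 2.5 mm thick and 0.09 m long.
    print("target model-scale wet frequency [Hz]:",
          froude_scaled_frequency(110.0, 30.0))            # ~ 600 Hz
    print("dry natural frequencies [Hz]:",
          clamped_clamped_frequencies(E=70e9, rho=2700.0,
                                      thickness=2.5e-3, length=0.09))
```

With these assumed constants the first dry frequency comes out around 1.6 kHz, i.e., of the same order as the lowest dry natural frequency quoted for the model beam in Sec. II D.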
Here, M_B indicates the mass per unit length and breadth of the beam, μ the structural damping, I the inertial moment (i.e., the area moment of inertia of the beam cross section divided by the breadth of the beam), and E the Young's modulus of the material. On the right-hand side, p(t, y, w) is the local hydrodynamic pressure load, which includes the mutual interactions between the structural deformations (and stresses) and the hydrodynamic flow. In order to know the pressure field p(t, y, w), the solution of the hydrodynamic field for the sloshing problem is required. Because of the complex local phenomena involved in sloshing flows (e.g., breaking waves, two-phase flows, wave impacts), their numerical prediction requires, in general, the solution of the Navier-Stokes equations with nonlinear boundary conditions on the instantaneous air-water interface and on the wetted surfaces of the tank. This implies that along the beam the following kinematic boundary condition holds: the normal component of the fluid velocity matches that of the deforming wall, u · n = (∂w/∂t) n_x, where u is the local fluid velocity (partly in the liquid and partly in the air when air is entrapped, while the open air is assumed to be at rest), n is the local normal to the beam, and n_x its horizontal component. The latter boundary condition, applied to the instantaneous deformable wetted beam, and the forcing pressure in the Euler equation, make the hydroelastic problem strongly coupled.

Colagrossi et al.3 demonstrated that the simulation of the local flows characterizing the impact events in sloshing phenomena is complex, even when the tank is taken as fully rigid, which makes the use of two-phase Navier-Stokes solvers unavoidable. On the other hand, the numerical simulation of a flip-through event can be done on the basis of potential flow assumptions, as assessed by Professor Peregrine.12,13 In the present experimental investigation the analyzed flip-through event occurs during the third cycle of oscillation of the tank, after two previous oscillations during which some wave impact events occurred with air entrapped and vorticity was generated in the water, as induced by the run-down of the jet falling along the wall. These phenomena, in principle, preclude the use of potential flow theory to reproduce the hydrodynamic field that leads to the formation of the flip-through event of interest. However, since the main aim of the present paper is the physical discussion and assessment of the hydroelastic effects, we do not perform any numerical simulation of the hydrodynamic field and, rather, in the following we propose a simplified approach which takes the hydroelastic actions into account.

B. A simplified hydroelastic approach: The hybrid model

The proposed hybrid model is based on the assumption of a weak interaction between excitation and response, so that the forcing term in Eq. (4) can be decomposed as the following linear superposition:

p(t, y, w) = p_r(t, y) + p_v(t, y, w).    (5)

The first contribution p_r(t, y) is the pressure field induced by the flip-through event on the fully rigid wall, i.e., u = 0 at x = L. It is a fully nonlinear load which depends on the nonlinear kinematics of the flip-through event and needs to be modeled as such. Because of the difficulty in reproducing it numerically, at least in the present case, we have chosen to model it using the experimental value of the pressure measured during the experiments on a fully rigid tank. During these experiments we reproduce exactly the same filling condition and tank motion used in the case of the deformable panel, hence the features of the wave interacting with the wall are the same.
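The way such a weakly coupled model can be advanced in time is sketched below in a minimal form: a modal system of the type developed in the remainder of Sec. II (Eq. (7)), with a time-varying added mass, a rigid-wall forcing term, and a fourth-order Runge-Kutta step. This is only an illustration of the structure of the problem; the function names are placeholders, and the routines supplying the added mass, damping, wetted length, and generalized load stand in for quantities that, in the paper, come from the vibrational BVP and from the experiments.

```python
import numpy as np

def modal_rhs(t, state, M, K, C_of_h, A_of_h, F_of_t, h_of_t):
    """Right-hand side of ([M] + [A(h)]) q'' + [C(h)] q' + [K] q = F(t),
    written in first-order form for the modal amplitudes q and velocities q'."""
    n = M.shape[0]
    q, qdot = state[:n], state[n:]
    h = h_of_t(t)                 # instantaneous wetted length (measured from images)
    A = A_of_h(h)                 # added-mass matrix from the vibrational BVP
    C = C_of_h(h)                 # wetted-length-dependent structural damping
    F = F_of_t(t)                 # generalized load built from the rigid-wall pressure
    qddot = np.linalg.solve(M + A, F - C @ qdot - K @ q)
    return np.concatenate([qdot, qddot])

def rk4_step(f, t, y, dt, *args):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y, *args)
    k2 = f(t + dt / 2, y + dt / 2 * k1, *args)
    k3 = f(t + dt / 2, y + dt / 2 * k2, *args)
    k4 = f(t + dt, y + dt * k3, *args)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

A full implementation would rebuild the added-mass matrix at each step by solving the vibrational boundary value problem for every mode at the measured wetted length, exactly as described in Secs. II B and II C below.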
The second contribution, p_v(t, y, w), is the vibrational pressure, which solves the hydroelastic problem of the vibrating beam around a rest state. Using the potential flow assumption for an incompressible fluid with density ρ, the pressure forcing term p_v = −ρ ∂φ_v/∂t is given by the linearized Bernoulli equation. The vibrational potential function is instantaneously determined as the solution of the Boundary Value Problem (BVP)

∇²φ_v = 0 in the water field,
∂φ_v/∂n = 0 on the rigid walls of the tank and on the bottom,
∂φ_v/∂n = (∂w/∂t) n_x on the wetted portion of the beam,
φ_v = 0 on the free surface y = h(t).    (6)

In this case we assume the vibrational pressure to be independent of the local shape of the free surface and of the local kinematics of the flip-through (which is already taken into account in the term p_r). However, p_v accounts for the instantaneous wetted length h(t) of the vertical beam, influenced by the evolution of the flip-through. Because of the large value of the lowest wetted natural vibration frequency of the beam (with respect to the typical frequency range in which gravity affects the free-surface behavior), a high-frequency approximation is assumed for the combined free-surface boundary condition, i.e., φ_v = 0 on the free surface. Although this is a strong approximation, we solve the problem through a simple approach and then verify the validity of the assumption through the comparison with the experiments. The above BVP for the Laplace equation is solved numerically as detailed in the following. Like p_r, also the wetted length h(t) of the beam depends on the evolution of the flip-through, hence it cannot be predicted numerically and it has been measured from the experimental images.

The beam deflection w(t, y) is expressed as the eigenfunction expansion of a finite number N of dry normal modes ψ_k, k = 1...N, satisfying the stationary homogeneous problem obtained from Eq. (4) with clamped conditions at the beam ends. Following Faltinsen and Timokha,7 by defining q̇_k φ_k as the velocity potential associated with the vibrations of mode k, and assuming a proportional model for the structural damping, we get

([M] + [A(t)]) q̈ + [C] q̇ + [K] q = F(t),    (7)

where [M], [A(t)], and [K] are, respectively, the mass, the hydrodynamic added mass, and the stiffness matrices, [C] is the structural damping matrix, and F(t) is the generalized load vector associated with p_r. The added mass matrix requires the solution of the BVP (6) for the vibrational potential function. By introducing the eigenfunction expansion for the beam deformation, the BVP (6) becomes a set of N boundary value problems (8), one for each mode k, with the body boundary condition on the wetted portion of the beam given by the corresponding mode shape ψ_k; the vibrational potential vector φ_v follows as the solution of these N Boundary Value Problems, each one corresponding to a prescribed vibrational mode of the beam.

Due to the orthogonality of the eigenfunction vector ψ, the mass and stiffness matrices are diagonal ([M] = [I], the identity matrix), while the added mass matrix is full. Structural damping [C] is determined from the impulsive dry tests (hammer tests), as specified in Sec. III.

C. Numerical solution of the hybrid model

The hydroelastic problem (7) is integrated in time using a fourth-order Runge-Kutta method to determine the mode amplitude vector q(t). At each time step, the forcing pressure p_r(t, y) is prescribed by using the experimental pressure distribution measured through seven pressure transducers distributed along a rigid wall located like the beam in the fully rigid tank experiments (see Sec. III). In contrast, φ_v comes from the numerical solution of the BVPs (8). For the latter, Faltinsen and Timokha7 proposed an analytical solution assuming a Fourier expansion for ψ and φ_v. However, its validity is limited to the fully wet beam case, i.e., h(t) ≥ a + l in the third equation of (8). Since the dynamics of the flip-through phenomenon imposes a rapid change of the beam conditions from completely dry to fully wet (i.e., a ≤ h(t) ≤ a + l in the third equation of (8)), a numerical solution is used here to solve the vibrational problem associated with each mode. To this purpose the solution for the vibrational potential vector is assumed in the form of a series expansion (9), with unknown coefficient vector A_n, which satisfies, in the BVP (8), the Laplace equation and the boundary conditions on the free surface, the bottom, and the wall opposite to the impact. This corresponds to using the Fourier Transform method, hence a linear system is solved for the unknown coefficients A_n, which enforces the fulfillment of the boundary condition on the tank side with the deformable beam. At each time t, h(t) is measured from the corresponding experimental image.

The present solution of the potential vector has been validated against the results of a Higher Order Boundary Element Method (HOBEM) used to solve the BVP (8). In this case the BVP has been rewritten within an integral formulation using Green's second identity; the integral equation is discretized by means of quadratic elements on the boundaries of the computational field.8,9 The results of the comparison, limited to the modal analysis, are presented hereinafter.

D. Modal analysis: Structural natural frequencies

The equations of motion (7) with p_r = 0 are solved with the aim of determining the natural frequency of each mode of the beam. Since the added mass matrix is time dependent, a more and more complex solution of the homogeneous problem is achieved depending on the shape function assumed for [A(t)]. In the following, just to give a rough estimation of the wet natural frequency (under the hypothesis of a quasi-static variation of the wetted length), the added mass is assumed to be constant on the time scale of the natural frequency. From the physical point of view, this is equivalent to solving the BVP (8) with the free surface flat and constant in time, i.e., frozen. Assuming the solution to be harmonic, that is q(t) = Q e^{iσ_w t}, the homogeneous problem related to Eq. (7) gives the eigenvalue problem

([K] − σ_w² ([M] + [A])) Q = 0.    (10)

The natural frequency vector σ_w is evaluated from the characteristic equation

det([K] − σ_w² ([M] + [A])) = 0.    (11)

When [A] = 0, the dry natural frequency vector is simply σ_d = √e, with e the eigenvalue vector associated with the problem. Figure 2 shows the variation of the wet natural frequencies associated with the first and second (i.e., i = 1, 2 in the figure) modes of the beam as a function of the instantaneous dimensionless filling height of the tank. The lowest dry natural frequency of the beam σ_d(i = 1) = σ_d,1 = 1.575 kHz is used to make the data dimensionless. The symbols represent the solution obtained by using the HOBEM method to solve the BVP (8) for the added mass calculation, while the lines represent the solution obtained by using the shape function (9) for the vibrational potential function. Their good agreement confirms the reliability of the latter method, which is preferred in the following (because of its higher efficiency). In the solution of the complete hybrid problem (i.e., with p_r ≠ 0), although the free surface is still considered flat at each time step, no hypothesis about the shape of the added mass function is made, since it is calculated from the solution of the BVP (8) by enforcing the instantaneous wetted length measured from the experiments. This means that h(t) changes in time.

E. Scaling of the physical problem

The modal analysis is first used for the scaling of the experiment, in order to properly design the hydroelastic model which reproduces the behavior of the prototype. Sloshing model tests with hydroelastic impact require the simultaneous satisfaction of the Froude scaling together with a proper scaling of the elastic properties of the structure.7 Because of the complex structure characterizing a typical panel of a Mark III containment system, its complete structural scaling is an unresolved challenge, which is far from the aims of the present study. According to Faltinsen and Timokha,7 we must ensure the Froude scaling of the relevant natural frequencies of the elastic vibrations of the structural panel. Further, Faltinsen and Timokha7 showed that several natural modes play an important role in describing the maximum structural stresses in the Mark III panel and that their frequencies vary between 100 Hz and 500 Hz. However, since the lower modes are associated with the steel plate of the Mark III panel, only the lowest is properly Froude scaled in the current investigation. In particular, for an LNG tank, whose typical length is about 30-40 m, the maximum length of a structural panel is about 3 m and the lowest wet natural frequency is around 110 Hz (in fully wet condition).7 Because the sloshing tank used in the actual experiments is 1 m long (the same tank used in Refs. 4-6), a geometrical scale factor λ = 30 is assumed. This choice forced the length of the model beam to be equal to 0.09 m and the value of the lowest wet natural frequency (Froude scaled according to Eq. (3) and in fully wet condition) to be about 610 Hz. Using Eq.
( 11), such a frequency corresponds to the wet natural frequency of an aluminium beam with thickness 2.5 mm.Since previous investigations in rigid sloshing tanks [4][5][6] showed that the flip-through phenomenon occurs at a height of h/L = 0.17 − 0.18 from the bottom of the tank, an aluminium plate is placed with the lower end 0.13L above the tank bottom.Figure 2 shows that both the first (i = 1) and the second (i = 2) natural frequencies tend to decrease by increasing the filling depth; this is a consequence of the increasing added mass.A similar behavior is then expected also during the evolution of the flip-through as a consequence of the changing wetted length of the beam.However, they remain quite far from each other; this suggests that they remain uncoupled and justifies the scaling of the lowest mode of vibration only. III. EXPERIMENTAL SET-UP A 2D plexiglas tank (L ×H×B = 1 m × 1 m × 0.1 m) reinforced with steel and aluminum structure (see Fig. 3) has been used.It is almost the same tank used in the previous experiments with rigid tank. 4,5 he difference is the presence of an aluminium plate on the lateral left wall (highlighted by the white arrow in the red oval in Figure 3). The lateral left wall has been completely rebuilt in stainless steel (see right panel of Figure 4) and milled (see enlarged view on the bottom-left panel of Figure 4) to hold the deformable aluminium plate (see enlarged view on the top left panel of Fig. 4).A suitable clamping system has been designed (visible on the left panels of Figure 4) to ensure clamped conditions at the vertical ends of the plate.Conversely, its lateral boundaries have been left free and sealed with silicone. The plate is 110 mm high.However, two bulges (each one 10 mm high) have been built at both vertical ends (see enlarged view on the top left panel of Fig. 4) to realize the clamping system.Then, the deformable part of the plate is extended vertically for 90 mm according to the geometric scaling specified in Sec.II E. Since the sloshing flows reproduced in the model tank and here studied are almost 2D, the deformable part of the plate behaves like a beam; then, its bending deformation is measured by means of 5 half-bridge strain gauges HBM XY11 − 3/350 placed along the centreline at 12, 28, 45, 62, 78 mm from the lowest end of the beam, i.e., at 142, 158, 175, 192, 208 mm from the bottom of the tank. A. 
Static calibration of the strain gauges The strain gauges have been calibrated by imposing uniform load along the clamped beam.In particular, once the plate has been mounted and clamped on the stainless steel wall, static tests have been performed by lowering the pressure inside the tank in order to reproduce uniform pressure distribution on the plate.The calibration factor of each gauge has been computed by comparing the measured strain with the theoretical solution given by the beam theory and with the numerical data obtained by using a finite element method (FEM) 19 on the 2D plate.Several pressure conditions inside the tank have been realized, in order to perform a linearization of the calibration factor.A linearization error lower than 2% has been measured for each strain gauge.Because of the halfbridge configuration used for the strain gauges, their output is proportional to the difference ( a − t ) between the axial and transversal strains.However, transversal strains can be assumed to be negligible during the sloshing tests performed in the present research investigation; nearly 2D flow conditions were realized during the whole experimental activity. B. Dynamic calibration of the strain gauges Impulsive tests with a calibrated hammer have been also performed to check the dynamic behavior of the strain gauges.The hammer test consists in hitting the structure, hence giving an impulsive load which excites a wide frequency spectrum.Because the hammer is calibrated, the time history of the impulsive load can be recorded, as well as the response of the structure through the strain gauges.This allows both for a measurement of the dynamical response of the structure and for the dynamical response of the strain gauges.For the latter, a FEM method solving the structural problem on the same structure with the same input load is necessary.The hammer blow is given as close as possible to the centre of the plate.Due to the impulsive and intrinsically 3D load distribution (the tip of the hammer is small), the beam theory is no longer valid; the plate theory must be applied and transversal strains affect the measurement of the strain gauges.Figure 5 shows the comparison of the measured strains (symbol) along the plate with the numerical results of the FEM model a applied on the 2D plate (continuous line) at the time of the maximum strain and for two different hammer tests.The maximum value of the impulsive load is reported at the top of each panel in Fig. 5.The small difference is justified by the contributions of the transversal strains t , numerically evaluated and represented by the short-dashed (red) line in Fig. 5.This is due to the strongly 3D load applied.Since the global flow during the flip-through phenomenon studied here is 2D, this difference does not affect the experimental results of the present investigation.The value ( a − t ), calculated and indicated by the green line in Fig. 5, shows a good agreement with the corresponding experimental measurements, hence validating the adopted calibration procedure. Use of the strain gauge to measure such an intense dynamics of the strain time evolution might be questionable.To further check the dynamic response of a single strain gauge, an accelerometer has been mounted as close as possible to it at the same vertical position.An accelerometer is a transducer with a reliable response at high frequencies.Then, a hammer test has been performed and the time histories of the signals (acceleration and strain) have been compared in Fig. 
6 (top panel) together with the corresponding logarithmic value of the amplitude (bottom panel) spectrum.They confirm the reliability of the strain gauge measurement, at least until 2.0 kHz.In the low frequency range (lower than 300 Hz) the strain gauge is a transducer with a reliable dynamic response; the disagreement with respect to the accelerometer is mainly due to the low energy content of the spectrum in that frequency range.Because of the limitation in the dynamic response of the strain gauges, hereinafter our analysis is mainly focused on the highest natural vibration period of the beam.As a consequence, each observation about the effects of the higher modes (whose frequencies are larger than 2.0 kHz, see Fig. 2) should be regarded purely qualitative. Since the aim of the present work is the assessment of the hydroelastic effect during the wave impact in a LNG tank by means of the hybrid model proposed in Sec.II, the same slamming events have to be reproduced both in the case of the full rigid wall and for the wall with deformable plate.To this purpose, a second setup corresponding to the fully rigid case (no hydroelastic case) has been built to measure the pressure distribution along the rigid wall.The aluminium thin elastic plate has been replaced with a rigid 20 mm thick aluminium plate.Five differential pressure transducers Kulite (with full range equal to 38 kPa) have been mounted along it at the same position as the strain gauges in the elastic case.An accelerometer on the vertical stainless steel wall and a wire potentiometer are used to check the global horizontal motion of the tank.Two additional differential pressure probes have been installed on the stainless steel wall, below the removable plate (rigid or elastic).A filling depth h/L = 0.122 has been considered.The time evolution of all the transducer signals has been recorded at a sampling rate of 50 kHz.A high-speed camera with a rate of 5000 fps and a resolution of 1024 ×1024 pixels provided the visualization of the local flow during the evolution of the impact event while the global view of the sloshing flow in the tank has been recorded through two slow digital cameras (with a frame rate of 100 fps).The spatial resolution of the high-speed camera gives a calibration factor of 7.8 pixels/mm, which ensures high accuracy in the measurement of the instantaneous height of the wave at the wall.A common reference signal was used to synchronize the flow images and the analog signals of the transducers.An absolute pressure transducer measured the ullage pressure inside the tank. The tank is forced to move along its longitudinal axis with the sinusoidal motion, x = Acos(2π t/T (t)) through the system "MISTRAL," a dynamic hexapod for the motion of the tank following the 6DOF. A is the amplitude of the motion, while T(t) is the period which varies with a ramp function between an initial value and the final value T 0 = 1.6 s.The high accuracy of the system ensured a good repeatability of the forced motion.To reproduce a flip-through phenomenon (FT), an amplitude A/L = 0.03 is enforced with the following ramp function: where T 1 = 4 s and T a = 0.05 s.The considered flip-through event occurs at the third cycle of oscillation, after one first impact event, with air trapping which occurs on the opposite wall. C. Dry and wet lowest vibration natural frequencies The hammer test has been used to check the lowest vibration natural frequency of the beam with respect to the theoretical value.According to Eq. 
(11), the lowest natural frequency varies with the wetted length of the beam. The hammer-test results are reported in Table I and Figure 7; in the latter, the frequencies associated with the first two bending and torsional modes of the plate are also highlighted. Because of the constraints of the plate, the first bending mode prevails. This is confirmed also by the test carried out in wet condition, where the first bending mode is dominant. Inspection of Table I reveals that the measured frequency associated with the first bending mode is quite close to that predicted by the beam theory. The disagreement can be attributed to the additional mass induced by either the wire or the strain gauges (see Fig. 4). This quantity, estimated at a few grams (5-7 g), is compatible with the difference between the first bending frequency measured and that predicted through the beam theory. Then, in our hybrid model we consider the mass of the beam increased by 7 g with respect to the nominal mass value, obtaining a predicted value of the first bending frequency equal to 1.499 kHz.

To further stress the good approximation given by the beam theory, Figure 8 shows the comparison between the calculated wet natural frequency (blue line) associated with the first bending mode of the beam and the corresponding value measured through the hammer tests (green symbols) for several filling depths of the tank.

D. Experimental analysis of the structural damping

From a theoretical point of view, two contributions may influence the response damping: the hydrodynamic damping due to the boundary layer flow and the structural damping. The former is taken to be negligible in sloshing flows. However, by studying the oscillation of an air pocket entrapped by a standing wave at the roof of a sloshing tank, Abrahamsen1 found that the boundary layer damping in the water domain influences the decay of the pressure signal when the frequency associated with the bubble oscillation is much larger than the main natural frequency of the global sloshing flow. In our study, in spite of the high oscillation frequency of the elastic plate, the structural damping governs the decay of the measured strain. It means that the hydrodynamic contribution does not matter. Moreover, the flow field associated with the local problem is completely different in the present case from that considered in Ref. 1.

Impulsive tests have been used to calculate the structural damping. The hammer test, indeed, reproduces a free-vibration test, that is mathematically represented through the homogeneous equation associated with Eq.
(7).For this the solution holds, under the hypothesis of small dimensionless damping ξ and neglecting the vibration modes higher than the first one (whose natural pulsation is indicated by ω n ).Note that ξ = C 11 , with C 11 the first element of the damping matrix [C].This solution, properly multiplied by the eigenfunction ψ 1 (y), is used to calculate the time history of the beam deformation and then of the strain at the centre of the beam.The best fitting with the measurement of the strain gauge #3 during the hammer test, allows achieving the damping value.Structural damping is frequency dependent because of the varying natural frequency of the structure induced by the changing wet length of the beam.This implies that structural damping has been evaluated by performing free-vibration tests (i.e., hammer tests) with several filling depths of the tank, in order to realize several conditions of the elastic plate, from dry to fully wetted.A suitable constant damping value has been identified from each hammer test performed with a prescribed filling depth.Figure 9 shows the dimensionless damping coefficients relative to the lowest structural mode estimated experimentally as function of the filling depth.The dashed line illustrates the structural damping in dry conditions, while the symbol at h/L = 0.13 refers to the completely dry beam condition.The reported interpolation function (solid line in Figure 9) is used in the hybrid model to determine the dimensionless damping coefficient as a function of the beam wetted length. A. Kinematic and dynamic flow fields When a steep wave approaches a vertical wall, a flip-through event may occur causing large local loads. 12The kinematic and dynamic evolution of a flip-through event along a rigid wall of a 2D sloshing tank is detailed in Ref. 6. Three different stages are recognized: (i) wave advancement, characterized by the wave front, moving towards the wall, which forces the wave trough to quickly rise up; (ii) focusing stage, where the wave crest and trough approach to one another causing their focusing and then the occurrence of the (iii) flip-through.The latter stage causes a sudden turning of the flow close to the focusing area which forces the formation of an energetic vertical jet.This is associated with a rapid change of the contact angle between the free surface and the tank wall. Here we are considering a pure flip-through, i.e., with no air entrapped, but more generally in our previous studies [4][5][6] we demonstrated that the flip-through always occurs when a steep wave hits a wall, even when air is entrapped and it is associated with the formation of a jet flow escaping from an open cavity. 
Rigid wall case Figure 10 (Multimedia view) shows the evolution of the pure flip-through event generated along the rigid wall during the present experimental activity: each image shows the flow configuration for all the stages that characterize the wave impact.On each panel the vertical pressure distribution along the wall (red line) is also reported as reconstructed from the interpolation of the pressure signals recorded by five gauges P 3 , P 4 , P 5 , P 6 , and P 7 located on the rigid wall at the positions highlighted by the green diamonds.Two other gauges, P 1 and P 2 , were placed at y = 35 and 50 mm from the bottom of the tank, respectively, but they are not shown in the figure .The evolution of the local loads at the wall reflects the kinematic behavior of the flow field: the hydrodynamic term induced by the slow increase of the vertical velocity at the wall (see left panel of Fig. 11 in Ref. 6) added to the quasi-static term (i.e., to the term ρg(h(t) − y i ) where h(t) is the instantaneous height of the wave trough at the wall and y i the vertical position of the i th pressure sensor) characterizes the local load distribution during the wave advancement stage.Because of the small vertical velocity of the wave trough, at the beginning of stage (i), the quasi-static hydrostatic pressure prevails, generating a spatial pressure distribution that decreases with the distance from the bottom of the tank down to zero at the free surface (see top-left panel in Figure 10 (Multimedia view)).This behavior is further confirmed by the time history of the pressure probes located along the wall and reported in Figure 11.In each panel, the dashed curve and the error bars represent the mean value of the pressure (made dimensionless with the hydrostatic pressure ρgh) and the corresponding standard deviation of five statistically equivalent repetitions of the same run, respectively.Moreover, the black line represents the result of a single run, i.e., that associated with the images of Figure 10 (Multimedia view).The time instants corresponding to the frames shown in Figure 10 (Multimedia view) are also indicated with the vertical dotted-dashed lines and highlighted through the labels A, B, C, D, and E, respectively.Each pressure signal refers to the atmospheric pressure, i.e., to a completely dry probe.The Euler equation, helps identifying the physical flow regime at each stage.Before and around the time of the first frame, i.e., t = −10.0ms (labelled as A in Figure 11), Dv v v Dt −g g g meaning that the problem is dominated by the quasi-static term.The pressure signal of the probes below the instantaneous free surface increases almost linearly with the almost constant vertical wave velocity V according to the instantaneous quasi-static pressure ρgV t (see Lugni et al. 6 ).To this purpose, the dashed line shown in the two panels of Figure 11, relative to the pressure at y = 35 mm and y = 50 mm, is drawn with the slope equal to the vertical velocity of the wave trough. For increasing time within stage (i), the accelerating water along the wall causes an increase of the pressure signal recorded by the wetted probes, resulting in a nonlinear pressure variation in time (see Figure 11, around t = −2.9ms labelled as B).Similarly, the spatial pressure distribution at t = −2.9ms (top-right panel of Figure 10 (Multimedia view)) increases moving toward the free surface where the fluid velocity and acceleration are larger. 
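A minimal sketch of the quasi-static estimate used above, ρg(h(t) − y_i) with a linearly rising trough h(t) = h0 + V t, is given below. The function and variable names are illustrative; V would be taken from the image analysis of the trough rise, as done in the text.

```python
import numpy as np

RHO, G = 1000.0, 9.81   # fresh-water density [kg/m^3] and gravity [m/s^2]

def quasi_static_pressure(t, y_probe, h0, V):
    """Quasi-static pressure rho*g*(h(t) - y_i) at a wall probe of height y_probe,
    assuming the wave trough at the wall rises linearly, h(t) = h0 + V*t
    (V estimated from the high-speed images); zero while the probe is dry."""
    h = h0 + V * np.asarray(t)
    return RHO * G * np.clip(h - y_probe, 0.0, None)
```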
At the focusing time (t = 0 ms), the rapid increase of the vertical acceleration induces a strong and sudden growth of the ρ Dv/Dt term in Eq. (12), which dominates the time and spatial evolution of the dynamic load. This causes an intense variation of the pressure, both in time (see the time range around label C in Figure 11) and in space (see the middle-left panel of Figure 10 (Multimedia view)), where 0.016 m is the distance between two subsequent pressure probes used to estimate the vertical pressure gradient. This means that the vertical acceleration dominates over gravity. At this stage, a jet flow is starting at the wall. The value of the jet acceleration estimated from the measured pressure gradient is lower than that given by the direct measurement of the acceleration shown in Ref. 6. This implies that a larger pressure value may occur between the pressure probes P5 and P6 at the time corresponding to panel C of Figure 10 (Multimedia view). A bias error must be accounted for when discussing this analysis, where a single experimental run is examined. However, this error is assumed not to influence the results reported in Sec. V, where the mean value of the maximum load obtained from repeated runs is considered. Because of the highly local behavior of the pressure at the time of the impact, the standard deviation of the measured pressure takes into account the variability of the maximum pressure position also.

From the kinematic evolution of Figure 10 (see middle-left panel C), the focusing area occurs at a location corresponding approximately to probe P5; as a consequence, the transducers below this area show the maximum pressure peak at the same time and with a value decreasing with the distance from the focusing area. In contrast, the time histories recorded by the pressure sensors above P5 show a time delay that is a consequence of the steadiness of the phenomenon in a reference system moving with the maximum pressure peak.13 The latter moves upwards with a velocity approximately equal to the velocity V of the wave trough6 (i.e., during the flip-through stage). During the focusing and the flip-through stages (see panels C, D, and E of Figure 10 (Multimedia view)) the vertical pressure gradients (corresponding to accelerations varying between 520 m/s2 in C and about 300 m/s2 in E) govern the kinematic field, giving the vertical acceleration of the flow. Because the last available pressure transducer is P7 (panel E), the spatial pressure variation (red curve in Figure 10) stops here. However, for a rough estimate of the pressure gradients, we assume that the pressure vanishes at the upper free surface, i.e., at y ≈ 225 mm from the bottom. The large pressure gradients are associated with the rapid turning of the flow around the focusing point. This type of pressure gradient is similar to the one observed at the spray roots during the water-entry phenomenon of a wedge. Since the portion of the wall above P5 is dry during the previous stages (i) and (ii), the pressure signals recorded in (iii) above P5 grow almost instantaneously from zero to the maximum value and afterwards suddenly drop (see Figure 11). Moreover, because of the disrupting jet occurring during stage (iii), the value of the maximum pressure peak above P5 decreases with the distance of the pressure transducer from the focusing area (see D and E panels in Figure 10 (Multimedia view), and pressure sensor P6 at the time labelled D and sensor P7 at time E in Figure 11).
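Accelerations of the kind quoted above can be recovered from the probe signals with a simple finite-difference estimate of the vertical Euler equation. The sketch below assumes two synchronized probe records separated by the 0.016 m spacing mentioned in the text; the names are illustrative and the routine is only a sketch, not the processing actually used for Ref. 6.

```python
import numpy as np

RHO, G = 1000.0, 9.81   # fresh-water density [kg/m^3] and gravity [m/s^2]

def vertical_acceleration(p_lower, p_upper, dy=0.016):
    """Estimate the vertical fluid acceleration from the vertical component of the
    Euler equation, rho*Dv/Dt = -dp/dy - rho*g, using a finite difference between
    two adjacent wall probes separated by dy (0.016 m in the present layout)."""
    dpdy = (np.asarray(p_upper) - np.asarray(p_lower)) / dy
    return -dpdy / RHO - G
```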
Deformable beam case A rather different dynamic behavior is observed when a deformable plate is inserted in a rigid wall, rather than in a fully rigid wall.As discussed in Sec.II, the plate is assumed equivalent to a beam with the lowest wet natural frequency that is Froude scaled with respect to the corresponding frequency of a prototype panel of a LNG tank.The same motion of the tank used for the rigid case is applied to ensure the highest repeatability of the event.Five repetitions of the same run have been performed for the error analysis.Figures 12 and 13 show the flow behavior and the dynamic evolution of the local loads (stress and pressure, respectively) during several stages of the flip-through phenomenon.Each panel of Figure 12 (Multimedia view), beyond the image of the instantaneous configuration corresponding to the time specified, reports also the spatial deformation of the vertical plate through the dashed curve interpolating the values of the beam displacement (circle) measured at the strain-gauge positions (diamond).For a proper representation of the deformation curve, the local displacement of the beam is multiplied by a factor of 3 × 10 5 . Figure 13, from top to bottom, shows the time evolution of the dimensionless stresses (made dimensionless with the yield stress of the aluminium, i.e., σ Y = 15 MPa) at three points along the centreline of the plate (i.e., at y = 192, 175, and 158 mm), and of the pressure measured on the rigid part of the vertical wall at a height y = 35 mm from the bottom of the tank.The dashed curve with the circles and the error bars represent, respectively, the mean value and the standard deviation of the physical quantity (i.e., stress or pressure) obtained through five repetitions of the same run.The solid line refers to the single run, whose evolution is shown by the images of Figure 12 (Multimedia view).The first six vertical dashed lines identify the times (from A to F) corresponding to the instantaneous configurations reported in Figure 12 (Multimedia view).The stress σ is calculated from the measured strain , using the relation σ = E with E = 210 MPa.The location of the pressure transducer is exactly the same used for the tests on the fully rigid wall. The kinematic evolution of the flip-through event along the deformable beam resembles the one observed along the rigid wall (see Fig. 12 (Multimedia view)).The same three stages can be identified in the movie attached to the present paper.However, large differences characterize the dynamic evolution, especially after the focusing stage.Then, by referring to the dynamic evolution reported in Figure 13 the following regimes characterize the hydroelastic behavior of the beam during a flip-through event: I. quasi-static regime, dominated by the quasi-static hydrodynamic load; II.fully hydroelastic regime, characterized by the maximum stress distribution and by the strong coupling between the hydrodynamic load and the structural reaction; III.free-vibration regime, where the structure behaves as a beam excited and free to oscillate.During the wave advancement stage (top-left panel of Figure 12 (Multimedia view)), the quasistatic hydrodynamic load dominates, causing a weak and quasi-static deformation of the wall.This behavior characterizes the hydroelastic regime I in Figure 13, called quasi-static. 
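As a small illustration of the post-processing just described, the sketch below converts a measured strain record into the dimensionless stress σ/σY through σ = Eε and extracts per-cycle maxima of the kind used later as load indicators. The material constants are passed in as parameters rather than asserted, and scipy's generic peak finder stands in for whatever routine was actually used.

```python
import numpy as np
from scipy.signal import find_peaks

def dimensionless_stress(strain, E, sigma_yield):
    """Convert a measured strain record to dimensionless stress, sigma = E*eps,
    normalized by the yield stress sigma_yield."""
    return E * np.asarray(strain) / sigma_yield

def cycle_maxima(stress_nd, fs, min_freq_hz):
    """Times and values of the per-cycle stress maxima, enforcing a minimum
    separation of one vibration period (1/min_freq_hz) between detected peaks."""
    distance = max(1, int(fs / min_freq_hz))
    peaks, _ = find_peaks(stress_nd, distance=distance)
    return peaks / fs, stress_nd[peaks]
```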
Evolving towards the wave focusing stage, the beam deforms smoothly (top-middle panel of Figure 12 (Multimedia view)).Because of the asymmetric load induced by the rise up of the trough, the second vibration mode of the beam matters at this stage.This is confirmed also by the different values of the stress loads measured by the sensors at y = 158 mm and y = 192 mm (see Fig. 13), i.e., placed symmetrically with respect to the centre of the beam at y = 175 mm.Starting from this time, the hydroelastic regime II called fully hydroelastic governs the dynamic evolution of the phenomenon (see Fig. 13) up to t = 5 ms. At the focusing time t = 0 (see top-right panel of Figure 12 (Multimedia view)) the beam reaches its maximum deformation.Now, the first vibration mode of the plate dominates the spatial deformation field; however, from the evolution of the stresses (see the first three diagrams of Figure 13), the maximum value recorded at t = 0 ms is different at the strain gauges located at y = 158 mm and at y = 192 mm.This implies that the second vibration mode is relevant even at the focusing time, as a consequence of the asymmetric distribution of the wetted length of the beam. The comparison of the pressure signal recorded at y = 35 mm for the rigid and elastic plate case (see Figure 14), emphasizes the role of the hydroelasticity.For each of them, both the average results (indicated with P 1m ) and the instantaneous curve (indicated with P 1 ) related to the attached movies are also reported.Up to the focusing time t = 0 (identified by the vertical dashed-dotted line), both signals appear similar, reaching the same value at t = 0.This means that the hydrodynamic forcing that originates from the focusing is the main cause of the maximum deformation of the wall. In the following evolution of the phenomenon, a strong hydroelastic behavior occurs as illustrated by comparing the pressure signals in Figure 14.Due to the structural reaction, at the beginning of the flip-through stage, i.e., from t = 0 ms to t = 0.6 ms (the latter is labelled as D in Figure 13), the plate moves against the incoming wave, hence counteracting the hydrodynamic load (see panels C and D in Figure 12 (Multimedia view)).This behavior causes a steady increase of the pressure field (see bottom diagram of Figure 13) up to the maximum value at time D. The deflection field along the beam at this time reveals (see bottom-left panel of Figure 12 (Multimedia view)) that the second vibration mode is still acting.The maximum pressure measured at the sensor in y = 35 mm in the deformable plate case is twice the value measured in the rigid case (see Fig. 14), this revealing a strong hydroelastic effect.The full hydroelastic coupling persists for the next three oscillations of the signals (see bottom-middle and bottom-right panels of Figure 12 (Multimedia view), corresponding to times E and F in Figure 13), that is until t = 6 − 7 ms. Later on, the beam behaves like a freely oscillating system, this characterizing the hydroelastic regime III (see Fig. 
13), called free-vibration. The elastic plate is almost fully wetted, and the first wet natural period of the beam is fully excited and governs the oscillation of the structure at this stage. Furthermore, for each oscillation cycle, the maximum stress measured on the elastic plate corresponds to a minimum of the pressure measured on the rigid wall below the deformable plate, and vice versa (see, for example, the times G and H in Figure 13). This behavior identifies the free-vibration regime of the beam, which concludes the hydroelastic interaction.

B. Analysis by means of the empirical mode decomposition

The Empirical Mode Decomposition (EMD)16 is a reliable mathematical tool to analyse the dynamic evolution of the local load at the wall, aiming to highlight the role of the hydroelasticity during the evolution of a flip-through event. Conversely, the classical Fourier Transform, assuming the stationarity of the signal and giving the correct interpretation of linear problems, is inadequate for the comprehension of a strongly nonlinear and transient event like the flip-through.5 Similar arguments are valid for the Morlet Wavelet analysis, being a Fourier-based technique. The main idea of the EMD is the use of the Hilbert transform applied to a signal with (i) local symmetry around the local zero mean and (ii) the same number of zero crossings and extrema. Fulfilment of the previous constraints allows for the definition of suitable basis functions, the Intrinsic Mode Functions (IMF), and ensures a correct application of the Hilbert Transform and, then, a meaningful definition of the instantaneous frequency.5 The latter quantity is essential to understand the dynamic evolution of a transient nonlinear signal.

Figure 15 shows the time evolution of the first three IMFs (top panel) obtained from the dimensionless stress signal σ3/σY measured at the centre of the beam, and the corresponding instantaneous frequency (bottom panel), colored as a function of the local amplitude of the IMF. The theoretical variations of the first and second wet natural frequencies, estimated through Eq. (11), are also represented with the dashed and solid lines, respectively, in Figure 15. In Eq. (11) the wet natural frequencies depend on the wetted length of the beam. This has been measured at each time from the images collected by the high-speed camera. Because of the increasing wetted length of the beam, the added mass increases, causing a decrease of the first wet natural frequency. The time variation of the dominant instantaneous frequency well reproduces the theoretical variation of the first natural wet frequency. The first IMF, reproducing the effect of the higher modes (in particular of the second mode), is almost zero at the centre of the beam, which is a node for the second
mode.Conversely, the second IMF contains most of the signal associated with the dominant first wet natural frequency during the first six oscillation cycles.During the same time range, the third IMF modulates its instantaneous frequency from the first wet natural frequency around t = 0 ms to the changing value of the first wet natural frequency approximately at t = 10 ms.After this time, the third IMF contains all the energy of the signal associated with the free-vibration regime of the beam.However, over this time span the estimated theoretical frequency slightly decreases in time (as a consequence of the rising free-surface) while the experimental value remains almost constant.This aspect is emphasized by Figure 16 which shows an enlarged view of Figure 15.This difference, though small, is taken into account to improve the hybrid numerical model for times larger than 7 − 8 ms. The above considerations imply that during the hydroelastic regime II a quick variation of the added mass occurs; conversely, during regime III the added mass remains almost constant. The strain gauge at the centre of the beam cannot capture the oscillations associated with the higher modes of vibration.To analyse the role of the second vibration mode, Figure 17 reports the time history of the first three IMFs derived from the stress measurements at gauges 2 and 4 (top panels), and the corresponding instantaneous frequency (bottom panels).It is worth to recall that the Froude scaling between prototype and model has been applied for the natural frequency associated with the first vibration mode only.Then, hereinafter, every observation about the second vibration mode is intended to be purely qualitative.As expected, the wet natural frequency associated with the second vibration mode has a marginal role and is limited to the time range around the first peak of the local load, i.e., the hydroelastic regime II and the beginning of regime III.This is confirmed by the time evolution of the first IMF in both the panels of Figure 17.The associated instantaneous frequency shows a large scattering of the data between the first (dashed line) and second (solid line) wet natural frequency of the beam.According to Faltinsen and Timokha, 7 the higher mode initially has an amplitude much lower than the lowest mode.Further, because of the relatively large damping, the higher modes nearly disappear at the scale of the period of the lowest mode.However, because of the dynamic response of the strain gauges used (see Sec. 
III) any quantitative evaluation of the energy associated with the higher modes cannot be expected.on the rigid wall, and the time variation of the wetted length of the beam.This choice is a consequence of the experimental observations that highlighted the role of the quick change of the added mass induced by the changing wetted length of the beam.Figure 18 reports the comparison between the results of the hybrid model (solid line) with the data of the experiments (dashed line) for the time history of the strains measured at the centre of the beam.Five repetitions of the same run have been considered (both for the rigid case, used as input to the hybrid model, and for the hydroelastic case) to estimate the mean values (middle panel) and the maximum (i.e., standard deviation added to the mean value, shown in the top panel) and the minimum (i.e., standard deviation subtracted from the mean value shown in the bottom panel) variation.the strain at the centre of the beam and t the thickness of the beam, is larger for the numerical results (Err% = 41%) than for the experimental data (Err% = 25%), at least at the first peak (t = 0 ms in Figure 18).This behavior is a consequence of the larger standard deviation measured on the rigid wall experiments by using a pressure transducer.Conversely, on the elastic wall the repeatability of the measured strain is higher.The hybrid numerical solver is globally able to capture the wet vibration frequency of the beam at least for the first cycles of oscillation, i.e., until 6-7 ms in Figure 18; later a phase shift occurs. V. COMPARISON WITH With the aim to explain the disagreement between the theoretical model and the experiments for t > 6-7 ms, we recall the main assumptions of the hybrid model.In particular, the solution of ( 7) is based on the linear superposition of the forcing pressure field p = p r + p v .The first term accounts for the local hydrodynamic load acting on a fully rigid beam.Furthermore p v , coming from the solution of the BVP (6) for the potential function φ v , is used to calculate the added mass term [A] in Eq. ( 7) associated with the vibrational problem of the elastic plate around a rest state.Since the solution of ( 7) depends on the instantaneous wet length of the elastic beam (through the boundary condition), the added mass will increase with h(t).As a consequence, from Eq. ( 11), the wet natural frequency of the elastic beam will decrease by increasing h(t).Even though this behavior justifies the ability of the hybrid model to properly capture the wet vibration frequency of the beam during the regime II, it cannot justify the phase shift between the numerics and the experiments occurring at t > 7 ms. The hydrodynamic force can excite the structural reaction until the corresponding energy is larger than a threshold value.This threshold value corresponds, in the present case, to the fully wet beam and occurs at t = 7 ms (see Figures 15 and 16).When the energy content of the forcing contribute is insufficient, the structure behaves as a beam in the free-vibration regime, 7 i.e., it keeps vibrating with the frequency corresponding to the fully wet beam (this identifies the hydroelastic regime III).The latter is slightly larger than the frequency corresponding to the instantaneous filling depth (see also Figure 16) but it justifies the phase shift between the numerical solution and the experiments. 
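The instantaneous frequencies compared with Eq. (11) in Figs. 15-17 are obtained, after the decomposition, from the Hilbert transform of the individual IMFs. A minimal sketch of that step is given below; it assumes the IMFs have already been extracted by an EMD routine (the decomposition itself is not reproduced here) and relies on scipy's analytic-signal implementation.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    """Instantaneous amplitude and frequency of a single Intrinsic Mode Function,
    from the phase of its analytic signal (Hilbert transform); fs is the sampling
    rate in Hz, frequency is returned in Hz."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.gradient(phase) * fs / (2.0 * np.pi)
    return amplitude, freq
```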
Because of the large scatter induced by the pressure measurements on the rigid wall and to better evaluate the capability of the hybrid numerical solver, the numerical results obtained using the maximum pressure distribution measured at the rigid wall are compared with the experimental data corresponding to the maximum strain distribution measured at the elastic wall.Even though the numerical results shown in Figure 18 account for the proper modelling of the damping term, in the following the effect of the damping term is analysed. Figure 19, therefore, shows the comparison between the experiments (magenta line) and the numerical results (black line) calculated with no damping term included and with the added mass varying in time according to the instantaneous variation of the wet length.A reasonable agreement is observed for the prediction of the first peak.However, a difference of about 15% still persists.This means that further hydroelastic effects, not included in the present model, could matter at the first peak.As already observed, a good prediction of the instantaneous hydroelastic frequency exists as long as the beam is not fully wet.Later, i.e., for t > 6 -7 ms, a time-dependent phase delay appears because of an unreliable estimate of the added mass in the numerical model (see Figure 16), as already mentioned above.Further, due to the absence of the damping term in the numerical model, prediction of the successive peaks completely fails.To overcome the disagreement, a suitable modelling of the physical damping is essential.Damping sources of the structural response are viscous dissipation, acoustic radiation damping, thermodynamic dissipation, and structural damping.The latter contribution is expected to be dominant in the present case.In Sec.III D the structural damping was assumed to be frequency dependent as a consequence of the varying wet natural period of the beam with its wet length.The damping coefficient has been determined by using the results of the hammer tests in dry and wet conditions; Figure 9 of the structural damping, this is the first assumed constant in the hybrid model and it is taken equal to the value measured in dry condition (dashed line in Figure 9); the comparison between numerical results and experimental data related to the strain gauge #3 is shown in Figure 20.The time history of the wet length, used for the calculation of the added mass term, is also reported (dashed green line).Accounting for the structural damping in dry condition is not enough to justify the experimentally observed decay.The frequency dependent damping coefficient in wet condition (solid line in Figure 9) is then used; the results are shown in Figure 21.The green dashed line shows the instantaneous variation of the wet length which has been considered both for the calculation of the added mass and of the damping term (by using the results of Figure 9 solid line).Although the method used to estimate the damping coefficients does not completely account for the dynamic variation of the wet length of the beam, the agreement between numerical and experimental results is globally satisfactory.However, some differences appear in the first two-three oscillations (see Figure 21) after the first maximum peak, and more in general during the hydroelastic regime II.They are essentially related to the decay of the pressure time history measured in the fully rigid case (see Figure 11).In the proposed hybrid model the peak of the pressure distribution measured in the rigid case causes an impulsive 
load on the structure, which begins to oscillate as in a free-vibration regime (but with a varying natural frequency due to the changing wetted length).The disagreement between the results of the hybrid model and the experimental data in the prediction of the maximum amplitudes of the structural deformation of the elastic beam indicate that a stronger hydroelastic effect occurs during the first two/three oscillations of the structure, suggesting the need of a more reliable model for the regime II.At following times, i.e., at t > 6 − 7 ms, the hydroelastic evolution resembles the free-vibration behavior with a constant added mass.This means that a constant value of the wet natural frequency of the beam follows as detailed at the beginning of this section.To this purpose, Figure 22 shows the comparison of the numerical and experimental time histories of the strain at sensor #3 with the results of the numerical model properly enhanced with the constant added mass for t > 6 − 7 ms (i.e., a constant wet length, see green dashed line in Figure 22), resulting in a good agreement both in phase and amplitude. B. Simplified models A first question arises about the effectiveness of the mathematical models used during the design stage.Two different models are typically suggested by the classification rules to take into account the liquid-structure interaction: (i) FEA and (ii) Indirect Dynamic FEA.Method (i) solves the unsteady structural problem with the external forcing coming from the rigid pressure distribution measured in the sloshing model tests.In contrast, method (ii) solves the quasi-static FEA with forcing given by the maximum rigid pressure loads measured in the model tests.A suitable DAF which depends on the rise time of the local load, amplifies the calculated local stresses to account for the dynamic effects.Moreover, the proper modelling of the added mass term associated to the vibration of the structure is questionable.In the classification notes 10 it is suggested to calculate the added mass in model (i) using a linearized potential flow solver on a significant portion of the possible wetted surface of the membrane; however, the added mass is assumed to be constant in time.In method (ii) the DAF should take into account also the added mass contribution.To better understand the limits of the simplified methods, we implemented both models for the specific problem, and we compared their results with the experimental data for the elastic wall.In particular, method (i) resembles the hybrid method we proposed previously, except for the estimate of the added mass.In method (i) the added mass is kept always constant and estimated on an a priori prescribed portion of the wetted surface.Conversely, in the hybrid model the added mass is varying in time during the rise up of the wave trough along the wall (influencing the maximum peak of the structural load), and until the end of the focusing stage; after, when the jet is formed and the elastic wall is almost fully wet, the added mass in kept constant. 
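To make the role of the DAF in method (ii) concrete, the sketch below evaluates the dynamic amplification of a single-degree-of-freedom oscillator excited by a triangular pressure pulse of given rise time, i.e., the ratio between the maximum dynamic response and the maximum static response. It is only an illustration of the concept: the pulse shape, the damping value, and the explicit time-stepping scheme are assumptions, not the procedure prescribed by the classification rules.

```python
import numpy as np

def daf_triangular_pulse(rise_time, natural_freq_hz, xi=0.0, n_steps=20000):
    """Dynamic Amplification Factor of a damped SDOF oscillator under a symmetric
    triangular load pulse of total duration 2*rise_time: ratio of the maximum
    dynamic response to the maximum static response."""
    wn = 2.0 * np.pi * natural_freq_hz
    t_end = 2.0 * rise_time + 10.0 / natural_freq_hz
    t = np.linspace(0.0, t_end, n_steps)
    dt = t[1] - t[0]
    # unit-amplitude triangular load (force per unit mass)
    load = np.interp(t, [0.0, rise_time, 2.0 * rise_time], [0.0, 1.0, 0.0])
    x, v = 0.0, 0.0
    x_max = 0.0
    for f in load:                      # semi-implicit Euler time stepping
        a = f - 2.0 * xi * wn * v - wn**2 * x
        v += a * dt
        x += v * dt
        x_max = max(x_max, abs(x))
    x_static = 1.0 / wn**2              # static response to the peak load
    return x_max / x_static
```

Sweeping the rise time at a fixed natural frequency reproduces the familiar DAF-versus-rise-time curve that method (ii) relies on; any added-mass effect would enter through the natural frequency used in such a curve.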
The comparison with the experimental results for the time history of the strains recorded by sensor #3 is shown in Figure 23. No damping term is used in the model. Although the first peak value looks similar to that estimated through the hybrid model, it occurs at a slightly different time. Furthermore, large differences characterize the subsequent evolution of the phenomenon: due to the constant added mass used, the prediction of the vibration frequency completely fails. However, if the main interest is the value of the maximum load peak, method (i), consistently with the hybrid model, still underestimates the experimental value (by around 20%). Nevertheless, this underestimation is much lower than that obtained from the quasi-static model, i.e., method (ii). In this model, Eq. (7) is applied by completely neglecting the inertial and damping terms; the pressure distribution is given by the measurements of the rigid wall experiments. The results, concerning the average and the corresponding error bar (calculated through the repetition of 5 runs) of the strain distribution along the vertical beam, are shown in Figure 24. Both the experiments (red symbols), with the reconstruction based on the use of the first natural vibration mode of the beam (red line), and the quasi-static calculations (blue symbols) are reported; also for the latter, only the first vibration mode is used. Comparing the maximum values of the strain, the ratio between the measured strain and the quasi-static one (i.e., the experimental DAF) is much larger than that proposed in Gervaise et al.18 (approximately equal to 1 at the rise time measured in the present experiments, i.e., 1 ms at model scale, corresponding to 5-6 ms at full scale), obtained using the dynamic model described in Pillon et al.17 and applied to a Mark III containment system. However, in the present experiments a completely different type of impact and a different material for the structural panel are considered. Nevertheless, some remarks can be made. The model proposed by Pillon et al.17 does not seem to account for the added mass contribution: a finite element method on a structural panel excited through the pressure load distribution measured during experiments on a rigid tank (with a well refined grid of sensors) is used. This aspect can explain why in Refs. 18 and 17 a maximum DAF = 1.6 is obtained at a rise time around 1 ms in full scale; a maximum DAF value at a larger rise time is expected when the added mass contribution is also considered. A third curve, the black one shown in Figure 24, is numerically obtained by using the simplified method (i) with the constant added mass term. In this case, a ratio between the experimental and the numerical maximum strain equal to 1.33 is achieved, confirming the essential role of the added mass in the dynamic response. Finally, the bars reported on the data shown in Figure 24 allow assessing the reliability of the maximum strain (or stress) recorded at the centre of the beam. The experimental value on the elastic beam shows a relative error around 27%, whereas the relative errors corresponding to the quasi-static and dynamic numerical models (based on the experimental maximum pressure value recorded at the centre of the beam in the rigid case) are around 42% and 48%, respectively. This confirms that the maximum strain (or stress) can be a good candidate to be used as an indicator of the maximum local load.
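Since the DAF used in the quasi-static approach depends on the rise time of the local load, a minimal numerical illustration of that dependence is given below: the response of an undamped single-degree-of-freedom oscillator to a ramp load is integrated in time and the peak displacement is normalised by the static deflection. This is a textbook-style sketch, not the DAF of the actual panel; the rise-time ratios are arbitrary and no experimental value enters the calculation.

```python
import numpy as np

# DAF of a linear oscillator under a ramp load, as a function of the load rise time
# relative to the natural period. Illustrative only.

def daf_ramp(t_rise_over_Tn, n_steps=20000):
    """DAF of an undamped SDOF system loaded by a ramp reaching its plateau at t_rise."""
    Tn = 1.0
    omega = 2.0 * np.pi / Tn
    t_rise = t_rise_over_Tn * Tn
    dt = 10.0 * Tn / n_steps
    u, v, u_max = 0.0, 0.0, 0.0
    for i in range(n_steps):
        t = i * dt
        p = min(t / t_rise, 1.0) if t_rise > 0 else 1.0   # load normalised by its plateau value
        a = omega**2 * (p - u)                             # u'' + w^2 u = w^2 p(t)
        v += dt * a
        u += dt * v
        u_max = max(u_max, u)
    return u_max   # u is already normalised by the static deflection

for r in (0.1, 0.25, 0.5, 1.0, 2.5):
    print(f"t_rise/Tn = {r:4.2f}  ->  DAF ~ {daf_ramp(r):.2f}")
```

Short rise times give a DAF close to 2, while slow loads approach the quasi-static limit of 1, which is the trend behind the rise-time dependence mentioned in the text; including an added mass shifts the natural period and hence the rise-time ratio at which the maximum DAF is reached.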
The mean value of the experimental maximum strain distribution along the wall reported in Figure 24 (red symbols) shows an evident asymmetric trend, highlighted by the comparison with the symmetric reconstruction using only the first vibrational mode (see red solid line in Figure 24). This behavior of the experimental data reflects the effect of the higher vibrational modes: their contribution to the local structural load has been previously discussed and observed at the time instant of the maximum strain. Figure 25 shows the experimental strain distribution (dot symbols) and the related reconstruction using one, two, and three modes (dashed, dotted, and solid line, respectively), and assesses the role of the higher modes at this time. The first two vibration modes recover well the experimental measurements of the strain along the elastic beam; the third mode is almost negligible. The maximum strain occurs at y = 0.17 m, i.e., 5 mm below the location of the strain gauge (in the elastic case) and of the pressure transducer (in the rigid case). This suggests the optimal positioning of the transducers (both for the rigid and elastic case) to be used in a next step of the study.

VI. CONCLUSIONS

A comprehensive experimental investigation has been carried out to explore and quantify the role of hydroelasticity during the evolution of a flip-through event6 inside a sloshing tank in low filling depth condition. A deformable aluminium plate, whose dimensions have been fixed in order to ensure the Froude scaling of the first natural vibration frequency of a Mark III structural panel, has been clamped in a stiff stainless steel wall. Strain gauges along the deformable plate centerline measure the structural load. To properly characterize the hydroelastic effects, the same experiments have been performed in a fully rigid tank, i.e., by substituting the deformable plate with a stiff plate and by using pressure transducers to measure the dynamic load along the wall. A ramp function on the sinusoidal motion of the tank forces the flip-through event to occur at the third cycle of oscillation, after a first impact on the opposite wall. The study emphasizes that the hydroelastic evolution is characterized by three different regimes, which vary from the quasi-static deformation of the beam in regime I, to a strong and fully coupled hydroelastic interaction in regime II, to a free-vibration regime III which ends the dynamic evolution.

The three regimes are detailed with the aid of figures and movies illustrating the kinematics of the flow and the dynamic evolution of the beam deformation (in the elastic case) and of the local pressure (in the rigid case). In particular, during regime I the quasi-static hydrodynamic load induces a small and quasi-static deformation of the beam. A strong and fully coupled hydroelastic behavior follows in regime II: the rapid increase of the hydrodynamic load gives rise to the maximum strain and, as a consequence of the structural reaction, the hydrodynamic pressure further increases. The varying wetted length of the beam causes a variation of the added-mass term and thus of the natural vibration frequency of the deformable plate. When the elastic plate is fully wetted, the natural frequency remains constant, characterizing the free-vibration regime III.
A numerical-experimental model, called the hybrid model, has been proposed in order to model the structural load. This model couples the unsteady Euler beam theory with the forcing term given by the experimental pressure measured in the rigid case. The added mass term is calculated using a potential flow model for incompressible liquid and assuming a quasi-static variation of the free surface; the instantaneous wetted length of the beam is determined from the experimental images. The comparison against experimental data confirms an overall satisfactory prediction of the model. However, differences appear in regime II, where a more refined hydroelastic model is necessary.

More simplified theoretical models, typically used at the design stage, have also been implemented and compared with the experimental data. They show an error similar to that of the hybrid model for the prediction of the maximum structural stress when dynamic hydroelastic effects are taken into account. In any case, they are not capable of correctly predicting the subsequent evolution of the plate deformation.

FIG. 1. Global sketch of the tank with the positioning of the elastic beam (red line).

FIG. 3. View of the plexiglas tank reinforced with aluminium and steel structures. The red oval highlights the left lateral wall, built in stainless steel and holding the deformable aluminium plate (indicated by the white arrow).

FIG. 4. Right column: enlarged view of the external side of the stainless steel lateral wall, before being mounted on the tank. The red frame highlights the part milled to hold the aluminium plate. Bottom-left panel: enlarged view of the external side of the milled part with the clamping system; the aluminium plate is mounted from the internal side, and the surrounding screw holes are not used in the present experiments. Top-left panel: enlarged view of the aluminium plate with the full-bridge strain gauges along the vertical centreline. The bulges at both vertical ends are used to clamp the plate to the stainless steel wall.

FIG. 5. Hammer test: strain distribution along the centreline of the plate at the time of the maximum strain and for two different impulsive loads (whose maximum is indicated at the top of each panel). Symbols represent the measured strains along the vertical centerline of the plate, while the red continuous line reports the strain distribution εa predicted by applying a FEM method to the 2D plate. The short-dashed red line represents the transversal strains εt calculated numerically. Finally, the green dashed line gives the difference εa - εt.

FIG. 6. Hammer test. (Top panel) Comparison between the acceleration time history on the plate measured through an accelerometer and a strain gauge. (Bottom panel) Comparison of the amplitude of the corresponding Fourier Transform.

FIG. 7. Hammer test. Left panel: time history of the strains measured by means of the strain gauge at the centre of the plate in dry condition. Right panel: amplitude of the corresponding Fourier Transform. The frequencies associated with the first two bending and torsional modes are highlighted.

FIG. 9. Hammer tests in dry and wet conditions: dimensionless values of the structural experimental damping coefficient relative to the lowest structural mode (symbols). The solid line represents the cubic polynomial function which fits the symbols. The dashed line reports the values of the structural damping in dry conditions. h/L = 0.13 gives the completely dry beam condition.
(Multimedia view)) of the pressure signal, which reaches a maximum value (approximately equal to 10 times the undisturbed hydrostatic pressure) at the probe located at y = 175 mm above the bottom of the tank. At this time, see panel C of Figure 10 (Multimedia view), the pressure gradient can be roughly estimated as ≈ 520 m/s2.

FIG. 10. Rigid case. Evolution of the flip-through event at five different times (reported on each panel). On each panel, the red dashed line represents the interpolation of the pressure data (red symbols) recorded by five transducers P3, P4, P5, P6, and P7, numbered from the lowest position, placed on the rigid wall at the locations indicated by the green diamonds (Multimedia view) [URL: http://dx.doi.org/10.1063/1.4868878.1].

FIG. 11. Rigid case. Dynamic evolution of the flip-through event. Each panel reports the time history of the pressure transducer located at a prescribed position of the completely rigid vertical wall; the position is indicated on the panel. The red dashed line with symbols represents the mean value calculated with 5 repetitions of the same run; the related standard deviation is given by the error bar. The black line displays the data relative to a single run, i.e., that whose images are reported in Figure 10 (Multimedia view) and in the attached movie. The instants labelled as A, B, C, D, and E refer to the times reported in the images of Figure 10 (Multimedia view). The slope of the black dashed line (present in the panels relative to P1 and P2 for -20 ≤ time (ms) ≤ -10) equals the vertical velocity V of the wave.

FIG. 12. Elastic case. Evolution of the flip-through event at six different times (reported on each panel). The red dashed line represents the interpolation of the beam deformation (red symbols) recorded through five gauges placed on the elastic wall at the locations indicated by the green diamonds (Multimedia view) [URL: http://dx.doi.org/10.1063/1.4868878.2].

FIG. 13. Elastic case. Dynamic evolution of the flip-through event. From top to bottom, the first three panels report the time history of the strain gauges located along the elastic wall at the positions reported in the panel itself. The fourth panel reports the time history of the pressure transducer located at y = 35 mm above the tank bottom, along the rigid wall where the elastic plate is clamped. The red dashed line with symbols represents the mean value calculated with 5 repetitions of the same run; the related standard deviation is indicated by the error bar. The black line represents the data relative to a single run, that is, the one whose images are reported in Figure 12.

FIG. 15. Elastic case. Top panel: time history of the stress measured at the centre of the beam and of the related first three Intrinsic Mode Functions. Bottom panel: time history of the instantaneous frequency associated with the three IMFs reported above. Dashed and solid lines represent the theoretical variation of the first and second wet natural frequency of the beam, respectively.
FIG. 17. Elastic case. First and third panels from top: time history of the stresses measured at positions #2 and #4 and of the related first three Intrinsic Mode Functions. Second and fourth panels: time history of the instantaneous frequency associated with the three IMFs reported above. Dashed and solid lines represent the theoretical variation of the first and second wet natural frequency of the beam, respectively.

FIG. 19. Elastic case. Comparison between the numerical prediction of the hybrid model (black line) without the damping term and the experimental results (magenta line) relative to the time history of the strains at the centre of the beam. Both data sets are related to the run corresponding to the maximum measured value.

FIG. 21. Elastic case. Comparison between the numerical prediction of the hybrid model (black line) with the varying damping term (evaluated as a function of the instantaneous wetted length of the beam) and the experimental results (magenta line) relative to the time history of the strains at the centre of the beam. Both data sets are related to the run corresponding to the maximum measured value.

FIG. 22. Elastic case. Comparison between the numerical prediction of the hybrid model (black line) with the varying damping term (evaluated as a function of the instantaneous wetted length of the beam) up to the fully wet condition (and then kept constant) and the experimental results (magenta line) relative to the time history of the strains at the centre of the beam. Both data sets are related to the run corresponding to the maximum measured value.

FIG. 23. Elastic case. Comparison between the numerical prediction of the simplified model (i) (black line), without damping term, and the experimental results (magenta line) relative to the time history of the strains at the centre of the beam. Both data sets are related to the run corresponding to the maximum measured value.

TABLE I. Dry natural frequencies associated with the first two bending and torsional modes.

of the beam. Then, several hammer tests have been performed by using several filling depths of the tank. Initially, the dry vibration frequencies have been measured and compared with the corresponding values predicted by the beam theory and by the FEM model applied to the plate. The comparison, reported in Table I, shows a satisfactory agreement. The time history of the strains measured at the centre of the plate and the corresponding Fourier Transform for a hammer test in dry condition are shown in the left and right panel of Figure 7.

Hammer tests in wet conditions: comparison between the measured natural frequency associated with the first bending mode (symbols) and the corresponding value predicted by the hydroelastic model (line) detailed in Sec. II, as a function of the filling depth of the tank.

Both the structural and the hydrodynamic damping terms calculated in Sec. III D are used in the numerical results. At a first glance, the relative error, estimated as
2018-11-27T12:38:46.562Z
2014-03-21T00:00:00.000
{ "year": 2014, "sha1": "8cd6f288aa8f67c94969229d2baa105abaf3fa4a", "oa_license": "CCBY", "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/1.4868878", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "8cd6f288aa8f67c94969229d2baa105abaf3fa4a", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225178248
pes2o/s2orc
v3-fos-license
In Situ Wet Etching of MoS2@dWO3 Heterostructure as Ultra-Stable Highly Active Electrocatalyst for Hydrogen Evolution Reaction

Electrocatalysts featuring robust structure, excellent catalytic activity and strong stability are highly desirable, but challenging. The rapid development of two-dimensional transition metal chalcogenide (such as WO3, MoS2 and WS2) nanostructures offers a hopeful strategy to increase the active edge sites and expedite the efficiency of electronic transport for the hydrogen evolution reaction. Herein, we report a distinctive strategy to construct two-dimensional MoS2@dWO3 heterostructure nanosheets by in situ wet etching. The synthesized oxygen-incorporated MoS2 was loaded on the surface of defective WO3 square nanoframes with abundant oxygen vacancies. The resulting nanocomposite exhibits a low overpotential of 191 mV at 10 mA cm−2 and a very low Tafel slope of 42 mV dec−1 toward the hydrogen evolution reaction. Long-term cyclic voltammetry cycling of 5000 cycles and more than 80,000 s of chronoamperometry tests promise its outstanding stability. The intimate and large interfacial contact between MoS2 and WO3, favoring the charge transfer and electron-hole separation by the synergy of defective WO3 and oxygen-incorporated MoS2, is believed to be the decisive factor for improving the electrocatalytic efficiency of the nanocomposite. Moreover, the defective WO3 nanoframes with plentiful oxygen vacancies can serve as an anisotropic substrate to promote charge transport and oxygen incorporation into the interface of MoS2. This work provides a unique methodology for designing and constructing excellent heterostructure electrocatalysts for the hydrogen evolution reaction.

Introduction

Hydrogen produced by electrocatalytic water splitting is a renewable and clean energy carrier [1-3]. The development of platinum (Pt)-free electrocatalysts is crucial to improving the overall efficiency of the hydrogen evolution reaction (HER), especially when facing resource shortages and the energy crisis [4-6]. Recently, two-dimensional (2D) transition metal chalcogenides, such as molybdenum disulfide (MoS2) [7,8], tungsten oxide (WO3) [9,10] and molybdenum selenide (MoSe2) [11,12], have received much attention and have been shown to catalyze the HER in strong acids. For instance, MoS2 has great potential as an alternative to Pt for catalyzing overall water splitting cost-effectively and efficiently [13-16]. Theoretical studies have shown that the edges of 2D MoS2 nanosheets are the source of high HER activity, which plays a crucial role in constructing hybrid catalysts [15,17,18]. Therefore, the development of MoS2-based catalysts featuring rich active edge sites [19] and accessible surface areas [20] is of vital significance to improve the overall efficiency of the HER. Although many studies have reported the great potential of MoS2 as a HER electrocatalyst [3,21,22], the severe stacking of MoS2 nanosheets during the preparation and electrochemical reaction processes significantly impacts its stability and hinders its application compared with Pt-based electrocatalysts [21,23]. Therefore, unless the electronic structure can be appropriately modulated by varying the chemical constituents [18] to increase the population of active sites [17], or durable supportive materials [5,24] are introduced to prevent aggregation, 2D layered MoS2 can hardly be employed in practical applications, which require long-term stability [25].
Tungsten trioxide (WO3) is another distinctive 2D layered oxide [26] with faster proton insertion kinetics than other two-dimensional transition metal chalcogenides, owing to its larger proton diffusion coefficient [9]. Recently, improving the HER performance by designing and constructing various nanostructured components has been considered an effective strategy to remarkably increase the active edge sites [25,27] and improve the long-term stability [28]. Indeed, researchers have reported that regulating the interlayer spacing of MoS2 nanosheets by forming heterostructures can optimize the HER performance [29-31]. For example, Lan and coworkers synthesized a molybdenum disulfide/nitrogen-doped reduced graphene oxide (MoS2/N-RGO) hetero-composite and found that the enlarged interlayer spacing of MoS2 is beneficial to the HER performance [32]. Because of the unique 2D ultra-thin layered morphology, which ensures ultra-fast electron transfer, the MoS2/N-RGO composite exhibited excellent catalytic activity with a lower onset potential, lower Tafel slope, larger current density and good stability over 5000 cycles. Moreover, Wang reported a new strategy to construct a well-defined W17O47-MoS2 heterostructure catalyst with a clear interface. By constructing heterostructures, it is possible to create highly accessible anion-deficit sites for precise electronic structures and an intimate hetero-interface for spatial charge-flow steering [18]. For the W17O47-MoS2 composite, the mass activity of MoS2 is 116 times higher than that of pure MoS2. Therefore, the structural combination of different nanostructures, leading to more active edge sites [33,34] and an enlarged interlayer spacing of MoS2 [35,36], has been demonstrated to be an effective approach to improve the HER activity. In this regard, the rational design and synthesis of MoS2-based catalysts, given the low-dimensional constraints and thermodynamic limitations between two completely different compounds, are highly desirable but also challenging.

Herein, we demonstrate an in situ wet etching method by growing oxygen-incorporated 2D MoS2 nanosheets on defective WO3 nanoframes (denoted as MoS2@dWO3) to form a unique two-dimensional heterostructure. The defective WO3 nanoframes with abundant oxygen vacancies serve as an anisotropic substrate to promote charge transport and carrier injection into the interface of the MoS2 nanosheets [37], while the 2D MoS2 nanosheets provide highly active sites on their edges for the HER. The resulting hetero-catalyst exhibits high activity, with a low overpotential of 191 mV at 10 mA cm−2 and a Tafel slope of 42 mV dec−1 toward the hydrogen evolution reaction. Long-term cyclic voltammetry cycling of 5000 cycles and more than 80,000 s of chronoamperometric (CA) tests promise its outstanding stability. The successful synthesis of MoS2@dWO3 via the construction of a 2D heterostructure through defect engineering (in situ wet etching) provides a unique pathway for pursuing efficient electrocatalysts for energy storage and conversion.

Results and Discussion

The MoS2@dWO3 heterostructure nanosheets were prepared by in situ wet etching of presynthesized WO3 nanosheets in the presence of a Mo-based etching agent. The pure WO3 nanosheets exhibited a uniform square shape of 100-120 nm lateral size and 8-10 nm thickness with a relatively smooth and regular surface morphology, as shown in Figure 1a.
After the in situ etching, transmission electron microscope (TEM) images showed that heterostructure hollow 2D MoS2@dWO3 nanoframes with uniform shape and size were obtained (Figure 1b). In addition, the MoS2 sheets were grown on the main body of the WO3 nanoframes through the thermolysis of (NH4)2MoS4 in DMF during a hydrothermal process at 180 °C for 10 h. Ultrasmall (10 ± 1 nm), multilayer (3-6 layers) MoS2 sheets were grown in situ on the surface of the relatively large WO3 nanoframes to construct the MoS2@dWO3 heterostructure. It is worth noting that the excess of glucose and citric acid used in the synthesis ensures the formation of the uniform 2D square nanoframe morphology of WO3 and also reduces the interface energy to promote the subsequent growth of MoS2. Through delicate control of the WO3/(NH4)2MoS4 mass ratio and the etching time, the MoS2 loading in this 2D heterostructure was systematically optimized. The morphology and subtle structural evolution of the as-prepared MoS2@dWO3 catalyst were also observed by high resolution transmission electron microscopy (HRTEM).
Compared to the direct thermolysis of (NH4)2MoS4, which gives relatively large MoS2 nanosheets of ~200 nm, as shown in Figure S2, the introduction of the WO3 sheets as a template substrate significantly restrains the lateral growth of the MoS2 nanosheets to around 10 nm, offering abundant active edge sites as potential catalytic centers for the HER. Meanwhile, the highly crystalline solid 2D WO3 nanosheets are simultaneously etched to form hollow square nanoframes during the hydrothermal reaction of (NH4)2MoS4. As shown in Figure 1c, the high-magnification lattice fringes of MoS2@dWO3 exhibit an enlarged interlayer spacing of 0.95 nm for (002) MoS2, which is considerably different from pure MoS2 with a conventional interlayer spacing of 0.62 nm. The MoS2@dWO3 heterostructure nanosheets were prepared by in situ wet etching of the WO3 nanosheets by (NH4)2MoS4 in dimethylformamide (DMF) solution, as shown in Equation (1) [38]. The intimate and large interfacial contact between MoS2 and WO3, favoring the promoted charge transfer and electron-hole separation by the synergy of defective WO3 and MoS2, is believed to be the decisive factor for improving the electrocatalytic efficiency of the nanocomposite. Specifically, the pyrolysis of (NH4)2MoS4 in the DMF environment produces MoS3 with a strong reducibility (Equation (2)), which etches the WO3 nanosheets to form nanoframes while abundant oxygen vacancies are generated. As an anisotropic substrate, WO3 is conducive to charge transport and carrier injection into the interface of MoS2. Meanwhile, controllable disorder engineering of layered MoS2 by oxygen incorporation enlarges its interlayer spacing, provides plenty of unsaturated sulfur atoms as active sites for the HER [39] and effectively regulates the electronic structure [40,41] of the nanocomposite to further enhance its intrinsic conductivity.

(NH4)2MoS4 → MoS3 + 2NH3 + H2S (1)

At the same time, the material structure of MoS2@dWO3 was further characterized via XRD. As shown in Figure 1d, the XRD pattern of MoS2@dWO3 is markedly different from that of pure 2H-MoS2 (JCPDS No. 75-1539). After the formation of the heterostructure, the nanocomposite shows a prominent (002) peak in the low-angle region at 9.3°, corresponding to a d spacing of 9.5 Å, which verifies the increased interlayer spacing observed in the TEM images. In addition, a broadened peak in the high-angle region (33°) corresponds to the (100) planes of pristine MoS2, indicating a similar atomic arrangement along the basal planes. Therefore, it can be inferred from the above characterization that the main reason for the enlarged interlayer spacing of MoS2@dWO3 is the oxygen incorporation. It is noted that both theoretical and experimental studies have implied that MoS2 with an enlarged interlayer spacing can present a more favorable free energy change for hydrogen adsorption (ΔGH) and exhibits more short-range disordering than general MoS2 with an interlayer spacing of 6.2 Å, favoring the ultrafast kinetics of the HER [15,36,42]. The chemical state and electronic properties of the catalyst surface were further probed by X-ray photoelectron spectroscopy (XPS). As shown in Figure 2, the XPS survey scan confirmed the presence of W, Mo, O and S elements in the MoS2@dWO3 heterostructure.
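The d spacing quoted above follows directly from Bragg's law, d = λ/(2 sin θ), using the Cu Kα wavelength given later in the Characterizations section. The short sketch below simply checks that arithmetic for the two interlayer spacings mentioned in the text; it is a worked-formula illustration, not part of the reported analysis.

```python
import numpy as np

# Bragg's law check of the (002) d-spacings quoted from the XRD discussion.
lam = 1.5418  # Cu K-alpha wavelength, Angstrom

def d_spacing(two_theta_deg):
    """d = lambda / (2 sin theta), with theta = 2theta/2."""
    theta = np.radians(two_theta_deg / 2.0)
    return lam / (2.0 * np.sin(theta))

def two_theta(d_angstrom):
    """Inverse relation: 2theta for a given d-spacing."""
    return 2.0 * np.degrees(np.arcsin(lam / (2.0 * d_angstrom)))

print(f"2-theta = 9.3 deg            ->  d = {d_spacing(9.3):.2f} A   (text: ~9.5 A)")
print(f"d = 6.2 A (pristine MoS2)    ->  2-theta = {two_theta(6.2):.1f} deg")
```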
Compared with pure WO3, a distinctive new peak assigned to W4+ 4f7/2 appeared at around 32.5 eV, indicating that the wet etching of pristine WO3 induces electron transfer from oxygen vacancies to tungsten cations on the defective WO3 surface (Figure 2a). Moreover, a positive shift and peak broadening of both W6+ 4f7/2 and W6+ 4f5/2 demonstrate the electron modulation and recombination between the different compounds, defective WO3 and in situ grown MoS2 [43]. In Figure 2b, the Mo 3d XPS spectrum of MoS2@dWO3 shows two characteristic peaks of Mo4+ 3d3/2 (231.75 eV) and Mo4+ 3d5/2 (228.65 eV), indicating the dominance of MoS2 in the product. However, the enhanced peaks of Mo6+ 3d5/2 (232.63 eV) and Mo6+ 3d3/2 (235.73 eV) represent the formation of Mo-O bonds [32,44], verifying that oxygen was incorporated into the MoS2 layers to generate the interlayer disorder, which also corresponds to the peak shift of S. As shown in Figure 2c, the O 1s spectrum reveals the existence of W-O-W (530.53 eV) and O-vacancies (531.68 eV) [18] in MoS2@dWO3. The increase of the O-vacancy peak area indicates the enhancement of O-vacancies after the formation of the heterostructure. Meanwhile, the appearance of a new peak at 533.46 eV is ascribed to the Mo-O bond, indicating that a certain amount of oxygen atoms were incorporated into the layered MoS2 structure. The C-OH peak in the O 1s spectrum mainly comes from the glucose ligand on the surface. All the positions of the S 2p1/2 and S 2p3/2 peaks are in accordance with the previous literature (Figure 2d) [24]. However, we observed that the S2− 2p3/2 peak shifts by 0.05 eV toward lower binding energy, which indicates that S gains electrons more easily after forming the heterostructure [45]; correspondingly, the Mo6+ 3d5/2 (232.63 eV) and Mo6+ 3d3/2 (235.73 eV) peaks are further enhanced because of the conservation of gained and lost electrons [5]. In summary, the formation of the heterostructure causes an increase of O-vacancies on the defective WO3 surface and the generation of Mo-O bonds by oxygen incorporation into the MoS2 layers.

In order to assess the HER performance of the MoS2@dWO3 heterostructure catalyst, electrocatalytic measurements were carried out using a standard three-electrode cell in N2-saturated 0.5 M H2SO4 electrolyte. Linear-sweep voltammetry (LSV) curves of the MoS2@dWO3 heterostructure catalyst and of the control samples, pristine WO3 and pure MoS2, are shown in Figure 3a. Pristine WO3 demonstrates the most limited HER activity, reflected by the low current density and attributed to the lack of active sites and its poor conductivity. Pure MoS2 exhibited significantly better activity than pure WO3, with an overpotential of 210 mV at 10 mA cm−2; the improved activity of MoS2 originates from the active edge sites of the 2D MoS2 nanosheets. The MoS2@dWO3 heterostructure was the most active catalyst, reaching the highest activity with an overpotential of 191 mV at 10 mA cm−2 and the lowest Tafel slope of 42 mV dec−1, as shown in Figure 3b. The much improved activity can be attributed to the in situ grown ultrasmall MoS2 nanosheets, more specifically to the oxygen-incorporated MoS2 active edge sites, plus the synergetic effects brought by the intimate and large interfacial contact between defective WO3 and oxygen-incorporated MoS2.
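The Tafel slope quoted above is the slope of a linear fit of the overpotential against the logarithm of the current density, η = a + b·log10(j). The sketch below shows that fit on synthetic placeholder points chosen to be consistent with a ~42 mV dec−1 slope and a 191 mV overpotential at 10 mA cm−2; it does not use the measured LSV data.

```python
import numpy as np

# Tafel analysis sketch: eta = a + b*log10(j).
# The (j, eta) pairs are synthetic placeholders, not the measured polarization curve.
j = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])             # current density, mA cm^-2
eta = np.array([0.149, 0.162, 0.179, 0.191, 0.204, 0.221])  # overpotential, V

b, a = np.polyfit(np.log10(j), eta, 1)    # slope b in V per decade, intercept a
print(f"Tafel slope ~ {b * 1000:.0f} mV/dec")
print(f"eta at 10 mA cm^-2 ~ {(a + b * np.log10(10.0)) * 1000:.0f} mV")
```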
In the environment of strong acidity and high current density, the long-term stability and durability of the heterostructure catalyst are very important for the practical application of electrochemical hydrogen production. Therefore, chronoamperometry was further performed at a constant potential of −0.21 V versus RHE. As shown in Figure 3c, the hetero-catalyst runs continuously and stably for 80,000 s at a current density of ~20 mA cm−2, implying the good durability of the heterojunction under HER working conditions. Meanwhile, a long-term cycling test of the MoS2@dWO3 heterostructure catalyst was also carried out, as shown in Figure 3d. After 5000 cycles at a scan rate of 100 mV s−1 between −0.45 and 0.3 V vs. RHE, the polarization curves before and after cycling remain similar, and a negligible activity increase is observed, which indicates the superior stability of the prepared heterostructure. Compared to recently reported MoS2-based and other nonprecious-metal-based HER electrocatalysts, our heterostructure catalyst demonstrates comparable or even more efficient performance, which is significant for the practical application of electrodes in harsh working conditions (Table 1). As control samples, the pure WO3 nanosheets were measured and exhibited poor HER activity, and the pure MoS2 nanosheets prepared without WO3 templates showed poor long-term stability and durability, as shown in Figure 3e. These experimental results show that the formation of the heterostructure nanosheets is crucial for the enhanced HER performance. Intuitively, the comparison with the TEM image of MoS2 prepared without WO3 indicates that the effect of WO3 on the catalyst preparation is significant for the confined growth of the MoS2 nanosheets. Indeed, theoretical simulations indicate that the catalytic activity of the edge sites of layered MoS2 can be effectively impacted by the underlying substrate [49]. These results demonstrate that this in situ wet etching strategy is a unique method for constructing heterostructures, which can improve the electronic modulation and increase the active edge sites by oxygen incorporation, leading to an enhanced HER performance of MoS2-based catalysts.

The excellent electrocatalytic activity and electrical conductivity of the heterostructure catalyst were further examined by electrochemical impedance spectroscopy (EIS). As shown in Figure 3f, the charge transfer resistance (Rct) of the MoS2@dWO3 nanosheets is significantly lower than that of the original MoS2 and WO3, which means that the heterogeneous interface minimizes the charge transfer barrier and improves the electronic conductivity as well. At the same time, it also shows that WO3 is an excellent matrix material, which can promote the transmission of protons at the interface [50]. In order to further evaluate the effective electrochemical active surface area (ECSA) of MoS2, WO3 and MoS2@dWO3, the electrochemical double-layer capacitance (Cdl) was further examined, as shown in Figure 4. The Cdl of MoS2@dWO3 is 83 mF cm−2, which is higher than that of pristine WO3 (8 mF cm−2) and pure MoS2 (30 mF cm−2). The plots in Figure 4d show that MoS2@dWO3 has a large active surface area, which can be attributed to the heterostructure with rich active edge sites. Our measurements indicate a strong dependence of the catalytic activity of the MoS2 film on the WO3 substrate.
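The Cdl values above are typically obtained from the slope of the capacitive current versus the scan rate in a non-faradaic potential window. The sketch below illustrates that linear fit with placeholder current values chosen to reproduce a slope of roughly 83 mF cm−2; the numbers are not the measured CV currents, and the final line simply restates the Cdl ratio already given in the text as a relative ECSA comparison.

```python
import numpy as np

# Double-layer capacitance sketch: in a non-faradaic window the capacitive current
# scales linearly with scan rate, j_cap = C_dl * v.
# The dj_half values below are placeholders, not the measured CV currents.
v = np.array([20, 40, 60, 80, 100, 120]) * 1e-3          # scan rate, V s^-1
dj_half = np.array([1.7, 3.3, 5.0, 6.6, 8.3, 10.0])      # (j_anodic - j_cathodic)/2, mA cm^-2

C_dl, offset = np.polyfit(v, dj_half, 1)                  # slope in mA s V^-1 cm^-2 = mF cm^-2
print(f"C_dl ~ {C_dl:.0f} mF cm^-2")

# Relative ECSA of two samples can then be compared through their C_dl ratio:
print(f"ECSA ratio, MoS2@dWO3 vs pure MoS2 (83 / 30 mF cm^-2) ~ {83 / 30:.1f}")
```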
Table 1. Comparison of the hydrogen evolution reaction (HER) catalytic performance of different electrocatalysts (columns: Catalyst, Synthetic Method, η (mV) at j = 10 mA cm−2, Tafel Slope (mV dec−1), Ref.).

The high HER activity and good stability of the optimized catalyst, constructed with a heterostructure and designed electronic modulations, can be mainly attributed to the following aspects: (i) the heterostructure increases the number of active edges acting as active sites for hydrogen evolution catalysis; (ii) the WO3 nanoframes with rich oxygen vacancies serve as an anisotropic substrate to promote electron transport; (iii) the oxygen incorporation in MoS2 enhances the overall intrinsic conductivity and helps improve the electronic structure; (iv) the synergistic effect between MoS2 and WO3 is beneficial to simultaneously improving the catalytic activity and stability of the material. All in all, in order to achieve high-efficiency electrocatalytic hydrogen evolution, the integrated MoS2@dWO3 heterostructure catalyst was successfully prepared by constructing heterostructural and electronic modulations, paving a new way for improving the activity of various multielement electrocatalysts.

Synthesis of WO3 Nanosheets

We synthesized WO3 nanosheets through a facile hydrothermal method. First, 0.5 mmol of Na2WO4·2H2O was added to 20 mL of deionized water to form a clear solution, and then 0.75 mmol of citric acid and 5 mmol of glucose were slowly injected into it. After ultrasonic stirring for 20 min, we slowly dropped 10 mL of HCl (3 M) into the above mixed solution. After magnetic stirring for 60 min, the mixed solution was transferred into a 50 mL Teflon autoclave and heated at 120 °C for 6 h. The precipitates were centrifuged, washed with ethanol and water several times and then dried in vacuum at 60 °C for 4 h.

Synthesis of MoS2@dWO3 Heterostructure Nanosheets

First, we weighed 5 mg of the WO3 nanosheets prepared above and added them into 25 mL of DMF under ultrasonication for 10 min at room temperature. Then, under ultrasonic stirring, this solution was added into 5 mL of DMF containing 1 mg of (NH4)2MoS4 and stirred for 20 min. The mixed solution was then transferred into a 30 mL Teflon autoclave and heated at 180 °C for 10 h. The precipitates were centrifuged, washed with ethanol and water 5 times and then dried in vacuum at 60 °C for 4 h.
Characterizations

The size and morphology of the prepared nanocomposites were observed with a JEM-1200EX (JEOL; Tokyo, Japan) transmission electron microscope (TEM) operating at 100 kV and a JEM-2100F (JEOL; Tokyo, Japan) transmission electron microscope operating at 200 kV. A Bruker AXS D8 Advance X-ray diffractometer (Bruker Daltonics Inc., Karlsruhe, Germany) was used to record the X-ray diffraction (XRD) patterns of the obtained samples with Cu Kα radiation (λ = 1.5418 Å); the operation current and voltage were 40 mA and 40 kV (2θ ranging from 5° to 80°), respectively. Finally, in order to explore the chemical states of the surface, X-ray photoelectron spectroscopy (XPS) measurements were performed on an AXIS ULTRA DLD instrument (AXIS; Manchester, UK). The energy axis for each element was determined by comparing the standard binding-energy table with the XPS survey spectrum. All electrochemical measurements in this article were performed on an electrochemical workstation (CHI 660E, CH Instrument, Inc., Shanghai, China).

Fabrication of Electrodes

First, we weighed 1.0 mg of carbon black (XC-72) and 2.2 g of MoS2@dWO3 hybrid nanocrystals, added them into 3.0 mL of ethanol and then sonicated the mixture for 60 min before transferring the catalyst to the surface of the carbon black. Finally, we took 1 mL of the mixed solution and added isopropanol (750 µL), ethanol (250 µL) and Nafion (5%, 20 µL) to form a homogeneous catalyst ink. The pure MoS2 ink was prepared in the same way (1.0 mg of carbon black and 2.2 g of MoS2 hybrid nanocrystals in 3.0 mL of ethanol, sonicated for 60 min, with 1 mL of the mixture diluted with isopropanol, ethanol and Nafion as above); the pure WO3 nanocatalyst ink was also prepared the same way. Finally, the nanocatalyst ink (8.7 µL) was dropped onto a glassy carbon (GC) electrode (3 mm in diameter) and then dried for 2 h before the electrocatalytic tests, yielding a catalyst loading of approximately 0.28 mg cm−2.

Catalytic Measurements

All electrochemical measurements in this article were performed on the electrochemical station (CHI 660E, CH Instrument, Inc., Shanghai, China). The nanocatalyst inks were dropped onto a glassy carbon electrode (GCE) used as the working electrode; a carbon rod was used as the counter electrode and a saturated calomel electrode (SCE) as the reference electrode. The electrolyte was a 0.5 M H2SO4 solution; N2 was bubbled through it for 30 min before the tests. The HER performance was tested by linear-sweep voltammetry, sweeping the potential from 0.20 to −0.60 V vs. RHE at a scan rate of 2 mV s−1. Cyclic voltammetry was performed between 0.20 and −0.20 V vs. RHE at a scan rate of 5 mV s−1. Electrochemical impedance spectroscopy measurements were carried out from 10,000 Hz to 0.01 Hz. The electrochemical stability was tested by chronoamperometry (j-t) at a constant potential of −0.21 V vs. RHE. To estimate the electrochemical active surface area of the samples, cyclic voltammetry was performed in the potential window of 0.16 to 0.36 V at scan rates of 20, 40, 60, 80, 100 and 120 mV s−1. The saturated calomel electrode (SCE) was used as the reference electrode in all electrocatalytic measurements. It was calibrated with respect to the reversible hydrogen electrode in 0.5 M H2SO4 in a H2-saturated environment; Pt wires were used as the working electrode and counter electrode.
Cyclic voltammetry (CV) was run at a scan rate of 1.0 mV/s, and the average value of the potentials at which the current crossed zero was taken as the thermodynamic potential for the hydrogen electrode reaction. The CV curve is shown in Figure S3. Therefore, in this work, E(RHE) = E(SCE) + 0.242 V.

Conclusions

In conclusion, we successfully synthesized the MoS2@dWO3 heterostructure catalyst, with an enlarged MoS2 interlayer spacing and a defective WO3 substrate, using an in situ etching method. Importantly, the in situ etching process achieved oxygen-incorporated MoS2 via the construction of a hetero-interface at the surface of the WO3 nanoframes. The MoS2@dWO3 heterostructure catalyst, with rich edge active sites and high overall conductivity, delivers substantially enhanced HER activity and long-term durability in a strongly acidic environment. The resulting nanocomposite exhibits high activity, with a low overpotential of 191 mV at 10 mA cm−2, a Tafel slope of 42 mV dec−1 and long-term stability toward the hydrogen evolution reaction. The results indicate that the MoS2@dWO3 heterostructure catalyst has large potential in water-splitting devices, and our study highlights a promising approach to the design and synthesis of other nanostructured catalysts for various promising applications.

Author Contributions: X.L. performed data analysis and wrote the study. C.W. contributed to the study design and scientific discussion of the results. All authors have read and agreed to the published version of the manuscript.
2020-09-03T09:03:59.946Z
2020-08-31T00:00:00.000
{ "year": 2020, "sha1": "fc34eaf516e57404c4e539831f1f67d2565a03a5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4344/10/9/977/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1023b745fd4f1133664f6286b308b3614cd02e9e", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
234402825
pes2o/s2orc
v3-fos-license
Lexical Bundles across Levels of Proficiency in Portuguese as a Second Language: an examination of bundle function

Received: 05/6/2020 Approved: 07/12/2020 Published: 09/02/2021

Abstract: Formulaic sequences are known for being measures of foreign language fluency for learners. Research in language processing suggests that native speakers as well as learners process these sequences as a single word (ELLIS, 1996). Nevertheless, little is known about the use of formulaic sequences in Portuguese, and even fewer studies have examined the use of formulaic sequences by learners of Portuguese. Therefore, in this study, we sought to investigate the textual function of lexical bundles extracted from a corpus of learners of Portuguese as a Second Language (PSL). Lexical bundles are sequences of three or more words that occur with larger than expected frequency in a specific corpus. In this study, we used corpus linguistics tools to extract lexical bundles that occur frequently at two levels of proficiency - beginner and intermediate - in Portuguese. These bundles were then classified according to their textual function. Results indicate that beginner-level students use more bundles associated with concrete references, while intermediate learners use more bundles associated with textual organization and stance. This study contributes to the description of Portuguese acquisition at these two levels of proficiency. In addition, the results can inform classroom activities with which PSL teachers introduce new functions of lexical bundles to students. Finally, we hope that this study motivates more research describing the language used at different stages of Portuguese acquisition.

Introduction

There has been extensive research on formulaic sequences (WRAY, 2013), especially on how important and difficult they are for learners of any foreign language, regardless of their proficiency level (PAQUOT; GRANGER, 2012). Under the overarching term formulaic language, we find several different instances of word sequences, such as collocations, idioms, lexical phrases, and lexical bundles, the latter being the object of study of the present investigation. Considering that mastering formulaic sequences - including lexical bundles - is intimately related to language proficiency, it is imperative to understand how language learners use these linguistic features across levels of development. However, we know very little about what linguistic patterns, namely lexical bundles (LBs), learners of Portuguese use, since most studies examining the use of LBs have described these structures across levels in English as a second language (L2). Ferreira (2014) has investigated how LBs in Portuguese appear in textbooks, Sardinha, Teixeira and Ferreira (2014) have focused on LBs in different registers, and Goulart (in press) has analyzed their structure. Nevertheless, such studies are scarce, hence the urgent need for further exploration of LBs in Portuguese. Unlike Goulart (in press), who analyzed the structural patterns across levels of development, this study focuses on the functional patterns of the LBs previously found in that study and relates both the structure and the function of these sequences of words. Having said that, it is our hope to contribute to a further understanding of both the structure and function of LBs in Portuguese.
This study is divided into five sections, this introduction being the first one, followed by a description of what lexical bundles are and some findings of previous research on the topic. Then, in section three, the corpus is described, as well as the methods. The results, accompanied by the discussion, are presented in section four, and the fifth and last section is dedicated to the conclusion.

Lexical bundles

Biber et al. (1999) define lexical bundles as sequences of three or more words that occur with larger than expected frequency in a given corpus. On one hand, Chen and Baker (2016) found that learners at lower levels of proficiency tend to use more bundles associated with conversation. A similar pattern was found in Staples et al. (2013), for whom lower-level learners use bundles more frequently than their more advanced counterparts, but these bundles are used in the prompts. Few studies have investigated lexical bundles in languages other than English (e.g., Tracy-Ventura, Cortes and Biber, 2007).

1) What differences, if any, are there in the types and tokens of lexical bundles at beginner and intermediate levels?

2) To what extent do the functions of the bundles extracted vary at each level of proficiency?

The Corpus of Written Productions of Portuguese as a Second Language

In order to answer the research questions posed, … was excluded from the analysis due to its small size. In addition, texts with fewer than 100 words were excluded from the analysis. Table 2 shows that most of the texts in the corpus were written in response to topics related to the individual. Nevertheless, environment-related topics become more frequent at the intermediate level. In this section, the corpus and subcorpora used for the analysis were described. In the following section, the method for bundle identification and classification will be presented in detail.

Bundle extraction

This study draws on previous findings of research examining learner language development in lexical bundles (see GOULART, in press). Therefore, bundle size and bundle extraction followed this previous investigation. Three-word bundles were selected as the most appropriate bundle size because these are short texts, varying from 100 to 600 words. In addition, upon initial analysis, it was determined that four-word bundles resulted in variable slots at the final bundle position (eu gosto de *, 'I like *'); thus, three-word bundles yielded the same grammatical and functional information as four-word bundles. For the extraction criteria, the researchers piloted different solutions in order to guarantee that the bundles were representative of the two levels being investigated, following the analysis in Tracy-Ventura, Cortes and Biber (2007). In addition, for the purposes of this study, dispersion was more critical than frequency. When examining the patterns of language development, the researchers wanted to guarantee that the bundles found were representative of that level, rather than of an individual learner's idiolect. Therefore, bundles had to occur in at least 5% of the texts in each subcorpus in order to be extracted. This guaranteed that the bundles had a frequency of at least 12 occurrences in each subcorpus, without compromising the number of bundles extracted. Bundles were extracted using the n-gram function in AntConc. After bundle extraction, their raw frequency was normalized per thousand words.

Bundle classification

This study seeks to explore specifically how bundle functions vary across two levels of proficiency in Portuguese.
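As a concrete illustration of the extraction procedure just described (three-word bundles, a dispersion threshold of at least 5% of the texts in a subcorpus, and frequencies normalised per 1,000 words), a minimal sketch is given below. The two toy "texts" are invented stand-ins, not learner data, and AntConc itself is not reproduced; the sketch only mirrors the counting and thresholding logic.

```python
from collections import Counter

# Toy stand-ins for the learner texts of one subcorpus (not real corpus data).
texts = [
    "eu gosto de morar na cidade porque eu gosto de caminhar",
    "eu gosto de estudar na cidade e gosto de morar perto",
]

def trigrams(tokens):
    """All contiguous three-word sequences in a token list."""
    return [" ".join(tokens[i:i + 3]) for i in range(len(tokens) - 2)]

freq = Counter()              # raw frequency of each bundle across the subcorpus
texts_containing = Counter()  # number of distinct texts in which each bundle appears
total_words = 0
for text in texts:
    tokens = text.lower().split()
    total_words += len(tokens)
    grams = trigrams(tokens)
    freq.update(grams)
    texts_containing.update(set(grams))

# Dispersion threshold: the bundle must occur in at least 5% of the texts.
min_texts = max(1, round(0.05 * len(texts)))
bundles = {g: f for g, f in freq.items() if texts_containing[g] >= min_texts}

for gram, f in sorted(bundles.items(), key=lambda x: -x[1])[:5]:
    print(f"{gram:25s}  raw = {f}   per 1,000 words = {1000 * f / total_words:.1f}")
```

With real data, an additional minimum raw frequency (the 12 occurrences mentioned above) would be applied on top of the dispersion criterion before classification.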
Previous studies had already examined structural development but lacked an analysis of functional development along with a correlation between function and form. Even though it is not the focus of this study, bundle structure was classified according to the categories presented in Table 3. It is worth noting that, although Hyland's (2008) and Biber, Cortes and Conrad's (2004) categories have been thoroughly used in previous studies, a functional taxonomy should emerge from the bundles found in the corpus, rather than being imposed on the data. After an initial survey of the data, the following functional taxonomy was created for the bundles extracted in this corpus.

In this section, we will briefly introduce the results of the structural patterns found across levels. Then, the functional patterns for each level will be discussed and compared. Finally, the relationship between function and structure will be examined.

The structural types of lexical bundles across levels

As explained in the section above, the structural classification used in a previous investigation of the same corpus was adapted, combining the A1 and A2 corpora into our beginner corpus and the B1 and B2 corpora into our intermediate corpus. While these are appropriate forms to respond to the prompt, we can see in Excerpt 3b how an advanced student responds to the same prompt. In Excerpt 8, we can see that, instead of using the verb gostar ('to like') to express preferences, students use na minha opinião ('in my opinion'). That is, we see an increase in the repertoire of devices students use to indicate preferences. We can also notice that the use of place referential bundles might be an outcome of the writing prompt "do you like to live in the city?".

The relationship between forms and function across levels

In this section, we examine the possible relationship between form and function at these two levels of proficiency. For this comparison, we have combined all referential bundles into a single variable. In addition, we only considered bundle type. Figure 4 illustrates the patterns found for beginner levels.
2021-05-13T00:03:15.276Z
2020-12-31T00:00:00.000
{ "year": 2020, "sha1": "10ecb3966af64bdc6629b0f2f648055ebdfd1e9e", "oa_license": "CCBY", "oa_url": "https://revistaseletronicas.pucrs.br/index.php/fale/article/download/38377/26580", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "075d7f5fea8215a07e98275057f5bcc557224a63", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Psychology" ] }
260885184
pes2o/s2orc
v3-fos-license
Design Rules for Binary Bisamide Gelators: toward Gels with Tailor-Made Structures and Properties

This study intends to develop design rules for binary mixtures of gelators that govern their assembly behavior and subsequently explore the impact of their supramolecular assembly patterns on the gels' rheological properties. To achieve these goals, nBA gelators with odd and even parities [n methylene spacers between the amide groups (n = 5-10) and 17 carbons at each end] were blended at different ratios. Such bisamides with simple structures were selected for study because their different spacer lengths offer the possibility of matching or non-matching hydrogen bonds. The results show that the assembly behavior of binary mixtures of bisamide gelators is the same in the solid and gel states. Binary mixtures of gelators which differ by only two methylene moieties in the spacer length form compounds and co-assemble into fibers and sheets, as observed for the (5BA)1(7BA)1 and (6BA)1(8BA)1 mixtures, respectively. Binary gelator mixtures of the same parity and a larger spacer length difference still lead to mixing for the odd-parity couple (5BA)1(9BA)1, but to partial phase separation for the even-parity mixture (6BA)1(10BA)1. Binary mixtures of gelators of different parities give complete phase separation in the solid state, and self-sorted gels consisting of discrete fibers and sheets in the gels of (5BA)3(6BA)1 and (5BA)3(10BA)1. The even-even binary gels (20 wt %) consisting of co-assembled sheets show higher G′ than the odd-odd binary gels (20 wt %) consisting of co-assembled fibers. In general, the self-sorting of odd and even molecules into separate primary structures results in a dramatic decrease of G′ compared to the co-assembled gels (20 wt %), except for the (5BA)1(9BA)1 gel (20 wt %). This might be due to the larger woven spheres in the (5BA)1(9BA)1 gel (20 wt %), which probably form a less entangled gel network.

Phase behavior of 5BA10BA gelators: (a) the second heating traces for various mixing ratios of 5BA and 10BA and DSC_N(T) fits on the experimental traces (the traces and fits were shifted vertically for clarity) and (b) XRD patterns of (5BA)3(10BA)1 in comparison to the single 5BA and 10BA gelators and the binary (5BA)1(10BA)1, which also shows two DSC peaks and two distinct first-order reflections (curves were normalized to the highest intensity); the insets magnify the high-angle (20°-25° (2θ)) and low-angle (0°-10° (2θ)) regions.

Table S1. Fit parameters and statistical coefficient of the DSC_N(T) function fitted to the experimental DSC traces of molecularly mixed binary 5BA7BA at different ratios [a]. ΔC_p,m,1 (W g−1 K−1): NA. Second peak ΔH2 (J g−1): 126.18±0.04, 104.52±0.23, 116.64±0.29, 131.82±0.12, 120. [a] The samples (6 mg) were heated at 10 K min−1 after calibration at the onset for the given weight and rate; in the case of (5BA)1(7BA)1, only one fitting peak is required due to the peak overlap (the error margins are from the nonlinear fitting). ΔC_p,m does not converge (Not available = NA) due to a purely mathematical artefact; if the peaks are sufficiently far apart, with a sufficient baseline tail on each side, the cumulative ΔC_p,m can be reliably determined via the DSC_N(T) function for binary systems.

Table S2. Fit parameters and statistical coefficient of the DSC_N(T) function fitted to the experimental DSC trace of 6 mg of molecularly mixed binary 5BA9BA at different ratios [a].
First peak ΔH 1 (J.g -1 ) [a] The samples (6 mg) were heated at 10 (K min -1 ) after calibration at the onset for the given weight and rate, in the case of (5BA) 1 (9BA) 1, only one fitting peak is required due to the peak overlap (the error margins are from the nonlinear fitting), ΔC p,m does not converge (Not available=NA) due to purely mathematical artefact, if the peaks are sufficiently apart with sufficient baseline tail on each side, the cumulative ΔC p,m can be reliably determined via the DSC N (T) function for binary systems. Table S3.Fit parameters and statistical coefficient of the DSC N (T) function fitted to the experimental DSC trace of 6mg of molecularly mixed binary 6BA8BA at different ratios [a] .[a] The samples (6 mg) were heated at 10 (K min -1 ) after calibration at the onset for the given weight and rate, in the case of (6BA) 1 (8BA) 1, only one fitting peak is required due to the peak overlap (the error margins are from the nonlinear fitting), ΔC p,m does not converge (Not available=NA) due to purely mathematical artefact, if the peaks are sufficiently apart with sufficient baseline tail on each side, the cumulative ΔC p,m can be reliably determined via the DSC N (T) function for binary systems. Table S4.Fit parameters and statistical coefficient of the DSC N (T) function fitted to the experimental DSC trace of 6mg of molecularly mixed binary 6BA10BA at different ratios [a] .[a] The samples (6 mg) were heated at 10 (K min -1 ) after calibration at the onset for the given weight and rate, in the case of (6BA) 1 (10BA) 1, only one fitting peak is required due to the peak overlap (the error margins are from the nonlinear fitting), ΔC p,m does not converge (Not available=NA) due to purely mathematical artefact, if the peaks are sufficiently apart with sufficient baseline tail on each side, the cumulative ΔC p,m can be reliably determined via the DSC N (T) function for binary systems. Table S5.Fit parameters and statistical coefficient of the DSC N (T) function fitted to the experimental DSC trace of 6mg of molecularly mixed binary 5BA6BA at different ratios [a] . First peak ΔH 1 (J.g -1 ) [a] The samples (6 mg) were heated at 10 (K min -1 ) after calibration at the onset for the given weight and rate, in the case of (5BA) 3 (6BA) 1, only one fitting peak is required due to the peak overlap (the error margins are from the nonlinear fitting), ΔC p,m does not converge (Not available=NA) due to purely mathematical artefact, if the peaks are sufficiently apart with sufficient baseline tail on each side, the cumulative ΔC p,m can be reliably determined via the DSC N (T) function for binary systems. Table S6.Fit parameters and statistical coefficient of the DSC N (T) function fitted to the experimental DSC trace of 6mg of molecularly mixed binary 5BA10BA at different ratios [a] . First peak ΔH 1 (J.g -1 ) [a] The samples (6 mg) were heated at 10 (K min -1 ) after calibration at the onset for the given weight and rate, in the case of (5BA) 3 (10BA) 1, only one fitting peak is required due to the peak overlap (the error margins are from the nonlinear fitting), ΔC p,m does not converge (Not available=NA) due to purely mathematical artefact, if the peaks are sufficiently apart with sufficient baseline tail on each side, the cumulative ΔC p,m can be reliably determined via the DSC N (T) function for binary systems.
2023-08-15T06:17:32.058Z
2023-08-14T00:00:00.000
{ "year": 2023, "sha1": "e72bb73e71cd7b7303e88f6dd913952cddd04fe4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1021/acs.langmuir.3c01487", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d4c9b84591beb8e3fce5fe81a894a925f81ade0a", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
252873687
pes2o/s2orc
v3-fos-license
Parameter Averaging for Feature Ranking Neural networks are known to be sensitive to initialisation. The methods that rely on neural networks for feature ranking are not robust, since their rankings can vary when the model is initialized and trained with different random seeds. In this work, we introduce a novel method based on parameter averaging to estimate accurate and robust feature importance in the tabular data setting, referred to as XTab. We first initialize and train multiple instances of a shallow network (referred to as local masks) with "different random seeds" for a downstream task. We then obtain a global mask model by "averaging the parameters" of the local masks. We show that although the parameter averaging might result in a global model with a higher loss, it still leads to the discovery of the ground-truth feature importance more consistently than an individual model does. We conduct extensive experiments on a variety of synthetic and real-world data, demonstrating that XTab can be used to obtain global feature importance that is not sensitive to sub-optimal model initialisation. Introduction Neural networks (NNs) have gained wide adoption across many fields and applications. However, one of the major drawbacks of NNs is their sensitivity to weight initialisation [19]. This drawback is not critical for most classification and regression tasks, and is less obvious in applications such as explainability in most computer vision (CV) tasks. The problem is more pronounced in settings in which we pay attention to individual features (e.g., a feature in tabular data, or a pixel in an image) rather than groups of features (e.g., a region in an image), and it becomes critical in settings in which we might need to make costly decisions based on the explanation that the model gives for its outcomes. A few such applications include disease diagnosis in clinical settings, drug repurposing in drug discovery, and sub-population discovery for clinical trials, in all of which the discovery of important features is critical. In this work, we investigate the robustness of neural networks to model initialisation in the context of feature ranking, and conduct our experiments in the tabular data setting. The methods developed to explain predictions should ideally be robust to model initialisation. This is especially important to build trust with stakeholders in fields such as healthcare. In this work, we define "robustness" as the property that the feature ranking from the model is not sensitive to sub-optimal model initialisation. Some examples of robust models are seen in tree-based approaches such as the random forest [3] and XGBoost [6], especially when they are used together with methods such as permutation importance. In these methods, each tree is grown by splitting samples at each decision point using an impurity metric such as the Gini index for the classification task. The importance of a feature in a single tree is typically computed by how much splitting on that feature reduces the impurity, weighted by the number of samples the node is responsible for. The importance scores of the features are then averaged across all of the trees within the model to get their final scores. It is this averaging that might be one of the reasons why these models are robust and consistent when used for feature ranking.
However, we should make a distinction between the robustness of a method and the correctness of its feature ranking, as tree-based methods are known to have their own shortcomings when estimating feature importance [28]. To get a robust explanation using neural networks, we could use an ensemble approach by training multiple neural network-based models to get feature importance, and use the majority rule to rank them. However, ranking features using an ensemble of models may still not be easy in cases where the same feature is ranked at different positions with roughly equal frequency across the models. Moreover, the ensemble approach requires us to store all models so that we can use them to explain a prediction at test time, which is not ideal. Instead, in this work, we propose a novel method in which we obtain a single global mask model that is based on averaging the parameters of multiple instances (local masks) of the same model. We take advantage of the sensitivity of NNs to initialization by initializing and training each local mask with a different random seed. We show that although the global model might have a higher loss than an individual model, it ranks features more correctly and consistently, and hence can be used to extract feature importance. Our primary contributions in this work are the following: We obtain a global model by averaging the parameters of multiple instances of a shallow neural network trained with different random initialisations and use it to extract feature importance. The global model obtained in this manner might have a higher loss than any of the individual models [19]. We show that, although this is true, the global model is still able to discover the ground-truth feature importance more consistently than an individual model does. We also demonstrate that weight regularization such as dropout and weight-clipping can improve the robustness and consistency of the global model. We show that the existing state-of-the-art (SOTA) methods proposed for feature ranking or selection are not robust to model initialisation. Finally, we provide insights via an extensive empirical study of parameter averaging using both synthetic and real tabular datasets. Method Parameter averaging is extensively studied in the context of Federated Learning [19], in which individual models are trained on datasets stored on different devices, and a global model is obtained by averaging the individual models in various ways. For example, naive parameter averaging is shown to give a lower loss on the full training set than any individual model trained on a different subset of the data when the individual models are initialized with the same random seed [19]. It is well known that the loss surface of typical neural networks is non-convex [19] and, hence, averaging the parameters of models could result in a sub-optimal global model, especially when their parameters are initialised differently. However, the loss surfaces of over-parameterized NNs are shown to be well behaved and less prone to bad local minima in practice [7,8,12]. In light of these observations, we investigate settings in which we can combine multiple models that are initialized and trained with different random seeds to obtain a global model that is less sensitive to sub-optimal initialisation of any individual model. So, in this work, we propose a framework to obtain such a global model that can be used for both feature ranking and selection.
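To make the core idea concrete, the following is a minimal, hedged sketch (our own illustration, not the authors' released code) of training K local mask models with different random seeds and averaging their parameters element-wise; the architecture, the training routine, and the helper names are placeholders.

import copy
import torch
import torch.nn as nn

def train_local_mask(seed, make_model, train_fn):
    # Each local mask starts from a different random initialisation by re-seeding before creation.
    torch.manual_seed(seed)
    model = make_model()
    train_fn(model)   # placeholder: trains the mask (with encoder/classifier) on the full training set
    return model

def average_parameters(models):
    # Naive parameter averaging: element-wise mean over the state dicts of all local masks.
    global_model = copy.deepcopy(models[0])
    avg_state = {
        key: torch.stack([m.state_dict()[key].float() for m in models]).mean(dim=0)
        for key in global_model.state_dict()
    }
    global_model.load_state_dict(avg_state)
    return global_model

# Hypothetical usage with d input features:
# make_model = lambda: nn.Sequential(nn.Linear(d, d), nn.LeakyReLU(), nn.Linear(d, d), nn.Sigmoid())
# local_masks = [train_local_mask(seed, make_model, train_fn) for seed in range(10)]
# global_mask = average_parameters(local_masks)

The averaging step mirrors the averaging that tree ensembles use for feature importance, except that here it is applied to the mask parameters themselves rather than to per-tree scores.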
We show that the global model is able to extract feature importance correctly and consistently, especially when the network architecture is shallow. We also show that this behaviour breaks down for deep architectures, although regularizing their weights still helps improve them. Figure 1 shows our framework, in which we use a shallow neural network as a mask generator that in turn is used to learn important features and their weights for a downstream task. In this work, without loss of generality, we use the classification task for the experiments, as shown in Figure 1 (right). Training High-level overview: A mask generator, an encoder and a classifier are trained K times using the same training set. The K training runs can be parallelized in a distributed setting, or can be run in series on the same machine. At the beginning of each run, we change the random seed before initialising all models (i.e., mask, encoder, and classifier) using Kaiming He uniform initialization [13] with a gain of √5 for linear layers. At the end of each training run, we keep the learned weights of the mask model, referred to as a local mask, so we have K different sets of weights for the same mask model at the end of K runs. Then, we average the parameters of the K local mask models to obtain the weights of the global mask model. In Section 3.4, we show that the global mask is good at extracting feature importance, but it can be sub-optimal for the classification task since it has a higher loss than an individual model, as shown in Figure 5. Thus, we initialise and train the models one final time, during which we combine the output of the global mask model (weights frozen) with the one from a local mask (trained). The local mask is trained to gain back any potential loss in classification performance. We should note that one can also choose to fine-tune the global mask, but we prefer to use it as a reference to improve on in the final training. Training to obtain a local mask: We train a local mask generator, a classifier and an encoder for a downstream task. The mask generator M_l takes the data X and generates a mask m = m_l. We then mask the input X by entry-wise multiplication with m^2 to generate a masked input X_M. We use m^2 instead of m to push low values in m towards zero; in our experiments, we observed that using m^2 works better than m: m = M_l(X), and X_M = m^2 ⊙ X (1). Inspired by the proposal for subsetting features in SubTab [31], we then generate subsets of data by dividing the features of the masked input X_M into subsets. Learning from subsets of features is shown to be effective for learning good representations for downstream tasks such as classification while enabling parameter sharing between the features of the tabular data [31]. We also add noise to randomly selected features in each subset, since we observe that adding noise improves classification performance and the robustness of feature ranking, as discussed in Section J of the Appendix. To add noise, we first generate a binomial mask, β, and a noise matrix, ε, both of which have the same shape as the subsets and are re-sampled for each subset. The entries of the binomial mask are set to 1 with probability p, and to 0 otherwise. The corrupted version x_ic of subset x_i is then generated by applying the noise ε to the entries of x_i selected by β (Equation 2). Please note that different noise types can be used to generate ε. In this paper, we mainly experiment with Gaussian noise, N(0, σ^2), except for the SynRank100 dataset, for which we use swap noise [31].
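To illustrate the masking-and-corruption step just described, here is a hedged sketch (our own simplification): the mask is squared and applied entry-wise (Equation 1), the features are split into subsets, and a binomial mask selects entries to perturb. The non-overlapping chunking, the additive application of the Gaussian noise, and the parameter values are assumptions for illustration only.

import torch

def mask_and_corrupt(x, mask_model, n_subsets=3, p=0.2, sigma=0.1):
    # Equation 1: generate the mask and apply its square entry-wise to the input.
    m = mask_model(x)                                    # (batch, d), entries in (0, 1) via sigmoid
    x_masked = (m ** 2) * x
    # Feature bagging: split the feature dimension into subsets (the paper also allows
    # a percentage of shared features between subsets; plain chunking is a simplification).
    subsets = torch.chunk(x_masked, n_subsets, dim=1)
    corrupted = []
    for x_i in subsets:
        beta = torch.bernoulli(torch.full_like(x_i, p))  # binomial mask: 1 with probability p
        eps = sigma * torch.randn_like(x_i)              # Gaussian noise N(0, sigma^2)
        corrupted.append(x_i + beta * eps)               # perturb only the selected entries (additive form assumed)
    return m, corrupted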
The encoder takes each of the corrupted subsets {x_ic, x_jc, x_kc, ...} and projects them up to generate the corresponding embeddings {h_i, h_j, h_k, ...}. As in SubTab [31], we aggregate the embeddings by using mean aggregation to get the joint embedding h, as shown in Figure 1. Finally, the classifier makes a prediction using the joint embedding h. We minimize the total loss by using the objective function in Equation 3, which consists of two loss functions: i) cross entropy for the classification task (Equation 4), and ii) a mask loss consisting of the Gini index and an extra term taking the mean over the entries of the generated mask to induce sparsity (Equation 5). In the final training run, the final mask m_f is obtained by summing the output of the global mask (with frozen weights) and the local mask (being trained), followed by scaling this output to make sure that the maximum entry in the mask is 1, as shown in Equations 7 and 8. We should note that C is a scalar, i.e., the maximum entry in the sum m_g + m_l. We use the same loss functions described in Equations 3, 4 and 5 to update the parameters of the local mask, encoder and classifier. We should note that m_f can be computed in various ways, such as using a gating mechanism similar to the input and forget gates in LSTMs [14]. We can also choose to keep updating M_g in a sequential manner rather than averaging the parameters of multiple models all at once. We leave these ideas as future work. Our method is summarized in Algorithms 1 and 2 in the Appendix. Test time At test time, we use m_g shown in Equation 7 to infer the feature importance; m_g is shown to give a robust global ranking of features in our experiments. In XTab, the importance score for a feature is the mask weight in the final generated mask. The mask weight indicates the feature's relative importance, and we rank the features based on their mask weights. We extract the global feature importances for the test set by getting the mask values for all samples and computing the mean value over the samples for each feature: f̄ = (1/N) Σ_{i=1..N} f̂_i, where f̂_i ∈ R^d represents the mask weights (i.e., feature importance) for the d features of sample i, and f̄ gives the mean of the mask weights over the N samples; we use it when computing the global feature importance. Finally, when ranking the categorical features, we can rank the individual one-hot encoded features to show the importance of each sub-category. We can also sum the weights of each one-hot encoded feature to get the overall weight for the parent category. We use both in our experiments when comparing our method to other methods in Sections J.4 and J.5 of the Appendix.
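Before moving on to the experiments, the test-time procedure can be summarized with a short hedged sketch (ours, not the released implementation): the frozen global mask and the trained local mask are summed and rescaled so the maximum entry is 1 (Equations 7–8; whether that maximum is taken per sample or per batch is not specified here, so the sketch uses the batch maximum), and the global feature importance is the mean mask value per feature over the test samples.

import torch

def combined_mask(m_g, m_l):
    # m_f = (m_g + m_l) / C, where C is the maximum entry of the summed mask (batch maximum assumed).
    s = m_g + m_l
    return s / s.max()

def global_feature_importance(mask_model, loader):
    # Mean mask weight per feature over all test samples, then rank features by it.
    total, n = None, 0
    with torch.no_grad():
        for x, _ in loader:
            m = mask_model(x)                            # (batch, d) mask weights per sample
            total = m.sum(dim=0) if total is None else total + m.sum(dim=0)
            n += m.shape[0]
    importance = total / n
    ranking = torch.argsort(importance, descending=True)
    return importance, ranking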
Experiments We conduct extensive experiments on a diverse set of tabular datasets, including six synthetic datasets as well as real-world datasets such as UCI Adult Income (Income) [16] and UCI BlogFeedback (Blog) [4]. We conduct our initial experiments on synthetic datasets since their ground-truth important features are known. We also compare the global feature rankings obtained by the proposed method for the synthetic datasets to those given by some of the popular methods, such as permutation feature importance used together with the random forest [3] and gradient boosting classifier [22], as well as recently published neural network-based methods such as Invase [33], L2X [5], TabNet [1], Saliency Maps [26], and Integrated Gradients [29]. In our framework, we use a shallow, overcomplete encoder architecture with 1024 units in the hidden layer and leakyReLU as the activation function for all datasets [31]. The summary of model architectures and hyper-parameters such as the number of subsets, the percentage of features shared between subsets, the masking ratio, the noise variance, etc. is in Section C.1 and Table A1 in the Appendix. We report the detailed results on the Income and Blog datasets in Sections J and K, respectively, while additional experiments using the synthetic datasets from L2X [5] are in Section I of the Appendix. Data SynRank dataset: We generate a synthetic dataset, referred to as SynRank, consisting of training and test sets with 10k samples each for a binary classification task, to evaluate whether our method can rank important features in the correct order. We first generate data X from a 10-dimensional standard Gaussian with no correlations across the features, N(0, I). We then shift the sixth feature, f_6, to be centered around −10 for the first 45% of the samples. For the next 35% of the samples, we shift the first feature, f_1, to be centered around 10. The remaining 20% of the samples are kept as is. We generate the label Y by sampling it as a Bernoulli random variable with P(Y = 1|X) = 1/(1 + g(X)). In this case, g(X) is defined as exp(f_6), exp(f_1) and exp(f_2) for the 45%, 35% and 20% of the samples, respectively. So the first 45% and the next 35% of the samples will be labeled as 1 and 0 with high probability, respectively. For the remaining 20% of the samples, we can expect the proportions of class labels to be similar since f_2 is drawn from a standard Gaussian with µ = 0. Based on this dataset, we expect our method to discover the global feature importance ranking as f_6 > f_1 > f_2. Figure 2: SynRank dataset: a) Feature rankings from each of 10 local masks M_lk, referred to as l_k in the figure, obtained at a particular training run for 10 separate runs. b) Feature rankings from the global model, obtained by averaging the parameters of the individual models up to a specific run, i.e., cumulative average (CA). For example, g_3 corresponds to the global model obtained by averaging the parameters of the first 3 local masks (l_1, l_2, l_3). c) The feature importance weights from M_g obtained by averaging the parameters of all local masks. SynRank100 dataset: This dataset is the same as SynRank, but it has 100 features instead of 10. The features f_100, f_1, and f_75 are the equivalents of f_6, f_1, and f_2 in SynRank, respectively, and hence the feature ranking is f_100 > f_1 > f_75 while the remaining 97 features are uninformative. Income: Income is a public dataset based on the 1994 Census database [16]. It is used for a classification task of predicting whether the income of a person exceeds $50K/yr by using heterogeneous features such as age, gender, education level and so on. It contains 32.5k and 16k samples for the training and test sets, respectively. The dataset has 14 attributes consisting of 8 categorical and 6 continuous features. We dropped the rows with missing values and encoded categorical features using one-hot encoding. Once we encode the categorical features as one-hot, we end up with 105 features in total. BlogFeedback: Referred to as Blog in this work, it is a UCI dataset [9] and contains the number of comments in the upcoming 24 hours for blog posts. It includes 281 variables consisting of 280 integer and real-valued features and 1 target variable indicating the number of comments a blog post received in the next 24 hours relative to the basetime. We converted the target to a binary variable to use the data for a classification task of predicting whether there is a comment for a post.
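Returning to the SynRank construction described above, the generative process can be written down directly; the NumPy sketch below is our own illustration (the seed and array layout are arbitrary), following the stated centre shifts, proportions, and logistic link.

import numpy as np

def make_synrank(n=10000, d=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))          # 10-dimensional standard Gaussian, uncorrelated features
    n1, n2 = int(0.45 * n), int(0.80 * n)
    X[:n1, 5] -= 10                          # first 45%: f6 centered around -10
    X[n1:n2, 0] += 10                        # next 35%: f1 centered around +10
    g = np.empty(n)                          # remaining 20% left unchanged; their labels are driven by f2
    g[:n1] = np.exp(X[:n1, 5])
    g[n1:n2] = np.exp(X[n1:n2, 0])
    g[n2:] = np.exp(X[n2:, 1])
    y = rng.binomial(1, 1.0 / (1.0 + g))     # P(Y = 1 | X) = 1 / (1 + g(X))
    return X, y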
More details on Income, Blog and the synthetic datasets from L2X [5] are given in Section B of the Appendix. Comparing Global Model to Local Models We start our experiments with the classification task on the SynRank dataset to get insights into how parameter averaging works for extracting feature importance, as shown in Figure 2. The results for the L2X datasets [5] can be found in Section G, while the details on hyperparameters such as the probability p used for generating the binomial mask β and the variance of the Gaussian noise ε ∼ N(0, σ^2) can be found in Section C.1 of the Appendix. We train our models on the whole training set for the downstream task 10 times, each time with a different random seed. We store the parameters of the trained masks, referred to as local masks, from each training run and denote them as {M_l1, M_l2, . . . , M_l10}. We examine the feature importance obtained from each of the 10 local masks for the test set (Figure 2a). We observe that each local mask gives a slightly different ranking. More specifically, we could get a different ranking depending on which seed is used when training the models. The main reason for this variation is the model initialisation, since everything else is kept the same across the runs shown in Figure 2a. We then obtain global masks by cumulatively averaging the parameters of the local masks (Figure 2b), and we refer to M_g10 as M_g for simplicity in the rest of the paper. We can see that the feature ranking becomes more stable as we use more local masks in the parameter averaging to obtain the global mask model (Figure 2b). Figure 2c shows the feature weights obtained from M_g = M_g10. We first note that the weights of the features are correlated with the frequency and position of their ranks across all local masks. Specifically, f_6 is ranked a little higher on average than f_1 (since f_1 occasionally takes the 3rd rank). Hence, M_g correctly gives f_6 a little more weight than f_1 and suggests f_2 as the 3rd most important feature, as shown in Figure 2c. Similar observations can be made using the results for the L2X datasets in Section G of the Appendix. Moreover, we investigate the instance-wise feature importance for XTab. Since we optimize our models using a global objective function (Equation 3), we expect M_g to be biased towards globally important features when estimating the feature importance for an individual sample, as confirmed in Section E.1 of the Appendix. We compare the robustness and consistency of the global feature importance extracted from various methods by running each method 10 times with different random seeds on the SynRank dataset in Table 1. XTab discovers the top three features as "f_6 > f_1 > f_2" consistently. This ranking is the same as the one obtained by using permutation importance on the random forest and gradient boosting classifier, two of the most commonly used models. However, the rankings from TabNet [1], Invase [33], L2X [5] and Saliency Maps [26] are not robust to initialisation, although they perform well in terms of accuracy. For example, TabNet sometimes confuses the ranking of the important features (e.g., Run# 1, 2 and 8), or ranks uninformative features such as f_3 and f_8 as important (Run# 5, 6, 7). Similar observations can be made for other models (incorrect rankings shown in bold), indicating their susceptibility to model initialisation.
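One simple way to tabulate the kind of robustness comparison reported in Table 1 (our own illustration, not the paper's evaluation script) is to collect the per-run feature importances and count how often the expected top-3 order f_6 > f_1 > f_2 is recovered:

import numpy as np

def top3_recovery_rate(importances_per_run, expected_top3=(5, 0, 1)):
    # importances_per_run: one 1-D array of per-feature importances for each random seed.
    # expected_top3: 0-based indices of the expected ranking (f6 > f1 > f2 for SynRank).
    hits = 0
    for imp in importances_per_run:
        top3 = tuple(np.argsort(imp)[::-1][:3])
        hits += int(top3 == tuple(expected_top3))
    return hits / len(importances_per_run)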
Please note that Invase and L2X are originally proposed as feature selection methods, and that we use the feature-selection probabilities and the number of times a feature is selected across all test samples when computing the rankings for Invase and L2X, respectively. In our experiments with other datasets, we observe that gradient-based approaches such as Saliency Maps [26] and IG [29] give more consistent rankings across multiple runs compared to the ones that explicitly generate a mask for feature selection or ranking (e.g., TabNet [1], Invase [33]). The details of the training for the other models as well as the comparison of the ranking results for the L2X Nonlinear Additive and Switch datasets can be found in Sections C and I of the Appendix, respectively. The effect of weight regularization on the robustness. We run additional experiments on SynRank under three conditions: we apply i) no weight regularization to the weights of the mask model, i.e., our original setting so far, ii) dropout with p = 0.2 for the layers with leaky ReLU activation, and iii) weight-clipping ([−0.2, 0.2]) to limit the magnitude of the weights in each layer. For each of the three cases, we train 20 separate models and compare two different settings. In the first setting, we compute the variation in the feature rankings given by the 20 local mask models (top row in Figure 3a-c). In the second setting, we obtain 100 global models, each of which is obtained by averaging the parameters of 10 local models bootstrapped from the 20 models. We compare the variation in feature rankings given by the global models (second setting; bottom row in Figure 3a-c) to that of the 20 local models (first setting; top row). We observe that: i) regularization methods such as dropout and weight-clipping have little effect in improving the variation across the 20 local models (e.g., comparing f_1, f_2 and f_6 across the three cases in the top row); ii) similarly, they do not improve the robustness of the parameter averaging significantly (e.g., comparing the same features across the three cases in the bottom row); iii) parameter averaging results in a more robust estimation of feature rankings (comparing the top and bottom rows). Overall, the global models are able to discover the important features in the correct order (e.g., f_6 > f_1 > f_2), and the variation in feature ranking across global models is small (almost zero) for the ground-truth important features (e.g., f_1, f_2 and f_6 in the bottom row of Figure 3c). We should note that we conduct the same experiment for the SynRank100 (Figure 3d), L2X Switch, Income, and Blog datasets, for the latter three of which the weight regularization helps improve the robustness of parameter averaging (Figures A7, A17, and A21 in the Appendix, respectively). Thus, weight regularization can help improve the robustness, and weight-clipping works better than dropout in our experiments. We also observe that the parameter averaging itself pushes the magnitude of the weights towards a tight range around zero, as shown in Figure A8 (Section H.1 of the Appendix), indicating a potential relationship between robustness and a tighter weight distribution. Please note that the results from the repeat of the experiment in Figure 3c for the Income and Blog datasets are shown in Figure 4.
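As a hedged sketch of the weight-clipping variant and the bootstrap procedure above (the helper names and the without-replacement sampling are our own assumptions), the parameters can be clamped to [−0.2, 0.2] after each optimizer step, and bootstrapped global models can be formed from a pool of local masks:

import random
import torch

def clip_weights_(model, bound=0.2):
    # Weight-clipping regularization: limit the magnitude of every parameter after each update.
    with torch.no_grad():
        for param in model.parameters():
            param.clamp_(-bound, bound)

def bootstrap_global_models(local_masks, n_global=100, k=10, seed=0):
    # Build many global models, each averaging k local masks drawn from the pool,
    # to measure the variation of the resulting feature rankings.
    rng = random.Random(seed)
    models = []
    for _ in range(n_global):
        sample = rng.sample(local_masks, k)        # drawn without replacement (an assumption)
        models.append(average_parameters(sample))  # the averaging helper sketched earlier
    return models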
The robustness and consistency of parameter averaging Exploring the mask generator with deeper architecture. We re-run the robustness experiments by replacing the shallow mask model with a deeper model (5 hidden layers) and show that weight regularization also helps with the robustness of parameter averaging in deeper networks, although the parameter averaging does not work as well as for the shallow networks, especially if weight regularization is not used (see Figure A15(f) for the Income dataset in the Appendix). Additional results for SynRank, L2X Switch, and Income can be found in Sections E.3, H.2 and J.3 of the Appendix, respectively. Comparing the loss and solution space of local and global models. We consider two sets of parameters: θ_l for a local model and θ_g for the global model. We can compare the possible loss and solution space by interpolating from the local to the global model: θ* = (1 − α)·θ_l + α·θ_g, where α is swept from 0 to 1 in 50 steps. Figure 5a-b shows two separate examples of such an interpolation done using the SynRank dataset. In Figure 5a, the local model (α = 0) has the wrong feature ranking of f_1 > f_6 > f_2. As we move from the local model (α = 0) to the global model (α = 1), the estimate of the feature ranking gets better, and the global model estimates the ranking correctly (f_6 > f_1 > f_2). Similarly, in Figure 5b, a different local model has the wrong feature ranking of f_6 > f_2 > f_1, while the global model again gives the correct ranking. Moreover, for SynRank the expected global feature importance weights are known, so we can measure how far the estimated weights are from them. We plot how this loss changes as we interpolate from the local to the global model in Figure 5c, which corresponds to the interpolations in Figure 5a-b. It indicates that the loss increases as we move towards the global model, which is mainly due to the fact that the noise floor (nf) increases in both cases, as shown in Figure 5a-b, and that the weight of f_6 moves away from 0.45 in the case of Figure 5b. Averaging the parameters of multiple models with different initialisations is known to give a global model that might have a higher loss than any of the local models [19]. However, we show that the global model is still better at estimating the feature ranking and more robust than the local models.
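The interpolation θ* = (1 − α)·θ_l + α·θ_g described above can be coded compactly; the sketch below (ours, with a hypothetical evaluation callback) sweeps α from 0 to 1 in 50 steps and evaluates the probe model at each point.

import copy
import torch

def interpolate_models(local_model, global_model, evaluate_fn, steps=50):
    # theta* = (1 - alpha) * theta_local + alpha * theta_global, for alpha swept over [0, 1].
    probe = copy.deepcopy(local_model)
    sd_l, sd_g = local_model.state_dict(), global_model.state_dict()
    results = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        mixed = {k: (1 - alpha) * sd_l[k] + alpha * sd_g[k] for k in sd_l}
        probe.load_state_dict(mixed)
        results.append((float(alpha), evaluate_fn(probe)))   # e.g., loss or feature ranking at this point
    return results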
Related works Parameter averaging Averaging parameters to get a global model has been extensively studied in the Federated Learning setting under different assumptions: i) convex optimisation under the IID data assumption, in which it is shown that the global model is no better than a single model in the worst case [2,35,34]; ii) non-convex optimisation under IID and non-IID data assumptions, in which the individual models are initialized from the same random initialization to avoid bad local minima before training each independently [19]. Parameter averaging using models with the same initialisation is studied in other contexts as well [32,23,15]. In [32], the authors average the parameters of multiple models, each of which is obtained by fine-tuning a pre-trained model using different hyper-parameters. In this case, the fine-tuning process starts from the same initial model, i.e., the pre-trained model. The parameter averaging in [23,15] is done by averaging the parameters of the same model along the trajectory of stochastic optimisation during training. Moreover, the dropout method was previously shown to approximate model averaging implicitly [27,11]. However, we show that dropout alone is not enough to achieve robustness in Section 3.4. In this work, we study the non-convex setting under the IID assumption, and consider averaging model parameters obtained across multiple training runs to produce the final global model. What differentiates our method from the aforementioned works is that we initialise the models with different random seeds. Although averaging the parameters of models trained with different random seeds is shown to lead to a bad local minimum [19], we show that the global model obtained in this way gives a robust estimate of the feature importance and can be used for feature ranking. We review other related works in Section D of the Appendix. Conclusion In this work, we show that a global model obtained by averaging the parameters of multiple instances of a shallow network trained with different random seeds can be used to estimate global feature importance and that its estimates are not sensitive to sub-optimal initialisation of the individual models. Furthermore, regularization methods can enhance the robustness of parameter averaging. We give insights into how parameter averaging can be useful for feature ranking through extensive experiments using synthetic and real tabular datasets. Our method can also be extended to other modalities such as images, graphs, etc., and we leave this as future work. Finally, the following are some of the shortcomings of our approach: i) the global model is biased towards globally important features and hence the instance-wise feature importance will be biased; ii) we still need to do a hyper-parameter search for feature bagging, noise, etc.; iii) the features in real-world datasets can have more intricate relationships such as multicollinearity, making the ranking of the features difficult, in which case our method can be used for feature selection rather than feature ranking; and iv) our method needs additional compute during training, but this can be eliminated by integrating our method into K-fold cross validation, assuming K ≥ 10.
References
Algorithm (training):
# Train models K times, each time with a different random seed.
# Get the masked input using the square of the mask: X_M = m^2 ⊙ X_b
# Generate subsets of data (i.e., feature bagging).
# Aggregate embeddings to get the joint embedding: h = aggregate(h_i, h_j, h_k, ...)
# Get predictions using the joint embedding: y_pred = classifier(h)
# Compute losses: L_task = CrossEntropy(y_pred, y_b), L_mask = Gini(m), L = L_task + L_mask
# Update models (note that the global mask is not trained): backprop and update the local mask, encoder and classifier.
# If a validation set is provided, run validation using the steps above (no loss computation).
B.1 UCI Adult Income Dataset Train-Validation-Test Split: We split the training set into training and validation sets using an 80-20% split to search for hyper-parameters. Once the hyper-parameters were fixed, we trained the model on the whole training set. Features: The dataset has 14 attributes consisting of 8 categorical and 6 continuous features. We dropped the rows with missing values and encoded categorical features using one-hot encoding. Once we encode the categorical features as one-hot, we end up with 105 features in total. Features are normalized by subtracting the mean and dividing by the standard deviation, both of which are computed using the training set. Class imbalance: It is an imbalanced dataset, with only 25% of the samples being positive. B.2 UCI BlogFeedback Dataset Referred to as Blog in this work, it contains the number of comments in the upcoming 24 hours for blog posts.
Although the dataset can be used for regression, we turn it into a binary classification task to predict whether there is a comment for a post or not. Train-Validation-Test Split: UCI [9] provides one training set and 60 small test sets. We combined all the test sets into one test set. We split the training set into training and validation sets using an 80-20% split to search for hyper-parameters. We trained the final model using all of the training set. Features: It includes 281 variables consisting of 280 integer and real-valued features and 1 target variable indicating the number of comments a blog post received in the next 24 hours relative to the basetime. We converted the target (the last column in the dataset) to a binary variable, in which 0/1 indicates whether the blog post received any comments. Similarly to the Income dataset, we used standard scaling to normalize the features. Class imbalance: ∼36% of the samples are positive in the training set, while it is ∼30% in the test set. B.3 Synthetic datasets from L2X: We run experiments on four synthetic datasets used for binary classification in L2X [5]. For each dataset, we have a 10k training and a 10k test set. In the first three datasets, we generate data X from a 10-dimensional standard Gaussian and assign labels using P(Y = 1|X) = 1/(1 + g(X)) in each dataset, where g(X) is defined in the following way: i) XOR: exp(f_1 * f_2); ii) Orange Skin: exp(Σ_{i=1..4} f_i^2 − 4); and iii) Nonlinear Additive: exp(−100 * sin(2 * f_1) + 2 * |f_2| + f_3 + exp(−f_4)). In the fourth dataset, iv) Switch, we generate f_10 from a mixture of two Gaussians centered at ±3, respectively, with equal probability. If f_10 is from N(3, 1), then we use {f_1, f_2, f_3, f_4} to generate Y from the Orange Skin model. Otherwise, we use {f_5, f_6, f_7, f_8} to generate Y from the Nonlinear Additive model. f_9 is not used when generating labels. B.4 Data License Adult Income and BlogFeedback are under the Open Data Commons Public Domain Dedication and License (PDDL). C.1 Model architectures and hyper-parameters for XTab The classifier has three linear layers, two of which are followed by a leakyReLU and dropout (p = 0.2). For the mask generator, we use two architectures: i) shallow: a linear layer followed by leakyReLU and a final linear layer; ii) deep: five linear layers, each followed by leakyReLU, and a final linear layer. The last layer for both the mask generator and the classifier uses sigmoid activation. The number of hidden units in each layer of the mask generator is the same as the number of features in the input, while we use 1024 units in each hidden layer of the classifier. During training, a learning rate of 0.001 is used for all experiments and we optimize the batch size and the total number of epochs. C.2 Implementation and resources We implemented our work using PyTorch [21]. The AdamW optimizer [18] with betas = (0.9, 0.999) and eps = 1e-07 is used for all of our experiments. We used a compute cluster consisting of Volta GPUs throughout this work. C.3 Details for training L2X, Invase, TabNet, Saliency Maps and Integrated Gradients (IG) L2X: We used the official implementation of L2X 2. We set all of the hyperparameters following the instructions in the L2X paper [5]. For each dataset, we trained a neural network model with three hidden layers. The explainer is a neural network composed of two hidden layers. The variational family is based on three hidden layers. All layers are linear with dimension 200. The number of desired features is set to the number of true features.
We fixed the step size to be 0.001 across experiments. The temperature for the Gumbel-softmax approximation is fixed to be 0.1. Since the model is proposed for feature selection, we used the number of times each feature is selected across the samples in the test set to rank the features. Invase: We followed the hyperparameter selection as instructed in the Invase paper [33], and all the experiments are based on the official Keras implementation 3. We fixed the learning rate and λ as 0.0001 and 0.1, respectively. The actor and critic models are three-layer neural networks with hidden state dimensions of 100 and 200, respectively. L2 regularization is set to 0.001 and the activation function is ReLU. We used the feature selection probability, which is the output of the actor model, to rank the features. TabNet: We used the well-established PyTorch implementation of TabNet 4. We set the hyperparameters as N_a = N_b = 8, λ_sparse = 0.001, B = 1024, B_v = 128, γ = 0.3, and learning rate = 0.02. For all experiments, we used sparsemax as the masking function and OneCycleLR as the learning rate scheduler. The other parameters are set to the same as the default choices. Saliency Maps and Integrated Gradients (IG): For Saliency Maps [26] and IG [29], we used the same architecture as XTab and trained the models using SGD with a learning rate of 0.01. For Saliency Maps, we considered the absolute value of each sample's gradient for ranking. For IG, we used the Captum PyTorch library 5. D More on Related works Explainability The literature on explainability and model interpretation is extensive, and we refer the reader to the survey papers [17,20,30] for a more complete review. In this work, we compare our method to the commonly used methods (Random Forest [3], Gradient Boosting Classifier [10,22]), to those based on gradients and/or activations (Saliency Maps [26] and Integrated Gradients (IG) [29]), to the ones that rely on learnable masks (TabNet [1], Invase [33]), and to some of the recently published feature selection methods (L2X [5], Invase [33]). What distinguishes our work from the aforementioned works is that we focus on the sensitivity of the feature rankings to model initialisation in neural networks. Our goal is to achieve the robustness of tree-based methods such as Random Forest [3] in the neural network setting. In this regard, we compare our results to neural network-based methods both in the main paper and in the Appendix. Explainability in Federated Learning There is some recent work at the intersection of explainable AI (XAI) and Federated Learning (FL), such as the application of Gradient-weighted Class Activation Mapping (Grad-CAM) [25] to explain the classification results in an electrocardiography (ECG) monitoring healthcare system [24]. However, it still remains an open problem. Lastly, although our method is not proposed for the Federated Learning setting, we believe that it can still be used in this area. E Additional results for SynRank dataset E.1 The results for instance-wise feature importance Figure A1: SynRank dataset: Comparing the ground-truth top feature to the predicted top feature from M_g for two separate runs. We see that M_g is biased towards the globally important features, and fails to rank f_2 as the most important feature for the 20% of the samples in the test set for which we generated labels using P(Y = 1|X) = 1/(1 + exp(f_2)), where f_2 is sampled from N(0, 1).
Similarly, we can see the bias toward f_6 when estimating the most important feature for the samples for which we generated the labels using f_1, although it is less severe compared to the case of f_2. (Figure caption fragment: g_3 corresponds to the global model obtained by averaging the parameters of the first 3 local masks (l_1, l_2, l_3). Bottom row: the feature importance weights from M_g obtained by averaging the parameters of all local masks.) E.2 Results for shallow network We note that the weights of the features are correlated with the frequency and position of their ranks across all local masks. Specifically, f_1 and f_2 in the L2X XOR dataset keep switching positions between the 1st and 2nd ranks across all ten runs (top row in Figure A6(a)). Therefore, M_g computes their importance weights to be similar, giving a slight edge to f_2 since it is ranked as #1 by six out of ten local masks (the first and third rows in Figure A6(a)). In the L2X Switch dataset, f_10 is used as the switch feature to change whether the label is determined by the features f_1–f_4 or by f_5–f_8, and it is discovered as the most important global feature by M_g (the bottom row in Figure A6(d)). Please note that this is the hardest synthetic dataset, in the sense that the effect of f_10 on the sample labels is not direct; rather, it influences the labels indirectly by deciding which features are used for label generation. This might be the main reason why a commonly used method such as permutation feature importance fails, ranking f_1 as the most important feature in our experiments with the random forest and gradient boosting classifier used together with permutation feature importance (please see the results in Section I.1 of the Appendix). Consistent with the ground truth, M_g also discovers f_9 as an uninformative feature (bottom row in Figure A6(d)). In the L2X Orange dataset, our method correctly discovers the first four features as the most important ones with almost equal weights, while it indicates f_1 and f_4 as the most important features in L2X Nonlinear Additive. H.1 Results for shallow network We run additional experiments on L2X Switch under three conditions: we apply i) no weight regularization to the weights of the mask model, ii) dropout with p = 0.2 for the layers with leaky ReLU activation, and iii) weight-clipping ([−0.2, 0.2]) to limit the magnitude of the weights in each layer. For each of the three cases, we train 20 separate models and compare two different settings. In the first setting, we compute the variation in the feature rankings given by the 20 local mask models (top row in Figure A7). In the second setting, we obtain 100 global models, each of which is obtained by averaging the parameters of 10 local models bootstrapped from the 20 models. We compare the variation in feature rankings given by the global models (second setting; bottom row in Figure A7) to that of the 20 local models (first setting; top row). We observe that: i) regularization methods such as dropout and weight-clipping help improve the variation across the 20 local models, but the improvement is not substantial (e.g., comparing f_9 and f_10 across the three cases in the top row); ii) however, they help improve the robustness of the parameter averaging significantly (e.g., comparing f_9 and f_10 across the three cases in the bottom row).
We also observe that the parameter averaging itself pushes the magnitude of the weights towards a tight range around zero, as shown in Figure A8 (Section H.1 of the Appendix), indicating a potential relationship between robustness and a tighter weight distribution. Overall, the global models are able to discover the important features in the correct order, and the variation in feature ranking across global models is small (almost zero) for the ground-truth important features when we apply weight regularization (please see f_9 and f_10 in the bottom row of Figure A7b and Figure A7c). iii) Weight-clipping works better than dropout in our experiments, but the dropout results could perhaps be improved by a hyper-parameter search on the p variable. (Figure caption fragment: Bottom row: variations in feature rankings in 100 global models, each of which is obtained by averaging the parameters of 10 models bootstrapped from 20 local models for the same three cases. Although the weight regularization helps improve the robustness, we still see variations for feature f_10 when the model is deep (see the last two columns in the bottom row). Also, weight-clipping works better than dropout again.) Figure A12: L2X Switch dataset: At each noise level, we ran our framework 10 times with different sets of random seeds to compare how a) the test accuracy and b) the rankings of the top two features from M_g, i.e., f_10 and f_1, change. Test accuracy improves when Gaussian noise with low variance is added. Compared to some other datasets, the feature ranking is stable with no or low noise. Please note that we show the 95% confidence interval only for feature f_10, for clarity. I More results for synthetic datasets from L2X In this section, we show the robustness of our method, XTab, by listing the global feature importance from M_g using the test set for the L2X [5] datasets across 10 separate runs of our method. Please note that we don't use weight regularization for XTab in any of these experiments. We also list the global feature importance obtained by using permutation importance together with the random forest (RFP) and gradient boosting classifier (GBCP). For the L2X Nonlinear Additive and L2X Switch datasets, we also compare XTab to other approaches such as TabNet [1], Invase [33], L2X [5], Saliency Maps [26], and Integrated Gradients [29]. I.1 Comparing XTab to other methods using L2X Switch dataset Figure A13: a) Comparing the average cross-validation accuracy for 10-fold cross validation (CV) using mask generator architectures with different numbers of hidden layers. The 10-fold CV is repeated ten times with different starting random seeds, i.e., each run corresponds to one 10-fold CV run. 'Linear' means that the mask model has a single linear layer, while '2xLReLU' means two hidden layers with leaky ReLU activation. In all cases, the final layer of the mask is a linear layer with sigmoid activation. b) The same experiment repeated for the 1xLReLU mask generator while increasing the number of hidden units used in the classifier. c) The final test accuracy obtained once we settled on the 1xLReLU mask and a classifier with 1024 units in the hidden layers and re-ran the experiments using our framework, i.e., training models on the full training set multiple times. For comparison, we included the results for other architectural choices for the mask. We first search for the optimum number of layers for the mask generator, keeping the classifier architecture fixed as [1024, 1024, 1024].
The classifier has three linear layers, two of which are followed by a leaky ReLU and dropout (p = 0.2). The last layer uses sigmoid activation. We compare four choices for the mask generator: i) a single linear layer with sigmoid activation; ii) a linear layer followed by leaky ReLU and another linear layer with sigmoid activation (referred to as 1xLReLU); iii) two linear layers, each of which is followed by leaky ReLU, and one linear layer with sigmoid, i.e., 2xLReLU; and iv) 3xLReLU. The number of hidden units in each layer of the mask generator is the same as the number of features in the input (i.e., 105 in the case of the Income dataset). We modified our framework to accommodate K-fold cross validation (CV). We first generated a 10-fold CV dataset from the training set. For each fold, we changed the random seed before initialising and training our models on the training fold. We obtained the validation accuracy using the corresponding validation fold. This is a slight change to our original framework, in which we train models K times on the same training set. We repeated this experiment with 10-fold CV 10 times with different sets of random seeds. As shown in Figure A13a, 1xLReLU gives the best performance for all 10 repeated experiments. Please note that we also ran experiments on 1xLReLU with a wider hidden layer and observed that the standard deviation in validation accuracy increases with a wider mask model (not shown). Thus, we choose the number of hidden units to be the same as the number of input features for all datasets and experiments throughout the paper. We then repeated the first experiment. In this case, we used the mask generator with 1xLReLU, and varied the number of units for the hidden layers of the classifier. With everything else kept the same, over-parameterised classifiers with 1024 and 2048 hidden units give the best performance. We used 1024 for the remainder of our experiments. These choices for the mask generator and the classifier are used for all other datasets and experiments since they work well, as shown and discussed in the main and supplementary sections of the paper. (Figure caption fragment: ...same experiment as in Figure A14, but without noise at the input; removing noise makes the rankings less robust. Second row: same experiment as in Figure A14, i.e., using noisy input, but the mask architecture is deeper (5 hidden layers); parameter averaging breaks down with the deeper mask model architecture (e-h).) J.3 Robustness of parameter averaging (Figure caption fragment: Bottom row: variations in feature rankings in 100 global models, each of which is obtained by averaging the parameters of 10 models bootstrapped from 20 local models for the same three cases.) Please note that the global model in the case of deeper networks results in a feature ranking that is not consistent with the local model estimations, indicating that the parameter averaging does not work as well as it does for the shallow networks. J.4 Variations in global feature importance obtained from M_g J.5 Comparing XTab to other methods for Adult Income dataset Please note that, for categorical features, since it is difficult to compute the importance of a parent category from its one-hot encoded features for the other methods, we compare the rankings by the individual categories (e.g., showing the importance of "single" or "married" instead of the importance of their parent category "marital-status").
Abbreviations in the tables are: mcs: Married-civ-spouse, en: education-num, cg: capital-gain, hpw: hours-per-week, cl: capital-loss, em: Exec-managerial, nm: never-married, oc: own-child, os: other-service, fw: Final-Weight, mx: Mexico, hn: Holand-Netherlands, unm: unmarried, phl: Philippines, tt: Trinadad&Tobago, nif: not-in-family, ts: tech-support, seni: self-emp-not-inc. Figure A20: Blog dataset: Repeating the experiment with the Income dataset (i.e., using a shallow network and noisy input data) in Figure A14 (a-c) for the Blog dataset. We also use weight-clipping ([−0.2, 0.2]). We repeated the experiment that we did for the Income dataset (using a shallow network and noisy input data) in Figure A14 (a-d) for the Blog dataset. Looking at Figure A20 (a-c), features f52 (the number of comments in the last 24 hours before the basetime), f54 (the number of comments in the first 24 hours after the publication of the blog post, but before the basetime), f51 (the total number of comments before the basetime), and f20 (the median of f54) are discovered to be the most important for classifying whether a blog post would receive a comment. K.1 Robustness of parameter averaging (Figure caption fragment: Bottom row: variations in feature rankings in 100 global models, each of which is obtained by averaging the parameters of 10 models bootstrapped from 20 local models for the same three cases.)
2022-08-08T01:15:06.972Z
2022-08-05T00:00:00.000
{ "year": 2022, "sha1": "70d7fa8215cd9e7aa1569b43849d4b140ea4936b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d50060b1d0107684836eb7c5c5f1d638b8c9c676", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
258716675
pes2o/s2orc
v3-fos-license
Spatial Distribution of Gestational Syphilis in Brazil: Socioeconomic and Health Services Inequalities ABSTRACT. We aimed to analyze the spatial distribution of gestational syphilis from 2008 to 2018 in Brazil and identify correlations with socioeconomic and health-care aspects. This ecological study used municipalities of Brazil as the unit of analysis. Data collection took place between June and July 2021. Data were extracted for 2008 to 2018, and information on the epidemic in animals in the country was obtained from data records. The gestational syphilis detection rate was the dependent variable, and the independent variables were the Municipal Human Development Index, the proportion of doctors per inhabitant in primary health care (PHC), and the percentage of PHC coverage. The data went through an aggregation process in 482 immediate regions of urban articulation. The global Moran’s I index and the local spatial correlation indicator detected territorial clusters using GeoDa software. The gestational syphilis detection rate was distributed unevenly in the immediate regions of urban articulation between 2008 and 2018, and presented a negative spatial correlation with the Municipal Human Development Index (Moran’s I = −0.243, P ≤ 0.05), the percentage of PHC coverage (Moran’s I = −0.163, P ≤ 0.05), and the proportion of doctors per inhabitants in PHC (Moran’s I = −0.164, P ≤ 0.05). Socioeconomic inequalities, mainly related to the availability of human resources and access to health services, are correlated with the spatial distribution of gestational syphilis in Brazil. Investments in social policies and strengthening of PHC are essential for controlling gestational syphilis. INTRODUCTION The number of cases of syphilis increased in recent years and has become a global public health problem even with diagnosis and treatment protocols. Syphilis is transmitted sexually and vertically, affects the health and lives of many people worldwide, and impacts reproductive and child health directly. It may cause abortion, stillbirth, premature birth, neonatal death, and early or late congenital manifestations. 1 Studies estimate that more than 11 million new cases of syphilis occur worldwide annually, with high incidence rates in Latin America, Africa, and Asia. 2 In Brazil, syphilis reemerged and was declared an epidemic in 2016. 3 In 2020, 61,441 cases of gestational syphilis (GS) (detection rate, 21.6/1,000 live births), 22,065 cases of congenital syphilis (incidence rate, 7.7/1,000 live births), and 186 deaths were reported to the Notifiable Diseases Information System (SINAN). 4 In 2014, the Pan American Health Organization created a committee to validate the vertical transmission of syphilis and HIV. For this, the committee recommends that 95% of pregnant women have access to at least one prenatal consultation and, if necessary, be tested and treated for syphilis. The committee also certificates countries that reach this goal; 11 have already been certified. 5 Disease control requires knowledge regarding its territorial distribution and dispersion to evaluate, plan, and support clinical decision making, health management, and policies. 6 Although some studies have analyzed the correlations between GS, socioeconomic factors, and health services in Brazil, the analysis of spatial distribution and factors correlated with GS throughout the country are scarce and restricted to states or cities. 
In this context, the techniques developed in spatial analysis are tools for identifying areas of greater epidemiological pressure and associating the studied phenomenon with social and economic factors as well. 7 When studying health care, it is important to consider the location and characteristics of where people seek their care. This is because the place where an individual lives or works should be considered a potential determinant of the disease. Thus, epidemiological research has been developed that involves geocoding, distance estimation, residential mobility, linking of records and data integration, spatial grouping and space-time, small-area estimation, and Bayesian applications for disease mapping. Linked to spatial analysis, geographic information systems have applicability in disease mapping, rate smoothing, cluster or hot spot analysis, and spatial modeling. 8 The analysis of GS focusing on geographic location and establishing relationships with external factors may also reveal poorly explored results that may help control the disease in specific locations. Our study is relevant mainly because of spatial analysis, which identifies the spatial distribution of GS and reveals high-risk clusters of the disease using parameters related to socioeconomic conditions, human resources, and access to health services. Countries with contexts similar to Brazil may also benefit from our study because the results may improve health services and help reduce GS. Our study aimed to analyze the spatial distribution of GS between 2008 and 2018 in Brazil, and to identify correlations with socioeconomic and health-care aspects. MATERIALS AND METHODS For this ecological study, which uses secondary data from public domains, we selected immediate regions of urban articulation (IRUAs), 9 proposed by the Brazilian Institute of Geography and Statistics (IBGE), as the unit of analysis. Brazil has 5,565 cities and 27 states (including the Federal District), according to the 2010 census. The country is divided into five macro-regions: North, Northeast, Midwest, South, and Southeast. The IBGE adopts models of regional division, making the urban structure a fundamental element of space organization. This regionalization process is constructed from the definition of criteria that distinguish the regions of urban articulation (RUAs), using as references the Brazilian urban network, the hierarchy of its centers, the influence of urban areas, information from public and private administrations, and consumption of goods and services. They make the urban network compatible with zonal features, contiguous and without overlapping, commanded by a city that polarizes an area of influence of its own. It identifies regions where cities articulate in three scales of RUAs: expanded regions, intermediate regions and immediate regions. 9 The IRUAs largely reflect the area lived by the population and its daily commute to supply and search for everyday goods and services. In this urban-regional division model, composed of 482 territories, the region is contiguous and each municipality belongs to a single territorial unit, with boundaries that are not restricted to state borders. Each region has a core municipality that exerts influence in macroregional terms over the other municipalities that comprise it through the supply of highly complex goods and services. Our sample comprised reported cases of GS by city of residence from January 1, 2008 to December 31, 2018. 
Data were obtained from the following public databases: the Department of Informatics of the Unified Health System, SINAN, Live Birth Information System, National Registry of Health Establishments, the United Nations Development Program, IBGE, and Primary Care e-Manager. The Primary Care e-Manager is the database that gives access to several information systems of primary health care (PHC) in Brazil. It was used to extract data on PHC coverage in Brazil. The IBGE estimated that 208,494,900 inhabitants occupied an area of 8,510,820.623 km² in 2018. 10 Data were collected from June to July 2021. The GS detection rate was the dependent variable, and the independent variables were the Municipal Human Development Index (MHDI), the proportion of doctors per inhabitant in PHC, and the percentage of PHC coverage. Data from outcome variables, human resources, infrastructures, and PHC coverage were collected from 2008 to 2018, and data related to social and economic conditions were collected exclusively from 2010. Indicators were created by weighting data to minimize possible discrepancies resulting from different population sizes (Supplemental Table 1). An exploratory analysis of spatial data was performed using thematic maps in the GeoDa 1.14 (University of Chicago, USA) program to observe the distribution of variables. Subsequently, the global Moran's I index and the local spatial association indicator (LISA) 11 were applied. The global Moran's I coefficient verified the spatial autocorrelation between variables, and results ranged from −1 (inverse correlation) to +1 (direct correlation); a value of zero represented no correlation. The global spatial autocorrelation and local correlation coefficients were considered significant at P ≤ 0.05. 12 The LISA values for the 482 IRUAs were submitted to the Moran scatterplot, adopting a statistical significance of 95% (P ≤ 0.05). Regions of urban articulation were divided into five classes: high-high (the unit and its neighbors had values greater than the average of the set), high-low (high value for the unit and low average values for its neighbors), low-high (the unit had a low value for a variable, whereas its neighbors had values greater than the average of the set), low-low (values of the unit and its neighbors were below the average of the set), and not significant (the unit has no defined relationships with its neighbors). 12,13 The LISA bivariate analysis was performed using GeoDa 1.14 software to assess the spatial correlation between the dependent variable (GS detection rate) and each independent variable (MHDI, the proportion of physicians per inhabitant in PHC, and the percentage of PHC coverage). Through this analysis, the local Moran index, maps, and correlation scatterplot were generated. Moran's I statistical significance was verified by a test with 99 random permutations.
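The analyses above were carried out in GeoDa; purely as an illustration of the same workflow, the sketch below shows how a global Moran's I, the local (LISA) clusters, and a bivariate Moran statistic could be computed with the open-source PySAL stack. The file name and column names (irua_482.shp, gs_detection_rate, mhdi) are hypothetical placeholders, not files or fields from the study.

```python
import geopandas as gpd
import libpysal
from esda.moran import Moran, Moran_Local, Moran_BV

# Hypothetical shapefile with one polygon per immediate region and indicator columns
gdf = gpd.read_file("irua_482.shp")
w = libpysal.weights.Queen.from_dataframe(gdf)  # contiguity-based spatial weights
w.transform = "r"                               # row-standardized weights

gs_rate = gdf["gs_detection_rate"].values       # dependent variable (placeholder column)
mhdi = gdf["mhdi"].values                       # one independent variable (placeholder column)

# Global spatial autocorrelation of the GS detection rate, permutation-based inference
mi = Moran(gs_rate, w, permutations=999)
print(f"Global Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.3f}")

# Local indicators of spatial association (LISA); quadrant codes: 1=HH, 2=LH, 3=LL, 4=HL
lisa = Moran_Local(gs_rate, w, permutations=999)
significant = lisa.p_sim <= 0.05
cluster_type = lisa.q

# Bivariate Moran statistic between the GS detection rate and the MHDI
mi_bv = Moran_BV(gs_rate, mhdi, w, permutations=99)
print(f"Bivariate Moran's I (GS rate vs. MHDI) = {mi_bv.I:.3f}, pseudo p = {mi_bv.p_sim:.3f}")
```

Row-standardizing the weights and relying on random permutations for pseudo p-values mirrors the permutation-based significance testing described above.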
In this bivariate spatial correlation, five types of spatial clusters are observed: nonsignificant (territories that did not form clusters because the differences were not significant), high-high (areas formed by IRUAs with high frequencies of the dependent variable and high frequencies of the independent variables), low-low (areas formed by IRUAs with low frequencies of the dependent variable and low frequencies of the independent variables), high-low (areas formed by IRUAs with high frequencies of the dependent variable and low frequencies of the independent variables), and low-high (areas formed by IRUAs with low frequencies of the dependent variable and high frequencies of the independent variables). 11
RESULTS
Figure 3 shows the spatial distribution of the GS detection rate and the MHDI for the 482 IRUAs from 2008 to 2018. According to the LISA method, GS correlated inversely with MHDI (Moran's I = −0.243, P ≤ 0.05). The presence of clusters in the North highlights the low human development and high number of cases of GS. Although the South, Southeast, and Midwest presented a high human development, cases of GS remained high. Correlations between the spatial pattern of the GS detection rate and the percentage of PHC coverage in IRUAs (Moran's I = −0.163, P ≤ 0.05) are presented in Figure 4. Clusters with a high GS detection rate and low PHC coverage were observed in the North, and in the states of São Paulo and Rio Grande do Sul. The spatial patterns of correlations between the GS detection rate and the proportion of physicians in PHC per inhabitant in the IRUAs (Moran's I = −0.164, P ≤ 0.05) are shown in Figure 5. Corroborating the clusters displayed in Figure 4, Figure 5 shows clusters with a high rate of detection of GS and a low proportion of physicians in PHC observed in the North, and in the states of São Paulo and Rio Grande do Sul. The cluster formed in Figure 5, covering the Midwest region, also deserves attention, with a high detection rate for gestational syphilis and a low proportion of PHC physicians per population. It is important to emphasize that clusters were formed in the Northeast, in the state of Minas Gerais and on the border between the state of Amazonas and Peru, showing areas with low rates of GS and greater proportions of physicians per inhabitant in PHC.
DISCUSSION
Our study used spatial analysis to measure the magnitude of GS as a public health problem in several Brazilian regions. Syphilis formed clusters in different regions and was correlated with social variables and PHC coverage, demonstrating areas of high vulnerability. The Midwest presented more high-high clusters of GS. A study 14 of syphilis in this region indicated that the population from cities with international borders with Paraguay and Bolivia was exposed to a greater risk of contracting sexually transmitted infections (STIs), which may have contributed to the high transmission rate of syphilis. Illegal drug trafficking in that area may also have favored the vulnerability of the population. Another study 15 observed an increased GS detection rate in a state in the Midwest with great economic development, corroborating the results of our study. The region studied by the authors had an important industrial food hub and extraction of minerals, which may have contributed to migration and a floating population, and facilitated STI transmission.
In this sense, we suggest that the immigration process may have some influence on the syphilis detection rate, as well as on its transmissibility dynamics. Researchers 16 propose that strategies adopted to reduce the burden of STIs should take into account the relatively high residential mobility of at-risk populations to reduce the spread of infections to new areas. The presence of major highways in the region may also increase STIs and form high-high clusters of GS. Although the Cuiabá-Santarém highway (BR-163) that crosses the states of Mato Grosso and Mato Grosso do Sul in the Midwest is important for the economy, it facilitates the migratory process, use of drugs, fluctuation of the male population, and sex trade, favoring STI transmission. 17 In this region, we observed a spatial pattern of high rates of GS and a low proportion of doctors per inhabitant in PHC, indicating that human resources in PHC and strategies for reorganizing the PHC are essential for syphilis control. Regions of urban articulation in the South also showed high-high clusters of the GS detection rate, and a spatial pattern of high rates of syphilis and high MHDIs. This may be explained by better access to health services in populations of more developed regions (e.g., South and Southeast), increasing the number of notifications. 18 de Oliveira et al. 15 reinforced that syphilis was not limited to populations with low social conditions because it may also spread in areas with easy access to education. Therefore, actions to control syphilis need population-wide policies for different contexts throughout the country. The high syphilis detection rate and low percentage of PHC coverage also indicate the need to strengthen PHC in the South. Studies [19][20][21] developed in this region highlighted obstacles in prenatal care that may have influenced the increased GS detection rates. Furthermore, difficulties managing the treatment of pregnant women and sexual partnerships were identified, reinforcing the need for improving prenatal care. Clusters of GS in RUAs of the Southeast were contradictory: high rates of syphilis and high MHDIs, and low rates of syphilis and high MHDIs. The former correlation may have occurred because regions with high MHDIs offer great access to diagnostic methods and present a better notification system. 18,22 In the latter situation, regions with high MHDIs present great service conditions, which may improve access to treatments. 18 Regions of urban articulation located in the Southeast that formed clusters of high GS detection rates and a low proportion of doctors per inhabitant in PHC may indicate obstacles in the PHC. Furthermore, clusters in RUAs in the Southeast presenting high GS detection rates and a low percentage of PHC coverage reflect a weakness in PHC. Access to PHC may also be difficult and may increase the rate of GS in the Southeast. The state of São Paulo, located in the Southeast region, concentrates the highest proportion of doctors per inhabitant (including all medical specialties), valuing the individual clinic and strengthening secondary and tertiary care in the health care network. 23 A low-low spatial pattern of the GS detection rate was found in the Northeast. The most plausible explanation for these findings is the greater PHC coverage, because this region has a high social vulnerability.
Our results corroborate studies that showed a negative correlation between GS detection rate and Family Health Strategy coverage, 24 reinforcing the importance of PHC for treatment, health promotion, and disease prevention. The maps with geographic distributions of clusters indicated that this region had a high proportion of doctors per inhabitant in PHC and low GS detection rates. Data also suggest the importance of correlations between the availability of doctors in PHC and actions of detection, treatment, and prevention of syphilis. Primary health care is important because it is the entry and communication center of the health-care network. Moreover, it can be resolutive and offer services and universal access in the Brazilian Unified Health System based on integral care and inter- and multidisciplinary work. 25,26 Despite the low rates observed in the Northeast, GS is still far from being controlled or reaching the rates recommended by the WHO (i.e., < 0.5 cases/1,000 live births). We also highlight the possibility of underreporting problems in the surveillance systems of the area. 27 Machado et al. 28 indicated social vulnerability and difficulties managing resources in the health sector of the Northeast, which may hamper the control of syphilis. Furthermore, social and economic factors in the region were correlated with cases of GS, according to de Macêdo et al. 29 and de Conceição et al. 30 Syphilis detection rates were greater in RUAs of the North than in the Northeast; however, no statistical significance was observed compared with national data. These data led us to suspect underreporting, because this region faces social vulnerability and limited PHC coverage. Freitas et al. 22 found that women from Brazilian cities with low MHDIs were less likely to have access to tests for diagnosing syphilis. Spatial clusters formed in RUAs of the North by correlating the GS detection rate and MHDIs highlight the relationships between GS and social vulnerability. This index comprises dimensions of longevity, education, and income, and was presented coherently in the territory because several regions had high syphilis detection rates and low MHDIs. Our study also showed spatial clusters formed in RUAs of the North, indicating high GS detection rates and a low proportion of doctors per inhabitant in PHC and percentage of PHC coverage. These findings agree with those of Soares Filho et al., 31 who indicated the Brazilian North as a critical area according to the proportion of Family Health Strategy teams per inhabitant. Another study 32 demonstrated the unavailability of medical appointments and lack of doctors in a city in the North. According to de Oliveira et al., 23 the poor distribution of doctors in the country varies according to region, medical specialty, and level of care; the PHC faces difficulties in keeping professionals in this health care network. In this sense, a government plan for encouraging and maintaining doctors in these areas is needed to strengthen PHC and control the syphilis epidemic. The low proportion of PHC units in the North associated with geographic factors in the region increases difficulties related to access to health services and may contribute to a negative scenario of syphilis. Geographic characteristics of other countries (e.g., Australia) also led to difficult access to health services by the population. 33
It is important to point out that clusters with a high GS detection rate, high PHC coverage, and a high proportion of PHC physicians per population may reflect women's improved access to health services. Thus, the functionality of the health system can culminate in greater opportunities to offer exams for the diagnosis of syphilis, and more testing, which may contribute to the increase in the number of notifications of the condition in certain areas. 34 However, it is possible that, in some areas, underreporting and lack of availability of tests for diagnosing syphilis also occur in health services with high PHC coverage and a high proportion of PHC physicians per population, which are reflected in a low GS detection rate and flaws in the quality of prenatal care. 35 Our study may present limitations because of the use of secondary data from Brazilian information systems and possible underreporting. However, no major changes were observed in the context of the country. The use of data up to 2018 was a result of availability in the SINAN during data collection. Future studies using spatial analysis and focusing on regions of epidemiological interest (e.g., high-high detection rate clusters) are recommended.
CONCLUSION
The spatial analysis of GS in Brazil showed high-high clusters of GS detection rates in the South and Midwest. On the other hand, the central area of the Northeast showed low-low clusters. The correlation between GS detection rate and MHDI indicated the influence of socioeconomic aspects in several areas of the country. Clusters of health-care variables showed a negative correlation between rates of GS in certain areas and PHC coverage, proportion of doctors, and basic health units per inhabitant in PHC. Specific regions need to reorganize the health-care network, improve PHC coverage, encourage incentive policies, and maintain doctors in PHC. Our study may contribute to establishing and reorienting public policies to control GS in different contexts of the Brazilian territory. Implementing actions, ensuring equity and comprehensiveness in health, and granting access to adequate detection, treatment, and monitoring of GS may help improve its control in Brazil.
Note: Supplemental table appears at www.ajtmh.org.
Acknowledgments: We thank the Federal University of Rio Grande do Norte. We also thank Probatus Academic Services for providing scientific language translation, revision, and editing.
Financial support: The publication fee for the article was supported by Programa Sífilis Não, a project by the Ministry of Health of Brazil in partnership with the Federal University of Rio Grande do Norte and the Coordination for the Improvement of Higher Education Personnel-Brazil (CAPES) (funding code 001). Ours was an ecological study performed with secondary data from public domains; thus, approval by the research ethics committee was not required.
2023-05-17T06:17:43.741Z
2023-05-15T00:00:00.000
{ "year": 2023, "sha1": "b31ddafa813d93fbaa5773c576719adef1071904", "oa_license": "CCBY", "oa_url": "https://www.ajtmh.org/downloadpdf/journals/tpmd/aop/article-10.4269-ajtmh.22-0449/article-10.4269-ajtmh.22-0449.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5fecfe550426df5def5931978bd206e46a730afe", "s2fieldsofstudy": [ "Medicine", "Sociology", "Geography" ], "extfieldsofstudy": [ "Medicine" ] }
237934252
pes2o/s2orc
v3-fos-license
Oxidative Desulfurization Catalyzed by Phosphotungstic Acid Supported on Hierarchical Porous Carbons A hierarchical porous carbon material (HPC) with an ultra-high specific surface area was synthesized with sisal fiber (SF) as a precursor, and then H3PW12O40·24H2O (HPW) was immobilized on the SF-HPC support by a simple impregnation method. A series of characterization techniques confirmed that the obtained SF-HPC had a high surface area of 3152.46 m² g⁻¹ with micropores and macropores. HPW was well dispersed on the surface of the SF-HPC support, which reduced the loading of HPW to as low as 5%. HPW/SF-HPC showed excellent catalytic performance for oxidative desulfurization, and the desulfurization rate reached almost 100% under the optimal reaction conditions. The desulfurization rate of HPW/SF-HPC could be maintained at above 94% after four recycles.
Introduction
Reduction in the sulfur content in oil products is the primary solution to decrease the pollution caused by oil burning [1]. Among the developed desulfurization technologies, oxidative desulfurization is widely studied as the most promising process because of its mild reaction conditions and good desulfurization effect [2]. In this approach, heteropoly acids such as phosphotungstate decompose in the presence of excess hydrogen peroxide into a peroxide metal complex W(O2)n, providing an active site for oxidative desulfurization [3,4]. For catalyst separation and recycling, it is desirable to load the heteropoly acids onto support materials [5]. In this context, Yang et al. [6] prepared a m/M-HPW/SiO2-20 catalyst by loading 12-tungstophosphoric acid (HPW) on multi-stage porous silica. Under optimal conditions, the removal rate of dibenzothiophene (DBT) was found to reach 100%. However, the specific surface area of the m/M-HPW/SiO2-20 catalyst was only 346 m² g⁻¹, and the aperture was less than 10 nm, making the HPW loading as high as 20%. Meanwhile, Yue et al. [7] studied the loading of HPW on hierarchical ordered silica with the aim of adjusting the pore structure of the catalyst, which not only maintained a high specific surface area but also provided structures with different pore sizes and improved the performance of the catalyst. Under optimum conditions, 95.1% sulfur was removed after eight cycles. Huang et al. [8] used sodium-dodecyl-benzene-sulfonate-modified layered double hydroxide as a support to load HPW for oxidative desulfurization. Under optimal conditions, the sulfur removal rate was close to 100%, and after 15 cycles the removal rate was reduced to 95.73%. Although the specific surface area was only 167 m² g⁻¹, the pore diameter of the catalyst was 12.99 nm, and this large pore size increased the reaction mass transfer rate, thus allowing the HPW load to be reduced to 10%. Pham et al. [9] prepared a PW-NH3+-SBA-15 catalyst. Under the optimal conditions, the conversion rates of DBT and BT reached 100% and 99.9%, respectively. The catalyst could be reused four times without a significant decrease in catalytic activity. The change in catalyst structure confirmed a strong interaction between the SBA-15 support and the HPW catalytically active site. Gao et al. [10] incorporated HPW into TiO2 pellets to improve catalytic activity and recyclability. After seven runs, it had good conversion and selectivity. However, these excellent catalysts often required high loading [11][12][13]. The content of HPW will affect the quality of the oil, and the phosphorus will cause environmental problems.
Therefore, it is very important to reduce the load of HPW. The large content of HPW required for the reaction is mainly due to the fact that HPW is prone to agglomeration [14]. Therefore, to reduce the loading of HPW, it is necessary to prevent agglomeration and increase its dispersion on the surface of the support, thereby providing sufficient active sites. To ensure a high dispersibility, the support must have a high specific surface area. Although the presence of micropores can increase the specific surface area, it also weakens the mass transfer effect, which is disadvantageous for the oxidative desulfurization reaction. Therefore, to reduce the HPW load on the support, a material with both a large surface area and a large pore structure is desirable. Hierarchical porous carbons (HPCs) have not only high specific surface areas but also large pore structures [15]. In many applications, specific surface area is the most important structural parameter. Li et al. [16] adjusted the porosity of carbon materials by controlling the template removal strategy. The method simplified the preparation process and produced honeycomb carbon with a macroporous/mesoporous/microporous-scaled pore structure with a specific surface area of up to 1011 m² g⁻¹. Supercapacitors assembled with the porous carbon as electrodes exhibited large specific capacitances and provided good cycle stability. Liu et al. [17] combined electrospinning with in situ polymerization and carbonization to produce a nitrogen-doped graded porous carbon fiber material, which can remove organic dyes efficiently. That work also noted that, in multi-stage pore materials, macropores can provide efficient mass transfer [18,19], while micropores/mesopores provide a large surface area for the dispersion of HPW [20][21][22]. In this study, we describe the use of an HPC as a support for an HPW catalyst. The high specific surface area of the HPC improved the dispersion of HPW and avoided its agglomeration, while the macroporous structure increased the reaction rate and thus allowed the HPW loading to be reduced. In addition, the catalytic performance of the catalyst in the oxidative desulfurization reaction was investigated, the reaction conditions were optimized, and the stability of the catalyst was explored.
Synthesis of HPW/SF-HPC
The hierarchical porous materials were prepared following procedures found in the literature [19]. The sisal fiber was cut into small pieces before use and pre-carbonized at 550 °C for 3 h in a N2 atmosphere. The obtained sample was labeled as SF. SF and KOH were then mixed at a mass ratio of 1:5, adding deionized water to dissolve the KOH pellets. After mixing evenly, the mixture was allowed to stand for 1 h to ensure complete immersion of SF in the KOH solution. Then, the mixture was dried for 6 h at 100 °C, and carbonized at 900 °C for 3 h in a N2 atmosphere. The resulting sample was washed with dilute hydrochloric acid and deionized water to remove unreacted KOH. The wet sample was dried in an oven at 80 °C overnight, and the solid powder was collected and labeled as SF-HPC. The catalysts were prepared as follows: after dissolving 0.05 g of HPW in 10 mL of deionized water, 0.95 g of SF-HPC was added and the mixture was stirred at room temperature for 24 h. A solid sample, which was marked as 5%HPW/SF-HPC, was obtained. The sample was collected by filtration and dried at 80 °C. In a similar manner, the catalysts 1%, 10%, and 20%HPW/SF-HPC were prepared by varying the amount of HPW.
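As a quick check on the impregnation recipe above, the short sketch below computes the nominal HPW loading as a weight percentage of the total solid. Only the 5% recipe (0.05 g HPW plus 0.95 g SF-HPC) is given explicitly in the text, so the other mass pairs are illustrative assumptions for the 1%, 10%, and 20% analogues.

```python
def nominal_loading_wt_percent(m_hpw_g: float, m_support_g: float) -> float:
    """Nominal HPW loading as a weight percentage of the total catalyst mass."""
    return 100.0 * m_hpw_g / (m_hpw_g + m_support_g)

# First pair is the recipe given in the text; the others are assumed analogues.
for m_hpw, m_support in [(0.01, 0.99), (0.05, 0.95), (0.10, 0.90), (0.20, 0.80)]:
    print(f"{m_hpw:.2f} g HPW + {m_support:.2f} g SF-HPC -> "
          f"{nominal_loading_wt_percent(m_hpw, m_support):.0f} wt% HPW")
```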
Characterization
Transmission electron microscopy (TEM) experiments were conducted on a Tecnai G2 F20 S-TWIN (Hillsboro, OR, USA) (200 kV) instrument. The structural parameters of the material were obtained on a Micromeritics Model ASAP 2460 (Atlanta, GA, USA) instrument by nitrogen adsorption and desorption experiments at 77 K. Powder X-ray diffraction (XRD) data were collected on a Bruker (Karlsruhe, Baden-Württemberg, Germany) Advance D8 X-ray diffractometer operating at 40 mA and 40 kV with Cu-Kα irradiation (λ = 0.15406 nm). The surface morphology and structure of the materials were observed by scanning electron microscopy on a Zeiss Sigma 300 (Jena, Freistaat Thüringen, Germany). Fourier-transform infrared (FTIR) spectroscopy was performed on a Thermo Fisher Nicolet iS10 spectrometer (Waltham, MA, USA). Inductively coupled plasma (ICP) experiments were performed on an Agilent ICP-OES 730 series instrument (Santa Clara, CA, USA). The macroporous structure of the samples was characterized using a Micromeritics AutoPore IV 9510 mercury porosimeter (Atlanta, GA, USA).
Catalytic Performance Testing
DBT as a sulfur source was dissolved in n-octane to prepare simulated oil with a sulfur content of 100 ppmw, and the desulfurization performance of the catalyst was tested using the resulting solution as follows: 10 mL of simulated oil was added to a 100 mL three-necked flask, and a quantitative amount of catalyst was added and dispersed in the simulated oil by ultrasonication. The three-necked flask was placed in a constant-temperature water bath, a certain amount of 30% hydrogen peroxide solution was added according to the desired O/S ratio, and stirring and timing were started. When the desired reaction time was reached, the three-necked flask was taken out of the bath, and the catalyst was allowed to sink to the bottom of the flask. The reaction solution was poured into a 25 mL beaker, and 10 mL of methanol was added for extraction. After the extraction, a part of the supernatant was taken out to test the sulfur content with a Coulomb analyzer (WK-2D). The catalytic performance was evaluated according to the following equation: sulfur removal (%) = (C0 − Ct)/C0 × 100, where C0 represents the sulfur concentration in the prepared simulated oil and Ct refers to the sulfur content in the oil after reaction time t, measured by the Coulomb analyzer.
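A minimal helper implementing the removal expression above; the example concentrations are hypothetical readings for illustration, not measured values from the study.

```python
def sulfur_removal_percent(c0_ppmw: float, ct_ppmw: float) -> float:
    """Sulfur removal (%) = (C0 - Ct) / C0 * 100, with concentrations in ppmw."""
    return (c0_ppmw - ct_ppmw) / c0_ppmw * 100.0

# Hypothetical Coulomb-analyzer readings: 100 ppmw before reaction, 2 ppmw after.
print(sulfur_removal_percent(100.0, 2.0))  # -> 98.0 (% of sulfur removed)
```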
Catalyst Characterization
The scanning electron microscopy images depict the morphology of the SF-HPC sample in Figure 1. As can be seen, the SF-HPC materials had obvious macroporous structures, which provided the basis for preparing the support with a high specific surface area. The average thickness of the honeycomb carbon wall was about 3 µm, which enabled the formation of small holes in the walls. During the KOH treatment, the carbon wall was partially punctured, and the adjacent channel-like macropores communicated with each other. Nevertheless, the intermittent fracture of these carbon substrates did not disrupt the continuity of the macroporous skeleton. In fact, the interconnectivity of the macroporous skeleton is known to improve the mass transfer performance of the material [23,24]. The magnified image clearly shows that the activated SF-HPC generated a large number of smaller pores on the walls of the macropores [15], which was the main reason for the increase in specific surface area. This was consistent with the results of the nitrogen adsorption and desorption tests discussed below. The XRD patterns of the support and the catalysts are shown in Figure 3A. It can be seen that SF-HPC had a distinct diffraction peak at 20-30°, corresponding to amorphous carbon. The pattern of 1%HPW/SF-HPC was the same as that of the support SF-HPC, because the low content of HPW was uniformly distributed on the SF-HPC surface, which could not be detected by XRD. In contrast, the characteristic peaks of HPW appear in the patterns of 5% and 10%HPW/SF-HPC, being more obvious in the latter catalyst due to the increased HPW content.
Figure 3B shows the infrared spectra of SF-HPC and the HPW/SF-HPC catalysts. SF-HPC gave rise to absorption peaks at 784.41, 808.87, 872.32, 1383.88, and 2347.29 cm⁻¹. Meanwhile, the absorption peaks of HPW became more obvious with increasing catalyst loading. In the spectrum of the 10%HPW/SF-HPC catalyst, the peaks that appeared at 795.43, 892.18, 989.50, and 1066.52 cm⁻¹ could be attributed to the vibrations of the W-Oc-W, W-Ob-W, W-Od, and P-Oa bonds, respectively. Compared to the corresponding characteristic absorption peaks in pure HPW (799.58, 892.26, 982.44, and 1080.24 cm⁻¹) [25][26][27], the stretching vibrations of W-Oc-W, W-Ob-W and P-Oa were blue-shifted and that of W-Od was red-shifted, which confirmed the interaction between HPW and SF-HPC. In order to further understand the structural properties of the catalyst and the support, both were analyzed by nitrogen adsorption and desorption analysis to obtain physical properties such as specific surface area, pore volume, and pore size, as shown in Table 1. The specific surface area of SF-HPC in Table 1 was as high as 3152 m² g⁻¹, which was larger than the specific surface area of HPC reported in the previous literature [20,21]. As shown in Figure 4A, all N2 adsorption isotherms were close to type I, indicating the presence of micropores. In addition, there was a very small hysteresis loop in the P/P0 range of 0.45-0.5, which indicated the presence of a mesoporous structure; this could also be seen from the pore size distribution in Figure 4B. The macroporous structure of the sample was characterized by a mercury intrusion meter. The results showed that there were macroporous structures in the SF-HPC sample. These were consistent with those observed by SEM, which further confirmed that the samples were hierarchical porous materials. After loading HPW, the specific surface area decreased and the pore diameter did not obviously change. It could also be seen from the pore size distribution that the pore size distribution did not greatly change after the loading of HPW. To clearly observe the distribution of the active component, HPW, on the surface of the support SF-HPC, we performed an EDS mapping test with the 5%HPW/SF-HPC catalyst. As shown in Figure 5, we could clearly observe phosphorus and tungsten, indicating that HPW had good dispersibility on the surface of the support SF-HPC.
This result confirmed the uniform distribution of HPW on the support SF-HPC and the lack of HPW agglomeration, which was consistent with the XRD results.
Catalytic Performance
The desulfurization activity of the catalysts was evaluated following the procedure described in the experimental section. Firstly, we investigated the desulfurization effect before and after loading the active component. As shown in Figure 6, the desulfurization effect increased significantly after loading HPW, demonstrating that the active components promoted the desulfurization effect. Furthermore, the desulfurization activity increased with the increase in the loading (Figure 6), reaching 100% with a catalyst loading of 5%. The other catalysts containing 10% and 20% loading could also achieve a desulfurization rate of 100%. Consistent with our assumptions, the high dispersion of HPW allowed the HPW load to be reduced. Therefore, 5% could be envisaged as the optimal loading, and subsequent experiments were carried out with the 5%HPW/SF-HPC catalyst. Compared to previous reports [11,28,29], a great reduction in the catalyst loading was achieved by using the present HPC support. The HPW content was thereby reduced, which was beneficial for decreasing the effect of phosphorus pollution on the environment.
Other beneficial features of the present process were the reaction temperature, since 100% desulfurization could be achieved at room temperature, and the lack of heat consumption, which would save production costs. Other parameters displayed the relationship between the desulfurization rate and the reaction time, as shown in Figure 7A. As can be seen, the desulfurization rate increased with time, reaching 100% at 30 min of reaction time. This constituted a considerable shortening of the previously reported desulfurization time, which was beneficial for industrial applications. Figure 7B shows that the sulfur removal rate improves with the increase in the O/S ratio, reaching 100% for O/S = 10. In contrast, when the O/S ratio reached 14, the desulfurization rate decreased slightly, which may be due to the excessive hydrogen peroxide adsorbing water molecules on some active sites on the catalyst surface, thus reducing the adsorption of DBT [8,30]. A significant effect of the amount of catalyst on the desulfurization effect can be seen in Figure 7C. As the amount of catalyst increased, the sulfur removal rate increased considerably, reaching 100% when the amount of catalyst was 0.1 g. In summary, the optimal parameters of the reaction, which were significantly enhanced compared to previous reports [31][32][33][34], were room temperature, 30 min of reaction time, O/S = 10, and 0.1 g of catalyst per 10 mL of oil. To further investigate the performance of the catalyst, we recovered the catalyst and investigated its stability.
After the catalytic oxidation reaction was completed, the catalyst was filtered and dried. The dried solid product was collected for the next catalytic reaction. According to the results shown in Figure 8, the desulfurization rate was still maintained at above 94% after four recycles. The loss of HPW was detected by an ICP test. The result is shown in Table 2. The actual loading of the fresh 5%HPW/SF-HPC catalyst was 4.252%, lower than the theoretical value of 5%. This was due to the fact that some HPW remained in the solution during the loading process. As the number of cycles increased, the content of HPW gradually decreased. After four cycles, the actual loading decreased from 4.252% to 2.87%. HPW was adsorbed on the support surface in the form of [PW12O40]3− and interacted with the positive charge on the support surface [9]. The surface potential of SF-HPC was measured by a zeta potentiometer at 16.36 V/cm, indicating that the positive charge of the SF-HPC was low and the interaction force with [PW12O40]3− was weak, leading to the easy loss of anions, which was consistent with the ICP test and experimental results.
Conclusions
In summary, the use of the ultra-high specific surface area and macroporous structure of the SF-HPC improved the dispersion of the HPW catalyst, avoiding the agglomeration of HPW and thus reducing the required HPW loading. The optimal HPW/SF-HPC catalyst was successfully prepared with an HPW loading of 5%. The catalytic performance was investigated in the oxidative desulfurization reaction. The optimized reaction conditions were room temperature, 30 min reaction time, O/S = 10, and 0.1 g catalyst per 10 mL of oil. Under the optimal conditions, the sulfur removal rate could reach 100%. The use of SF-HPC reduced the content of the HPW catalyst, which not only reduced the cost of the catalyst but also reduced the pollution caused by phosphorus.
2021-09-28T05:13:23.931Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "36a7a283ba66f71621b692a31b408b24ddde988d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/11/9/2369/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "36a7a283ba66f71621b692a31b408b24ddde988d", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
243023206
pes2o/s2orc
v3-fos-license
The Relationship between Positive Emotion and Resilience Among Undergraduate Students Although the numbers of youths entering universities in Malaysia are increasing rapidly, several studies have shown that some students suffer from poor mental health, which could lead them to experience fewer positive emotions and more negative emotions. This study aimed to investigate the relationship between positive emotion and resilience among UPSI undergraduate students and to examine the difference between male and female students in positive emotion and resilience. A cross-sectional survey design was applied to gather the data of 140 participants using both paper-and-pencil and online survey methods. A set of questionnaires comprising demographic information, the Dispositional Positive Emotions Scales (DPES) to measure positive emotions, and the Brief Resilience Scale (BRS) to measure resilience was given. Using Pearson's product-moment correlation analysis, it was found that there was a significant relationship between positive emotion and resilience, r = 0.4. Thus, the null hypothesis was rejected. The correlation coefficient obtained showed that there is a positive correlation between positive emotion and resilience among UPSI students, in which students with more positive emotion are more likely to have a higher level of resilience. Based on the t-test analysis, the difference between male and female students was not significant for either positive emotions or resilience. In conclusion, more positive psychology programs can be conducted to educate students about good mental health and well-being, which will eventually increase their psychological resilience in facing challenges in their university lives.
Introduction
Undergraduate students often experience challenges, as university life is a transition period from the late teens to young adulthood in which they need to learn, adjust, adapt, or cope with changes in their everyday life (Arnett, 2000; Trigueros et al., 2020). Positive emotions are important elements for individuals because positive emotion could help people to become more creative, resilient, knowledgeable, and healthy individuals who engage in proactive social relationships with others (Fredrickson, 2004; Diener et al., 2020). Fredrickson (2001) defined emotions as "multicomponent response tendencies that unfold over relatively short time spans" that are grouped into emotion families. Emotions are commonly divided into positive and negative emotions, and individuals can experience both. Positive emotions are good feelings that signal human flourishing, and experiencing positive emotions could help individuals to bounce back from negative experiences (Fredrickson, 2012). Among the ten most common positive emotions are joy, gratefulness, serenity, interest, hope, satisfaction, amusement, inspiration, awe, and affection (Fredrickson, 2009). Previous studies have shown that positive emotions are associated with various beneficial outcomes such as a higher level of work satisfaction (Losada & Heaphy, 2004), higher life satisfaction (Cohn et al., 2009), and academic motivation (Low, King, & Caleon, 2016). A review of positive emotions by Lyubomirsky et al. (2005) mentioned that positive emotions not only bring satisfaction and success at work but also good relationships with others and better physical health maintenance. Resilience plays an important role for young adults and adolescents in maintaining good health and well-being.
Walker et al (2017) suggested that the psychological strength to face hurdles in life an individual possesses is the root of the resilience trait. Ahmad, Khairani, and Aman (2018) conducted a cross-sectional study among university students from the School of Health Sciences programme (n= 94) and School of Electrical Engineering programme (n= 254). The findings showed that Health Sciences students had higher level of resilience compared to electrical engineering students. Possibly this is due to the challenging clinical learning curriculum that requires students to prepare themselves to become competent and building up resilient throughout their studies. Arewasikporn, Sturgeon and Zautra (2019) investigated the effect of shared enjoyment and positive emotions on resilience thinking. Based on the data collected on 191 middle-aged adults (mean age= 53.51, the researchers concluded that individuals experiencing positive events and had a positive effect tend to have a resilient mindset and greater well-being. Meanwhile Seaton, Bottorff, Jones-Bricker, and Lamont (2018) found that emotional outlook and positive emotion has a positive correlation with ego-resilience in which ego-resilience act as the mediator for these factors and physical activities among men. Although the numbers of youths entering universities are increasing rapidly, past studies showed that the students do not have enough confidence in higher education because they are not able to find what they are looking for in the university, lacking education that they obtained throughout their university life to obtain a job and other social and psychological problems (Wong & Chiu, 2018). Although they are able to achieve their academic success, Malaysian students are suffering from poor mental health (Mey & Yin, 2015;Ministry of Health, 2016). Several researchers have found out that mental health among adolescents and university students is getting worse day by day in which the level of distress, anxiety, and depression are arising among them (Twenge, Roger, Joiner & Martin, 2018). A high level of resilience has been considered as a trait that could reduce mental health problems (Shi et al., 2016). Therefore, the aim of the present study was to examine the relationship between positive emotions and resilience among undergraduate students of Sultan Idris Education University (UPSI) and to examine gender differences in positive emotions and resilience . Research Design A cross-sectional survey design was used in this study in which the participants were given a set of questionnaires to be completed either through online platform or paper-andpencil. The data obtained was used to examine the participant's level of positive emotion and resilience as well as to investigate the association between positive emotion and resilience. Participants The population for this study was undergraduate students of UPSI who are enrolling in diploma and bachelor degree programme age ranging from 18 to 26 years old. The study was conducted in Universiti Pendidikan Sultan Idris (UPSI). The participants were recruited using convenience sampling. A total of 140 students enrolling undergraduate programs and diplomas took part in this study. G*power software was used to determine the sample size required for this study. According to the G*power calculation, 105 participants were required for this study. 21 students or more were required in order to have a confidence level of 90%. This would ensure the real value is within ± 10% of the measured value. 
Thus, a total of 140 participants were recruited to be conservative and cover the survey dropout rate.
Instruments
A set of questionnaires comprising demographic questions and two scales was used in this study. Demographic questions were asked to identify some basic characteristics of the participants. The Dispositional Positive Emotion Scale (DPES; Shiota, Keltner & John, 2006) was used to measure positive emotions, and the Brief Resilience Scale (BRS; Smith et al., 2008) was used to measure the dependent variable, resilience.
Back Translation
The questionnaires were administered in the Malay language. According to the Brislin model for instrument translation, an individual who has good proficiency in two languages translates the instruments from their original language to the targeted language (Brislin, 1970). In this research, the instruments DPES and BRS, which were originally in English, were translated into Malay. Then, another bilingual individual translated the instruments back into their original language, English, a process known as back-translation. This was followed by comparing the original and back-translated versions. Any discrepancy was resolved through team discussion. The Malay-version questionnaire was tested in a pilot study with 10 participants who had characteristics similar to those of the target participants. This pilot study helped the researchers to identify ambiguities, the readability of the questionnaire, and the completion time.
Procedure
Participant recruitment advertisements were circulated around campus (e.g., library, café) and online (e.g., WhatsApp, Telegram). Potential participants who responded to the advertisement were then invited to take part in the study and completed the informed consent form. Participation in this study was voluntary, and participants could withdraw from the research at any time without giving reasons. Participants took approximately 10-15 minutes to complete the questionnaire. This study was reviewed and approved by the evaluation committees at the Department of Psychology and Counselling, UPSI.
Analysis
Data analyses were completed using the Statistical Package for the Social Sciences (SPSS) Statistics Version 23. Means, standard deviations, frequencies, and percentages were presented for descriptive statistics, while Pearson correlation was run to test the relationship between positive emotions and resilience. In addition, t-tests were run to examine gender differences.
Findings
Participant characteristics. A total of 140 undergraduate students took part in this study. 71.4% of the students were degree students, whereas 28.6% were diploma students. Out of 140 participants, 66 were male and 74 female. The age of participants ranged from 18 to 26 years old (M = 21.41; SD = 1.61), and the majority of them were 22 years old (n = 50). Participants from various ethnic groups took part in this study. Most of the participants were Malays (n = 60), followed by Indians (n = 39), Chinese (n = 26), and other minor ethnic groups (n = 15). The majority of the participants were Semester 7 students (n = 43), while the fewest were from Semester 1, with only two first-semester students participating in this study. Table 1 presents the demographic information about the participants. A Pearson's product-moment correlation was run to examine the relationship between positive emotion and resilience among undergraduate students.
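The analyses were run in SPSS; purely as an illustration of the two statistics reported below (Pearson's product-moment correlation and independent-samples t-tests), the sketch that follows uses simulated placeholder scores rather than the actual DPES and BRS data of the 140 participants.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder scores standing in for the participants' questionnaire data (assumed values)
positive_emotion = rng.normal(4.7, 0.95, size=140)   # DPES-like scores
resilience = rng.normal(3.3, 0.6, size=140)          # BRS-like scores
gender = rng.choice(["male", "female"], size=140)

# Pearson's product-moment correlation between positive emotion and resilience
r, p_two_tailed = stats.pearsonr(positive_emotion, resilience)

# Independent-samples t-tests for gender differences in each variable
t_pe, p_pe = stats.ttest_ind(positive_emotion[gender == "male"],
                             positive_emotion[gender == "female"])
t_res, p_res = stats.ttest_ind(resilience[gender == "male"],
                               resilience[gender == "female"])

print(f"r = {r:.2f} (p = {p_two_tailed:.3f}); "
      f"t_emotion = {t_pe:.2f} (p = {p_pe:.3f}), t_resilience = {t_res:.2f} (p = {p_res:.3f})")
```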
The study hypothesized that there is a significant relationship between positive emotion and resilience among undergraduate students of UPSI. Table 2 shows a significant association between positive emotion and resilience, r = .43, p (two-tailed) = .01, indicating a positive correlation between positive emotion (M = 4.74, SD = 0.95) and BRS resilience (M = 3.29, SD = .61); that is, positive emotion increases as the resilience level increases. The Pearson correlation coefficient of r = .43 indicates a moderate correlation between positive emotions and resilience among undergraduate students in UPSI. Thus, the null hypothesis is rejected and the alternative hypothesis is accepted.

Table 2. Pearson correlation coefficients between positive emotion and resilience

                     Positive emotion    Resilience
Positive emotion     1                   .43** (p = .00)
Resilience                               1

An independent t-test was run to investigate the difference in the level of positive emotion and resilience between male and female participants. Based on the t-test analysis, there is no statistically significant difference in the mean level of positive emotion between male and female undergraduate students of UPSI. On average, male undergraduate students experienced a slightly higher level of positive emotions (M = 4.75, SD = 1.09) than the female undergraduate students (M = 4.72, SD = .82), but this difference was not significant, t(138) = .17, p = .87. The results also revealed no statistically significant difference in the mean level of resilience between male and female participants, t(138) = -.52, p = .61. Although on average female undergraduate students showed a slightly higher level of resilience (M = 3.32, SD = .57) than male undergraduate students (M = 3.26, SD = .67), the difference is not significant.

Discussion
As hypothesized, a significant relationship between positive emotions and resilience among the undergraduate students in UPSI was expected. The study demonstrates a positive correlation between positive emotion and resilience among undergraduate students, in which students who have more positive emotions are more likely to have a higher level of resilience. In line with the hypothesis, previous studies have suggested that being emotionally positive could be an important factor in increasing the level of resilience and in being resilient in life (Sophie, 2016). The findings of the current study find substantial support in the explanation by Denovan and Macaskill (2017), who stated that highly resilient undergraduate students apply leisure coping to increase their positive emotions, which eventually improves their overall well-being. Leisure coping and positive emotions act as mediators between resilience and well-being among undergraduate students who experience a high level of stress (Denovan & Macaskill, 2017). Additional evidence also showed that undergraduate students from the University of Michigan with higher trait resilience had a high level of positive emotions, specifically eagerness, excitement, and happiness (Tugade & Fredrickson, 2004). Based on the hypothesis, it was expected that male and female students would have different levels of positive emotion and resilience. However, after running an independent t-test on the data obtained, the alternative hypotheses were rejected and the null hypotheses were accepted. It was concluded that there is no statistical difference between genders in the level of positive emotions and resilience.
The results of this study support the claims by Zuckerman, Li, and Diener (2017), who concluded that men and women do not statistically differ in terms of positive emotions. However, the finding of this study contradicts the results of several past studies. Women are more strongly prescribed than men to feel the way society expects them to feel, and they tend to express positive emotions nonverbally, such as by smiling and laughing, more than the opposite gender (LaFrance, Hecht, & Paluck, 2003). It was concluded that women are more strongly prescribed to express positive emotion than men. The current result, which found no statistical difference between genders in the level of resilience, also contradicts the finding of a study conducted by Sarwar, Inamullah, Khan, and Anwar (2010) in Pakistan, which reported that female students had a lower level of resilience than male students. The difference in findings may be due to differences in gender equality between countries. For example, Malaysia follows and emphasizes gender equality, so female students are able to explore and achieve their goals and may develop qualities such as resilience and self-discovery to a greater extent than female students in countries that are more male-dominated. There were several limitations in this study. The findings are not appropriate to be generalized to the whole population of UPSI students because the participants were only local students, whereas the population includes both local and foreign students; thus, this study only represents local UPSI students. Future studies should therefore include both local and foreign students and use a larger sample. This study adds to existing evidence from past studies that positive emotions and resilience are correlated and that men and women do not differ in their levels of positive emotions and resilience. Considering the findings of this study, they are not only beneficial for the students, but also for educational practice. More positive psychology programs can be conducted in order to maintain the students' good mental health, which will eventually increase their psychological resilience in facing challenges in their university lives. Other interventions and learning techniques can be implemented in the university to promote positive emotions among the undergraduate students and also the staff of UPSI.

Conclusion
The results of the current study suggest that there is a significant relationship between positive emotions and resilience among undergraduate students, but no gender differences were found in positive emotions and resilience.
Improved Transferability of Data‐Driven Damage Models Through Sample Selection Bias Correction Abstract Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex sometimes nonlinear relationships with the damage. In recent years, data‐driven modeling techniques have been used to capture those relationships. The available data to build such models are often limited. Therefore, in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies the samples used to build the model are often not fully representative for the situation where they need to be applied on, which leads to a “sample selection bias.” In this article, we enhance data‐driven damage models by applying methods, not previously applied to damage modeling, to correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe, and typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied and one of these methods is also adjusted to our problem. These three methods are combined with stochastic generation of synthetic damage data. We demonstrate that for both case studies, the sample selection bias correction techniques reduce model errors, especially for the mean bias error this reduction can be larger than 30%. The novel combination with stochastic data generation seems to enhance these techniques. This shows that sample selection bias correction methods are beneficial for damage model transfer. INTRODUCTION Over the last decades, both the developed and the developing world have seen an increase in the frequency and severity of hydrometeorological disasters and their impacts. Many sectors are affected and can benefit from improved models to predict these impacts, so that better decisions can be taken to reduce, retain, transfer, or absorb the risk (Van den Homberg & McQuistan, 2019). Natural hazard damage models predict the damages of a disaster given hazard characteristics such as the water depth of a flood (e.g., Merz, Kreibich, Schwarze, & Thieken, 2010) or the wind speed of a cyclone (Pielke, 2007). They are used to estimate risk from natural hazards in order to support decisions about investments in risk reduction measures. An example is their crucial role for determining the required protection levels of the dikes in the Netherlands (e.g., Kind, 2013;Van der Most, Tanczos, De Bruijn, & Wagenaar, 2014). Damage models are also increasingly used for providing impact information in early warning systems (e.g., Bachmann et al., 2016), and many national meteorological and hydrological organizations are attempting to move from hazard forecasts to impact-based forecasts (WMO, 2015) whereby damage models are essential. Several actors, such as humanitarian organizations, can use these impactbased forecasts to initiate early actions that reduce risks just before a hazardous event (Coughlan de Perez et al., 2015). Once the disaster has hit, similar models can be used to prioritize humanitarian aid (risk absorption) (Van den Homberg, Visser, & Van der Veen, 2017; Van der Veen, 2016;Van Lint, 2015). Damage models or so-called catastrophe models are also applied in the insurance sector to determine premiums (Grossi & Kunreuther, 2005;Merz et al., 2010;Pielke, Landsea, Musulin, & Downton, 1999). 
Traditionally, damage models often follow relatively simple approaches to estimate damages. For example, flood damage models typically relate a single variable "water depth" to resulting damage using "depth-damage curve" (Merz et al., 2010), whereas typhoon damage models similarly relate maximum wind speed to storm damage (Pielke, 2007;Van den Homberg et al., 2017;Van Lint, 2015). However, these simple models show considerable uncertainty in their damage estimates (De Moel, Bouwer, & Aerts, 2014;Gerl, Kreibich, Franco, Marechal, & Schröter, 2016;Wagenaar, De Bruijn, Bouwer, & De Moel, 2016) and do not always perform well when they are transferred (e.g., Jongman et al., 2012). The main reason for the uncertainty is that the damage functions contain implicit assumptions about variables and processes that are not included in the model (Wagenaar et al., 2016). Examples of such variables are: flood duration, flow velocity, building construction and materials, precautionary measures, contamination of the flood water, and household size. Nateghi, Guikema, and Quiring (2011) introduced machine learning (ML) methods to predict impacts of natural hazards (electricity outages from storms). Merz, Kreibich, and Lall (2013) used a similar approach to predict flood damages at individ-ual building level. Since then such techniques have been applied by many authors to predict all sorts of impacts from natural hazards (Amadio et al., 2019;Ganguly, Nahar, & Hossain, 2019;Carvajal et al., 2018;Mayfield et al., 2018;Nateghi, Guikema, & Quiring, 2014;Schröter et al., 2014Schröter et al., , 2018Sieg, Vogel, Merz, & Kreibich, 2017;Wagenaar, de Jong, & Bouwer, 2017;Wagenaar, Lüdtke, Schröter, Bouwer, & Kreibich, 2018). These data-driven damage models often use more than one variable to predict the damage (multivariable models). Therefore, they often perform better than traditional flood damage models (Kreibich, Müller, Schröter, & Thieken, 2017;Wagenaar et al., 2017), particularly when models are transferred (Schröter et al., 2014;Wagenaar et al., 2018). In practice, damage models are always applied in a transfer setting (Cammerer, Thieken, & Lammel, 2013). This is, for example, a model built on data or knowledge from one location applied in another location (spatial transfer), or data collected at one moment in time being applied at a different time (temporal transfer). Detailed data on flood damages are rarely recorded in a structured and consistent way and are often outdated. Some recent examples where empirical damage data were collected are described by Kienzler, Pech, Kreibich, Müller, and Thieken (2015), Poussin, Botzen, and Aerts (2014), and Molinari et al. (2014) for cases in Germany, France, and Italy, respectively. ML methods assume that the training data to build the model consist of randomly drawn samples from the same distribution as the test samples for which the learned model needs to make predictions (Zadrozny, 2004). In a spatial and temporal transfer setting, this is often not the case. For example, damages from moderate typhoons may be used to predict the damage of an extreme typhoon. In such cases, the ML algorithms need to rely on outlier observations in the data to build the most crucial part of the model. This problem is called the "sample selection bias." This received considerable attention in econometrics for the application to linear regression (Zadrozny, 2004). In the year 2000, Heckman (1979) received the Nobel prize in economics for developing a correction method. 
This "Heckman" correction, however, only applies to linear regression models. Cortes, Mohri, Riley, and Rostamizadeh (2008) provided two techniques to correct for this problem in case other ML methods are applied: these techniques are cluster-based estimation (CBE) and kernel mean matching (KMM). In this article, we apply, to our knowledge for the first time, sample selection bias correction techniques (also known as domain adaptation) to damage models for natural hazards and show their potential benefits. We also introduce a variation of the CBE method that we call single variable distribution matching (SVDM), which only uses the most relevant variable. Sample selection bias correction techniques give weights to the training data to make the most relevant samples more important during the training of the ML models. However, such techniques can result in very high weights given to single observations. In our analyses, we therefore explore a new combination of techniques where very high weights are smoothed out before they are included in the ML model. This is done by resampling the data after the sample selection bias correction with a statistical model. The resulting synthetic data are then used to train the ML models. This synthetic data generation in combination with sample selection bias correction methods is a new approach. Sample selection bias correction techniques have never been applied to correct multivariable data-driven models to predict the impacts of natural hazards. The objective of this research is therefore to evaluate how three sample selection correction techniques (CBE, KMM, and SVDM) reduce the sample selection bias for multivariable data-driven damage models and improve their performance when they are transferred between different events and between different geographic locations. These methods are evaluated with and without resampling of synthetic data and with two different ML methods: artificial neural networks (ANNs) (Rumelhart, Hinton, & Williams, 1986) and random forests (RFs) (Breiman, 2001). In total, 12 different model setups are compared. These methods are applied to two different case studies where data-driven multivariable damage models are transferred in time and space. The first case study is based on a data set with typhoon damages in the Philippines on macrolevel (municipalities). The second case study is an extension of the paper of Wagenaar et al. (2018), where multivariable microscale (buildings) flood damage models are transferred between the Netherlands and Germany. This article starts with an explanation of the methods, including an introduction to the case studies, the data and the evaluation metrics used to assess the model performance. Next, the results are presented and discussed, and finally the conclusions are presented.

METHODS AND DATA
Fig. 1 presents our method with six steps to improve damage estimation in transfer settings with data-driven multivariable models based on ML techniques. The first step consists of selecting and developing training data for the damage models. These data come from different events than the application (test) data for which the model needs to predict the damages. The second step is to apply three different sample selection bias correction techniques on a training data set. The corrected training data are subsequently either directly used in two ML techniques (RFs and ANNs) to estimate damages (steps 4 and 5), or are first resampled using a statistical model (step 3).
Step 3 is only applied to test the influence of generating synthetic data. The resulting damage estimates are evaluated with various error metrics (mean absolute error [MAE], mean bias error [MBE], and symmetric mean absolute percentage error [SMAPE]) (step 6). This approach is applied to both case studies (flood damage and typhoon wind damage). Below, the data-driven approaches are further described (Section 2.1), the case studies are introduced (Section 2.2), the specific model setup to apply the data-driven approaches to the case studies is shown (Section 2.3), and finally the evaluation metrics are specified (Section 2.4). (Fig. 1. Overview of the approach for developing multivariable damage models from observational data, including the testing procedure.)

Data-Driven Methods
Kernel mean matching. KMM (Cortes et al., 2008) assigns a set of weights to the training data, so that the mean of each variable in the training data becomes as close as possible to the mean of each variable in the test data. This is called a covariate shift. These weights are determined with an optimization algorithm. The optimization problem is shown in formula 1 (Cortes et al., 2008); the data need to be normalized before applying the KMM algorithm.

minimize G(γ) = || (1/n) Σ_i γ_i Φ(x_i^tr) - (1/n) Σ_i Φ(x_i^te) ||²,   (1)

where γ is the vector with weights that is determined by the optimization algorithm, G(γ) is the distance between the means of the weighted training data and the testing data that is minimized, x_i^tr are the independent variables only of the training data, x_i^te are the independent variables only of the test data, n is the number of observations in the training or test data, and Φ(x) is the kernel function that maps x to a reproducing kernel Hilbert space (Berlinet & Thomas-Agnan, 2004). A weakness of KMM is that it gives equal importance to all independent variables. Another weakness of KMM is that the algorithm only matches the mean but not the entire distribution between training and test data. There are many different solutions to get to a matching mean. Some might not lead to a better match of the entire distribution, for example, when large weights on error-prone outliers are applied to shift the mean. Since the damage models are sensitive to extreme values, it would be desirable that the sample selection bias correction method leads to a better match of the entire distribution.

Cluster-based estimation. In CBE, the entire data set (training and test data) is first split into several clusters. These clusters are made by combining the independent variables of the training and test data and then applying an unsupervised learning algorithm to find clusters of observations that lie relatively close together. After the clusters are identified, both the training and test data are split into these clusters. The weights are then determined in such a way that each weighted cluster appears as frequently in the training data as it appears in the test data. See the following formula:

CW_x = (N_x,test / N_test) / (N_x,train / N_train),   (2)

where CW_x is the cluster weight to be applied on each sample in the training data that belongs to the specific cluster, N_x,test is the number of samples in the test data that belong to that cluster, N_test is the total number of samples within the test data, and N_x,train and N_train are the same but for the training data. The unsupervised learning algorithm k-means clustering is applied. This algorithm splits the data in K clusters based on the nearest means by placing K points in the spectrum of the data.
It then clusters each data point based on which of the K points it is most close to (Kanungo, Mount, Piatko, Silverman, & Wu, 2002). The algorithm then optimizes the position of the K points in such a way that the total distance of all data points to the locations of the K points is minimized. The data need to be normalized before applying the algorithm. Just as the KMM method, the disadvantage of CBE is that all variables are equally important, while in fact the variables differ in their importance for predicting the damage. For example, wind speed is often a more important variable than the economic growth of a municipality, in the case of wind damage estimation. Since all variables are assumed to have the same importance in the clustering, this may lead to clusters that are not particularly relevant for reducing the sample selection bias. Single variable distribution matching. The CBE method is normally trying to match the distributions of all different variables. Some of these variables are, however, less important for the damage estimation than others. The CBE method is unaware of this difference in importance and will only try to match all available variables with equal importance. Matching the distributions for each variable perfectly is not possible on such small data sets, so compromises are made. These compromises reduce the quality of the match in the more important variables and therefore may reduce the model performance compared with a method that focusses on the most important variable. Therefore, we introduce a special configuration of the CBE, which we call SVDM. This method makes use of the expert knowledge on the most important variable for the damage model. This works by just supplying the CBE method with the most important variable, such as water depth for floods or the wind speed for typhoons. A disadvantage of adjusting for the most important variable only is that sometimes a transfer needs to be made over multiple variables. For example, a transfer in both building styles and water depth would be impossible with this approach. It is, however, possible to optimize this method by using several important variables rather than only the most important one. Such configurations are not explored in this research and rather only the two most extreme configurations are applied: that is, all variables in common CBE or only the most important variable in the case of SVDM. Synthetic Data Generation The sample selection bias correction methods sometimes generate high weights for specific observations, for instance when one observational value is 30 times more important than another. Generating synthetic training data by resampling can create new data with similar statistical characteristics to the weighted training data. This results in many data points similar to the observation with the high weight rather than one specific point with a very high weight. This can be done with synthetic data generation techniques that are applied for example to meteorological or river discharges data (Diermanse, Carroll, Beckers, Ayre, & Schuurmans, 2012). This synthetic data generation approach to smooth out the high weights has been inspired by a similar method called synthetic minority oversampling technique (SMOTE) (Chawla, Bowyer, Hall, & Kegelmeyer, 2002). This technique helps to correct imbalanced training data in classification problems, for instance, when rare observations in the training data need to be predicted. 
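To make the weighting schemes above concrete, the following sketch implements a linear-kernel version of KMM (formula 1) and the cluster-based weights of formula 2. It is an illustration under simplifying assumptions, not the implementation used in this study: the inputs are assumed to be normalized 2-D numpy arrays of the independent variables (a single-column array such as water depth yields the SVDM variant), and the weight bound and cluster count are placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans

def kmm_weights(x_train, x_test, max_weight=20.0):
    """Linear-kernel KMM: choose weights gamma so that the weighted mean of
    the training covariates is as close as possible to the test-data mean."""
    n_tr = len(x_train)
    test_mean = x_test.mean(axis=0)

    def g(gamma):
        # Squared distance between the weighted training mean and the test mean.
        weighted_mean = gamma @ x_train / n_tr
        return np.sum((weighted_mean - test_mean) ** 2)

    result = minimize(g, x0=np.ones(n_tr), method="L-BFGS-B",
                      bounds=[(0.0, max_weight)] * n_tr)
    return result.x

def cluster_weights(x_train, x_test, n_clusters=12, seed=0):
    """CBE/SVDM: weight each training sample so that its cluster occurs as
    frequently in the weighted training data as in the test data (formula 2)."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    km.fit(np.vstack([x_train, x_test]))  # clusters found on the combined data
    labels_tr, labels_te = km.predict(x_train), km.predict(x_test)

    weights = np.ones(len(x_train))
    for c in range(n_clusters):
        n_tr_c, n_te_c = np.sum(labels_tr == c), np.sum(labels_te == c)
        if n_tr_c > 0:
            weights[labels_tr == c] = (n_te_c / len(x_test)) / (n_tr_c / len(x_train))
    return weights
```

Note that only the covariates of the test (application) data enter the weight calculation; the damage values of the test data are never used.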
Synthetic data are generated by building a statistical model that represents the sample selection bias corrected training data. From this statistical model, new data points are sampled. This procedure can be summarized as follows:

• A Kendall's rank correlation matrix (T) is derived from the training data. The matrix is a square matrix with the size of the number of variables.
• A matrix P is derived through Cholesky decomposition, in which P × P^T = sin(π T/2) (Fang, Fang, & Kotz, 2002).
• For each variable, sample values with the standard normal distribution function are generated using its mean and standard deviation.
• Correlation is introduced between these individual samples. Such correlated samples are calculated based on multiplication between the transpose of matrix P and the sample values for each variable.
• To go from the normally distributed to the originally observed distributions in the training data, an inverse transformation is applied to the normalized correlated sample based on the variable's empirical distribution.

ML Techniques
ML is a field of artificial intelligence that provides computer systems the ability to learn from data without being explicitly programmed. ML algorithms are classified into (i) supervised learning, (ii) unsupervised learning, and (iii) reinforcement learning. This study focuses on the application of supervised learning algorithms (Praveena & Jaiganesh, 2017) to build models that can explain the complex relationships between damages and the variables that can explain damages, such as water depth or wind speed. We applied RF and ANNs in this study. RFs are chosen because they have a good track record in damage modeling (e.g., Amadio et al., 2019; Ganguly et al., 2019; Schröter et al., 2018; Sieg et al., 2017; Wagenaar et al., 2017; Wagenaar et al., 2018), ANNs have also been used before in flood damage models (Amadio et al., 2019; Ganguly et al., 2019), and in this study they were selected because of their ability to extrapolate and at the same time find complex nonlinear relationships. Table I provides a comparison between the ML methods. The K-means unsupervised learning algorithm is applied within the CBE sample selection bias correction technique.

Random forest. RF, an ML method developed by Breiman (2001), has been used in flood damage modeling (e.g., Amadio et al., 2019; Ganguly et al., 2019; Schröter et al., 2018; Sieg et al., 2017; Wagenaar et al., 2017; Wagenaar et al., 2018). RFs are ensembles of regression trees where the data for each tree are generated by a bootstrapping resampling method. In each tree, branches are formed by splitting the data set based on binary recursive partitioning, for instance, a partition of data based on whether the average wind speed is higher than a certain value. The RF algorithm does not use all explanatory variables for each tree, but it seeks the best splits within a limited number of sampled explanatory variables. The number of sampled features is the square root of the total number of features in the data sets. The best splits refer to regression trees that split the training data in such a way that there is as little variation as possible within the resulting leaves. The predicted value for the entire RF is the mean value of the predictions from each regression tree. A disadvantage of an RF is that they can never make a prediction that is higher than the values seen in the training data, hence it cannot extrapolate (Tyralis, Papacharalampous, & Langousis, 2019).
This is because each regression tree has a set number of leaves. When making a new prediction it will place the prediction in an existing leaf. It cannot create a new leaf with a higher damage value. In a damage model transfer setting, this inability to extrapolate can be a disadvantage as extrapolation is sometimes required. An advantage of RFs is that they can make probabilistic predictions, which is, however, not used in this article. Artificial neural network. An ANN is an ML framework inspired by how the human brain processes information (Hassoun, 1995). It was first introduced by Rumelhart et al. (1986), ANNs gain knowledge through learning the relationships between variables in a data set without any given information about the system. The model built based on ANNs consists of several (hidden) layers of neuronlike processing units connected with each other. Each neuron is connected to all other neurons in the layer before it and after it. The connections work through coefficients that weigh each value that comes through the neuron. The coefficients of the neurons are determined with an optimization algorithm that minimizes the error on the training data set. A strength of ANNs is that they can simulate complex nonlinear patterns. Larger ANNs with more neurons can represent more complex nonlinear patterns but are also more prone to match the training data so well that it works poorly on new cases (overfitting). The model built in this study is based on a multilayer perceptron (MLP) ANN, which consists of an input layer, two hidden layers, and an output layer (prediction). For transferring multivariable damage models, ANNs may have an advantage over RFs in that they can extrapolate. In an ANN inputs are multiplied with coefficients. When the input value in the test data (e.g., water depth) is larger than the inputs in the training data, the predicted value will be also larger. A general disadvantage of ANNs is that their predictions are deterministic and hence less suitable for applications that would benefit from having probabilistic estimates. Case Studies A case study approach was used to quantitatively assess the improvement of the spatial and temporal transferability of damage models based on an ANN or an RF upon applying the three sample selection bias correction methods. Two case studies were used to allow a deeper insight into the application of damage models at two different spatial scales: macrolevel (municipalities) and microlevel (buildings). Macrolevel damage models predict the damage based on the aggregated data within one administrative boundary (e.g., village, district). This detailed level is sufficient for many applications and the data are easier to collect. For the macrolevel, a case study with typhoons in the Philippines on municipality level was adopted. The models in this article are an extension of macrolevel data-driven multivariable models that were developed to support humanitarian aid organizations with the prioritization for distributing humanitarian aid after or just before a typhoon. The models aim to provide timely and unbiased information after a disaster, which are often difficult to obtain using current practices (field surveys). Microlevel damage models, on the other hand, predict the damage on disaggregated level (e.g., per building). Microlevel models are often used for disasters that require a detailed spatial resolution such as in our case for damage from fluvial floods in Europe. 
Such level of detail is required in insurance applications when risk premiums need to be determined per building, or for flood mitigation policies when measures on building level are considered. Even though for many such applications the results are later aggregated, the calculations are often done on microlevel because macromodels can lead to considerable spatial uncertainty (Wagenaar et al., 2016). The data used have been selected after an assessment of the data quality on different attributes, that is, timeliness, source (reliability), accuracy, and granularity/spatial coverage (Van den Homberg, Monné, & Spruit, 2018) as will be explained for each case study. Obviously, the data for both the independent and dependent variables need to be available at the same granularity and spatial coverage. Table II summarizes the characteristics of the two case studies. In both cases, the data are always used in a transfer setting. It means the data are applied on an event or a location that was not part of the training data. Apart from the spatial scale, the cases use different types of variables, damage mechanisms and type of transfer. The macro case study has more vulnerability type variables such as poverty and building materials, and has in some cases more damage mechanisms, such as floods due to a storm surge caused by the typhoon. The transfer for the macromodel was over time since all data come from the same larger study area. In the micromodel there is both a time (different events) and space transfer (between the Netherlands and Germany). These large differences are an ideal test to see whether the sample selection bias correction techniques work under different circumstances. Macrolevel Model: Philippines Typhoons On average around 20 typhoons strike the Philippines annually and more than half of them make landfall in the country (Reliefweb, 2017). Typhoon Haiyan (local name Yolanda), which hit the Philippines in 2013, is considered one of the strongest tropical cyclones ever recorded. The fatalities caused by the typhoon amounted to about 6,000 people, around 14 million people were affected and more than 1 million houses were damaged (World Bank, 2014). 510, an initiative of the Netherlands Red Cross collated the typhoon damage data in this case study through desk research and in-country visits of key stakeholders. The purpose of collating these data is to populate 510's community risk assessment dashboard and to develop a model that can be used to predict the areas with the highest damage either just before the disaster to trigger early action or just after the disaster to improve efficiency in the aid distribution process. 2.2.1.1. Data. Data have been gathered on 12 typhoons in the Philippines at the municipality level. The median number of households in a municipality is around 6,600. The data set contains about 1,600 damage records, with 40% of those damage records corresponding to the two typhoons that cover the largest extent. This does not necessarily mean that they have the largest aggregate damages. The vulnerability and exposure variables in a municipality are the same for all typhoons while the hazard features are specific to a typhoon. The vulnerability and exposure may have changed over time in the period from 2012 to 2016, due to, for example, population growth and land use change. These changes, however, are typically relatively slow. Recovery efforts are an exception because damages could be lower in an area that was recently affected and has not recovered yet. 
This can be a source of variation in the data but is expected to be limited. The data set collected by the Red Cross consists of more than 40 variables from which damage is to be predicted. Table III presents the variables that were used to build the damage models for the macro case study. It is essential to have data on these independent variables with national spatial coverage and at the same administrative levels. The municipality level was chosen as the smallest geographic level because this is the lowest resolution on which all the data are available. Among the hazard variables in Table III are, for example, the maximum three-second sustained gust speed over the event in the particular municipality (every municipality has a unique wind speed calculated from the forecasted maximum wind speed on the track, using the method from Holland (1980) to calculate it for the specific municipality) and the accumulated rainfall in mm, that is, the total accumulated rainfall during the typhoon period derived from Global Precipitation Measurement (GPM) satellite data (Huffman et al., 2015).

Model setup and validation. The article proposes to build data-driven damage models that can be part of a model setup used for operational purposes on newly predicted events. The evaluation can be carried out by using one of the observed typhoons as test data and using the rest as training data. The damage caused by a historical typhoon is predicted by a model built based on the data from the other 11 typhoons. As there are data about 12 typhoons recorded in the data set, 12 prediction models were built in total, with each typhoon serving once as the test data on which the model is then tested. Data-driven damage models were developed to predict the percentage of completely destroyed buildings in an affected municipality based on the variables shown in Table III. The most interesting aspect of the damage data is that the average damage varies between the 12 typhoons. The average value over all typhoons is 6% of the buildings completely destroyed, which is nearly six times smaller than the average for typhoon Haiyan. From Fig. 2, it can be seen that the distribution of the damage to buildings caused by typhoon Haiyan is much higher than for the other 11 typhoons. This indicates that the damage data from the other 11 typhoons that are used to build the prediction model for Haiyan are not fully representative for this typhoon and hence a major model transfer is required that includes extrapolation. This is a typical example where advances in the transferability of models may improve damage predictions.

Microlevel Model: European Flood Damage Models
Damage data and independent variables for the microlevel case study were selected for six past river flood events in Germany between 2002 and 2013 and for one river flood event in the Netherlands in 1993. These data have been used for several data-driven models in the past (Wagenaar et al., 2017; Wagenaar et al., 2018; Schröter et al., 2014; Merz et al., 2013). In the current microlevel case study, a multivariable flood damage model made based on Dutch data is transferred to Germany. The same model transfer was done in the paper by Wagenaar et al. (2018), which showed that this model transfer could potentially be improved, as it was the model with the lowest performance, owing to the low variability of the damage data in the 1993 flood event in the Netherlands. The expectation therefore is that the model can be improved by correcting for the known sample selection bias.
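To illustrate how such a correction enters the model-building step, the sketch below fits the two model types on weighted training data with scikit-learn, the library named in the Model Parameters section below. It is a simplified illustration: the hyperparameters and the resample size are placeholders rather than the tuned values used in the study, and because scikit-learn's MLPRegressor does not accept sample weights directly, the ANN is trained on a weighted resample of the data (a simple stand-in for the synthetic-data step).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

def fit_bias_corrected_models(x_train, y_train, weights, seed=0):
    """Fit an RF and an ANN on sample-selection-bias-corrected training data."""
    # The random forest accepts the correction weights directly.
    rf = RandomForestRegressor(n_estimators=100, random_state=seed)
    rf.fit(x_train, y_train, sample_weight=weights)

    # The MLP does not, so resample rows in proportion to their weights.
    rng = np.random.default_rng(seed)
    p = weights / weights.sum()
    idx = rng.choice(len(x_train), size=5000, p=p, replace=True)
    ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=seed)
    ann.fit(x_train[idx], y_train[idx])
    return rf, ann
```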
The flood damage model predicts the relative damage on building level based on the variables shown in Table IV.

Data. The Dutch training data in this case study are derived from observed flood damages after the 1993 flood in the Meuse River in Limburg reported in WL Delft (1994), supplemented with data on building and flood characteristics documented in Wagenaar et al. (2017). The model is applied to predict the damage from six different flood events in Germany. Damage from these floods, including relevant building and flood characteristics, was collected using phone interviews (Thieken, Kreibich, Müller, & Merz, 2007). The German data set contains a wide range of values for the different flood characteristics and circumstances (Kienzler et al., 2015); the Dutch data, on the other hand, are more homogeneous because they are based on only one flood event (Wagenaar et al., 2018).

Model setup and validation. There are 895 damage observations from the German data that can be used to test the model developed based on the 4,398 damage observations from the Dutch data. To reduce the randomness in the predictions due to the specific selection of training samples, bootstrapping is applied (Efron & Tibshirani, 1993). In bootstrapping, a random sample of the training data is taken to train the model, and then a random sample of the test data is taken to test the model. Samples are taken with replacement. This is repeated several times, so that many models are trained and tested on such subsets of the data. For each bootstrap run, 4,000 training samples from the Dutch data and 350 testing samples from the German data were randomly picked. Bootstrapping reduces the chance that a difference between the two samples is due to randomness rather than because of an improvement in the prediction method. For the RF, 100 bootstrap samples were taken. On the other hand, only 20 bootstrap samples were taken for the ANN due to the greater calculation time. Fewer samples were taken for the ANNs, as differences between the calculated errors were shown to be minor, while the calculation time was much longer for the ANN than for the RF.

Model Parameters
Damage models built based on RF and ANNs have been developed using the Python 2.7 library "Sci-Kit learn" (Pedregosa et al., 2011). For the damage model based on RFs, 100 regression trees were grown. More regression trees need more computation time but also typically give better results. This improvement from adding more trees becomes negligible after a certain number of trees. For this study, the same number of trees is applied as in Wagenaar et al. (2018), and the model errors could not be reduced by adding more than 100 trees. The number of splits and minimum number of observations per leaf were optimized. For the prediction model based on the ANN, learning rates and the number of neurons in the first hidden layer were optimized. The number of neurons in the second hidden layer was fixed to be half of the neurons in the first layer. This optimization was carried out by randomly splitting the data set into 60:40 for the training and test set. The tuning of the parameters for both models was carried out to result in the smallest MAE on validation data that did not involve a model transfer (splitting the training data randomly). The CBE and SVDM methods have one parameter to tune: the number of clusters used. This was chosen to be 12 clusters for both case studies. The KMM method also has only one parameter to be optimized: the kernel to be used.
Linear kernel was chosen because of its simplicity. For SVDM, the most important variable to predict the damage was chosen to be wind speed for the macromodel and water depth for the micromodel. Both variables are widely used in single variable damage models (e.g., Pielke, 2007; Merz et al., 2010; Gerl et al., 2016). Furthermore, the feature importance analysis carried out within the RF confirms this choice. For the synthetic data generation, the number of synthetic data points to be generated can be optimized. Generating more synthetic data points generally gives better results, but after a specific point they do not considerably affect the results anymore. For the macromodel, the number of synthetic data points to be generated is always twice the weight of the training set after the sample selection bias correction methods are applied. This is based on a minimum weight of one, so the sample selection bias correction increases the number of data points. This typically turns out to be between 3,000 and 10,000 synthetic data points. For the micromodel, a simplified approach was applied with a fixed number of samples because the training set is always the same size; this fixed number of samples is 5,000. The number of samples to be taken was estimated based on increasing the number of samples until the evaluation metrics would no longer improve.

Evaluation Metrics
To evaluate the model performance, three different error metrics were used: MAE, MBE, and SMAPE. Table V shows the formulas for the different evaluation criteria. The MAE is suitable to evaluate the accuracy of individual predictions. This is important when the individual model results need to be applied, for example, for insurance or for macrolevel models. The MBE shows whether there is a bias in the model, for instance, whether it consistently makes over- or underestimations. This is important when the aggregated results are used. For example, in a micromodel used for a cost-benefit analysis for an infrastructure investment, only the total sum of all predictions is important rather than the individual prediction per building. In such a case, the MBE is the most important evaluation criterion. The SMAPE is used in the same manner as the MAE but is a relative error metric, with per-observation term |RL_sim,n - RL_obs,n| / (|RL_sim,n| + |RL_obs,n|) as in Table V. This allows comparison of the errors of events of different orders of magnitude. For example, some models have predicted damages in the order of 50-80% while others have a maximum of 20%. A 20-percentage-point error on a damage of 80% is much lower, relatively, than a 20-percentage-point error on a damage of 10%. For the macromodel, the errors were evaluated for 12 different typhoons. Then the weighted mean of their errors was calculated. The weights were assigned based on the number of predicted damages in each model. For the MBE, the absolute values are taken before the mean is calculated over the 12 events. This is done in order to ensure that a positive bias in one test cannot cancel out a negative bias in another test. Consequently, all bias errors are positive. To obtain the mean that represents the quality of the 12 models, the criteria to evaluate errors should be independent from the extent of the damage they predicted. SMAPE is particularly useful for this case study, as the errors for different models are compared with each other. For the micro case, the variation between the damage cases is less extreme and therefore a SMAPE approach is not necessary.
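The three metrics are straightforward to compute from arrays of observed and simulated relative losses; a minimal sketch is given below. The SMAPE term follows Table V, and observations where both values are zero would need special handling.

```python
import numpy as np

def mae(rl_obs, rl_sim):
    """Mean absolute error: accuracy of the individual predictions."""
    return np.mean(np.abs(rl_sim - rl_obs))

def mbe(rl_obs, rl_sim):
    """Mean bias error: systematic over- or underestimation of the total."""
    return np.mean(rl_sim - rl_obs)

def smape(rl_obs, rl_sim):
    """Symmetric mean absolute percentage error: relative counterpart of the MAE."""
    return np.mean(np.abs(rl_sim - rl_obs) / (np.abs(rl_sim) + np.abs(rl_obs)))
```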
In this article, no evaluation metrics are applied to validate the quality of the probabilistic estimates of the RF and to see whether these probabilistic estimates improve because of sample selection bias correction methods. This is not done because ANNs are not able to make such probabilistic predictions. This could be a topic for future research.

RESULTS AND DISCUSSION
Table VI compares the performance of the predictions of the different ML models as measured by the evaluation metrics described in Section 2.4 (in Table VI, the best performing model setup is marked in gray and bold, and the second- and third-best performing methods are marked in gray). It is apparent from the highlighted numbers in this table that the best performing models in both case studies and for all evaluation criteria always have some form of sample selection bias correction included. Furthermore, on the basis of the MBE evaluation criteria, all sample selection bias correction methods always outperform the reference models. The improvements on the MBE metric can be as large as 85% (e.g., MBE content damage), where many different sample selection bias correction methods result in large improvements. It is promising that the sample selection bias correction methods lead to improvements in both case studies, despite the large differences between the phenomena and data in the case studies, as discussed in Section 2.2. For the MAE metric, the results are a bit more varied. For the micromodel, the improvements are minor. On the other hand, every sample selection bias correction method provides improvements for the macromodel on the MAE criteria. The improvements for the SMAPE are, however, much smaller and are more in line with the improvements seen on the MAE for the micromodel. Some sample selection bias correction methods are also not better than the reference models without sample selection bias correction for the SMAPE. The performance on the MAE for the macromodel is mostly based on the model performance on the extreme observations, because these observations have large errors, and improving them has a relatively large impact on the MAE. For the SMAPE error metric the large and small damage observations have a more equal weight in the error metric calculation. The sample selection bias correction methods therefore seem to be most relevant to predict outlier observations. These results seem to be consistent with the general idea that the sample selection bias correction is mostly suitable for extreme observations, which is very relevant for some of the applications of damage models. In theory, these techniques should not work in a situation without a model transfer because there should not be any bias in the data when the training and test data come from the same source (i.e., same variable distributions). The weights calculated by the sample selection bias correction methods should in that case be close to one and therefore the methods do not correct for anything. To test this, the best performing sample selection bias correction methods were also applied to settings without a model transfer. For the micromodel, independent test data come from the same source as the training data (Dutch data). For the macromodel, all observations are put together and then split into training and test data. In this setting, the sample selection bias correction methods had hardly any influence on the results for the macromodel (data not shown).
A reduction was seen only in the MBE on the micromodel, but without a model transfer this MBE is negligible (close to zero). Therefore, the reduction is very minimal in absolute terms. The sample selection bias correction methods lead to a larger reduction in the MBE in combination with the ANN methods then in combination with the RF methods. Without sample selection bias correction methods, the ANN model performs less well than the RF model. This occurs consistently in both the micro and the macro case. The reason for this is not entirely clear, but we speculate that this could be due to the sensitivity of the different ML methods to the data. Macro Case Study For the Philippines case study, sample selection bias correction methods have considerably improved predictions from the 12 damage models. Fig. 3 (left) visualizes an example of the improvement in predictions for the extreme typhoon Haiyan after employing the SVDM method in combination with synthetic data generation to the ANN. It shows that without sample selection bias correction the model consistently underestimates the damage as all estimates are below 30%. After implementing the sample selection bias correction method, this consistent underestimation is largely solved and damages are predicted up to 60%, as they were also observed for typhoon Haiyan. Fig. 3 (right) provides an insight on how the different ML methods result in varying improvements. It can be seen from the figure that the ANN model results in more accurate predictions for the Haiyan typhoon compared to the RF model after the sample selection bias correction methods are applied. The results also further support the theory that a model built with an ANN is better able to predict the damage by extrapolation, compared to the RF model. Table VI shows that the predictions from 12 models built using RF as basis ML method provide the smallest errors on average. This implies that most of damage caused by other typhoons than an extreme typhoon such as Haiyan can be better predicted by an RF that can only interpolate and not extrapolate. This makes sense because the extrapolating capacity of ANN is not required for most of the data points, apart from data points of extreme typhoons. A possible explanation for why the ANN models perform worse for average model results than the RF models is that RF works better on a relatively small data sets. Another likely explanation is that the ANN model is quite sensitive for parameter tuning while the RF model is not. The procedure for tuning the parameters could be improved. The tuning should not be carried out for all models at once based on the randomly split data (See section 3.3), but for each of the 12 models separately. The tuning of parameters that result in the smallest weighted mean error for the 12 models together then should be applied to all the 12 damage prediction models to be evaluated. In general, the macro case study is limited by the lack of information on exposure and vulnerability variables. Adding more variables could be helpful. Also, the data on the explanatory variables were the same for all events regardless of the year in which the typhoon hit. Over time these characteristics may have undergone change, requiring changes in the variables. For example, houses might have been built back better after a typhoon with different materials. 
In particular, locally this is expected to lead to some error, for instance, when large damages have occurred recently and people have responded by abandonment or much stronger building construction. These errors are, however, expected to have a negligible effect on the aggregated results of this case study. Micro Case Study Sample selection bias correction methods have reduced the MBE for all cases in the micromodel case study. In 4 of 12 cases this reduction is even larger than 50%. The MBE is the most relevant metric when the aggregated results of micromodels are used. The MAE improvements for the micromodel are rather small but in line with the SMAPE improvements of the macromodel. This is probably because outliers have a smaller influence on the aggregated MAE metric for the micromodel than for the macromodel in which large differences between damages were presented. Another possible explanation is the difference in data quality of the micromodel. The macromodel consists of municipality averages while the micromodel has values per building. The average values per municipality can correct overestimations and underestimations and hence the aleatory uncertainty is reduced. For individual building values, however, aleatory uncertainty is very high, and no such evening out of errors by averaging exists. This aleatory uncertainty cannot be reduced by sample selection bias correction methods and therefore the reductions in MAE are smaller in the micromodel. Performance of New Sample Selection Bias Correction Methods In this article, two innovations in sample selection bias corrections were introduced: using a single variable correction in the CBE method (SVDM method) and synthetic data generation. These innovations were compared to two other correction methods (KMM and CBE) with and without synthetics data generation. Single Variable Distribution Matching Method The CBE method applied to only a single variable (SVDM) often performs better than the CBE method applied to multiple variables, according to the MAE criteria. The likely reason is that a better match can be made for the most important variable when variables of minor importance to the damage prediction are not considered for determining the weight, and including all variables in the CBE method leads to the best performance on the MBE criteria compared to SVDM. In practice, a transfer will often need to be made over several important variables. For future research, multiple variables could be used to determine the weights for the training data. In this way, a balance needs to be created between not diluting the influence of the most important variables on the weight, and correcting for biases in multiple variables rather than one. In addition, the user needs to determine whether absolute or average errors are most important for the application of the model. Synthetic Data Generation Method The synthetic data generation combined with a sample selection bias correction method generally performs better than just the sample selection bias correction. This is especially the case for the MAE evaluation criteria. The reason this works is probably because ML methods can create very sharp decision boundaries. This means that when a few data points have very large weights the ML models can infer that only under the specific conditions of these data points the related high damage occurs but not with similar values. For example, according to the model, a large damage could only occur at 4 m water depth but not at 3.9 m or 4.1 m. 
This is a form of overfitting. The synthetic data generation methods introduce some variation in these high weighted samples and hence increase the decision region for which the ML method assigns a high damage. This is the same reason why the similar SMOTE method performs well (Chawla et al., 2002). The disadvantage of the synthetic data generation methods is that information inside the data might be lost while building the statistical model to draw synthetic data points. A future method would be desirable that also increases the decision region but minimizes the loss of information from the original data. A possible approach that could be considered is the use of differential privacy techniques (Khatri, 2017). These techniques add small perturbations to the data to reduce privacy concerns. Recently, Khatri (2017) found that these perturbations work to prevent overfitting also. CONCLUSIONS Recent advances in damage models include datadriven methods to estimate damages caused by natural hazards. An important quality of such methods is their ability to capture complex, nonlinear relationships between multiple variables related to hazard, exposure, and vulnerability. However, data-driven methods are usually limited by the availability and quality of the data required to build such models. As a result, transfer of the models (i.e., using data from one location to build a model for another location) is often required. This raises a problem, the sample selection bias, as the collected data are often not fully representative for the situation it needs to be applied on. This study was undertaken to improve such methods to correct for this sample selection bias, and to evaluate the quality of the predictions. Such corrections were applied on two different case studies: (i) a macrolevel damage model for typhoons in the Philippines and (ii) a microlevel damage model for European river flood damages. Two ML techniques were used: RFs and ANNs. They were then improved by using the three different methods to correct the sample selection bias: KMM, CBE, SVDM, which apply weights to the training data. As sometimes very high weights are assigned to specific observations, additionally, a statistical model was built to generate a larger set of synthetic training data before the ML techniques were applied. We conclude that multivariable data-driven damage models should correct for the sample selection bias that arises from a model transfer setting, as especially on the MBE large reductions are possible, amount to more than 30% error reduction. For a large model transfer (e.g., data from small typhoon to predict damages from an extreme typhoon), the ANN method seems to further improve the predictions compared to the RF method, probably because the method is better capable of extrapolation. These sample selection bias correction methods are especially important in reducing MBEs for the micromodels and lead to up to 50% reduction on MBE, compared to reductions up to 10% on the MAE. For macromodels the correction methods are shown to also reduce the MAEs, with a reduction up to 20%. Synthetic data points generated from the sample selection bias correction methods are shown to considerably improve the models for the MAE criteria, and more than half of the improvement is introduced by the synthetic data for the MAE metric. Future studies that correct for a sample selection bias should therefore consider extending the data set using synthetic data generation after the sample selection bias correction. 
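As a pointer for such future work, the sketch below outlines a copula-style resampling step of the kind described in the Synthetic Data Generation subsection (Kendall correlations, Cholesky factor, inverse transform to the empirical marginals). It is a simplified illustration rather than the study's implementation; it assumes the bias-correction weights have already been applied, for example by repeating highly weighted rows, and that the implied correlation matrix is positive definite.

```python
import numpy as np
from scipy import stats

def synthetic_resample(train, n_samples=5000, seed=0):
    """Draw synthetic rows with approximately the same marginal distributions
    and Kendall rank correlations as the (weighted) training data.
    `train` is a 2-D array of shape (n_observations, n_variables)."""
    rng = np.random.default_rng(seed)
    n_var = train.shape[1]

    # Kendall's tau matrix, converted to a Pearson-type correlation matrix.
    tau = np.eye(n_var)
    for i in range(n_var):
        for j in range(i + 1, n_var):
            tau[i, j] = tau[j, i] = stats.kendalltau(train[:, i], train[:, j])[0]
    corr = np.sin(np.pi * tau / 2.0)

    # Correlated standard-normal samples via the Cholesky factor P (P P^T = corr).
    P = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_samples, n_var)) @ P.T

    # Inverse transform each column back to its empirical distribution.
    u = stats.norm.cdf(z)
    return np.column_stack([np.quantile(train[:, k], u[:, k]) for k in range(n_var)])
```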
This study shows that in the future, data-driven damage models should consider sample selection bias correction methods when a model transfer is required. This helps to reduce the MBE and to better predict outlier observations. To correctly predict these outlier cases synthetic data generation or similar techniques can be used. In transfer cases where the simulation of extreme values beyond the observational data is required, ML techniques should be considered that can allow extrapolation, such as ANN in this study. Further research could help establish a reliable impact-based forecasting system based on datadriven multivariable models. This system would be of great help for several sectors, ranging from insurance industry to humanitarian aid organizations. The insurance industry can apply this model to estimate risk premiums. Humanitarian organizations can use data-driven predictions to prioritize faster and better their preparation and aid distribution process in the early warning /early action phase, and after a disaster strikes. ACKNOWLEDGMENTS We would like to thank Ferdinand Diermanse (Deltares) for his help with the synthetic data generation from statistical models. Also, we are grateful to Aklilu Teklesadik (510) and Jannis Visser (510) for their role in the data collating for the case study in the Philippines. Furthermore, we would like to thank Jonathan Nuttall (Deltares), Hans Korving (Deltares), and Danish Shrestha for their help with the ML methods. This research was supported by the Dutch ministry of economic affairs and climate for their funding through the Deltares strategic research programs: Flood Risk Strategies, Enabling Technologies, and Consequences of Extreme Weather. This research was further supported by the European Union's Horizon 2020 research and innovation programme, through the IMPREX project (Grant Agreement no. 641811). The surveys to collect the empirical damage data in Germany were supported by the German Research Network Natural Disasters (German Ministry of Education and Research [BMBF], no. 01SFR9969/5), the MEDIS project (BMBF, no. 0330688), the project "Hochwasser 2013" (BMBF, 13N13017), and by a joint venture among the German Research Centre for Geosciences GFZ, the University of Potsdam, and the Deutsche Rückversicherung AG, Düsseldorf. This research also received support from the NWO-VICI (Grant no. 453-13-006).
2020-08-25T13:05:26.170Z
2020-08-24T00:00:00.000
{ "year": 2020, "sha1": "173ec9042f3cf5c0590cf2d19087c0e05370175e", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/risa.13575", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "684099be5e0361d0b8de0647a341824724122a57", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
246281222
pes2o/s2orc
v3-fos-license
A case–control study of BRCA1 founder mutations 185delAG and 5382insC in a cohort of Egyptian ovarian cancer patients using pyrosequencing technique Ovarian cancer (OC) is considered a leading cause of death among women with gynecological malignancies. OC, like breast cancer, shows a familial predisposition to germline mutations in genes BRCA1 or BRCA2, which have proved to play important roles in the incidence and progression of cancers. In Arab countries there are limited data concerning BRCA1 or BRCA2 founder mutations associated with familial ovarian cancer (FOC). Therefore, the aim of this pilot study was to assess two common founder mutations of BRCA1 (185delAG and 5382insC) in a cohort of Egyptian patients with FOC. The study included fifty female patients with FOC and twenty healthy controls. Clinical, laboratory, and pathological findings were assessed as well as response to therapy. Genetic testing for BRCA1 (185delAG and 5382insC) mutations was performed on peripheral blood samples using a short-fragment sequencer (pyrosequencer). The BRCA1 185delAG mutation was not observed in either the FOC patients or the controls. However, the carrier frequency of heterozygous BRCA1 5382insC mutation was 8%. All the FOC patients with a BRCA1 5382insC mutation had a positive family history of cancer (p = 0.009). All carriers of the BRCA1 5382insC mutation showed a preliminary good response to chemotherapy. The majority of carrier patients were diagnosed at an advanced stage of the disease with high-grade tumors and distant metastasis (75% of cases). The frequency of the BRCA1 5382insC mutation in FOC patients was 8%. The strong association between the mutation and the positive family history suggests that a wider screening for BRCA1 founder mutations would be valuable in predicting high-risk individuals. Background Among the most common gynecological malignancies, ovarian cancer (OC) ranks third after cervical cancer and uterine cancer. OC has a poor prognosis and a fatal outcome in those affected. The main reason for the high mortality rate in OC is due to the fact that in the early stages, most patients are asymptomatic or have non-specific symptoms. The silent spread of OC in conjunction with the lack of satisfactory screening tests mean that advanced, widespread disease often occurs before diagnosis [1,2]. The exact cause of familial ovarian cancer (FOC) is not well understood and may be attributed to genetic factors, environmental or lifestyle factors, or it may occur by chance [3,4]. It has been found that 8-15% of OC patients have germline mutations of the BRCA genes (BRCA1 and BRCA2) and this is considered as one of the underlying causes of FOC [5,6]. BRCA 1 and 2 are tumor suppressor genes that play an important role in DNA repair and the maintenance of chromosomal stability through homologous recombination (HR). Any change in the nucleotide sequence of either BRCA 1 or BRCA 2 genes results in loss of heterozygosity, genome instability, and an increased risk of malignancy, usually as OC or breast cancer (BC) [7][8][9]. There are three common germline founder mutations implicated in FOC. Two founder mutations are found in the BRCA1 gene (185delAG and 5382insC), and one founder mutation in the BRCA2 gene (BRCA2 6174delT) [8]. Both BRCA1 185delAG and BRCA1 5382insC founder mutations are frameshift mutations that result in truncated non-functioning proteins. 
In the case of the BRCA1 185delA mutation, there is a deletion of adenine and guanine at position 185 of exon 2 of the BRCA1 gene [10]. With the BRCA1 5382insC mutation, there is an insertion of a cytosine nucleotide at position 5382 of exon 20. The BRCA1 5382insC mutation originated in northern areas of Russia and spread to Ashkenazi Jews and other populations in Eastern Europe [11,12]. Previous studies investigated the frequency of those mutations using different techniques. BRCA1 185delAG was studied using duplex/multiplex-PCR [13], while the 5382insC mutation was detected by ARMS-based PCR [14]. Both studies were performed in Eastern India [13,14]. However, there are a lack of reports regarding the carrier frequency of BRCA1 founder mutations in the Middle East and Africa. A recent study done by Ashour and Ezzat Shafik [15] on 104 epithelial OC patients of different ancestries (61.54% of patients were of Arabic origin) revealed 21 pathogenic variants in 22 patients of Arabic and Asian origins. This study was done through sequencing the translational exons of BRCA 1and 2 and the immediately adjacent introns. The increased use of germline genetic testing in patients with OC or BC has had a significant effect on cancer care [16] because the presence of BRCA1 or BRCA2 mutations can alter the management of OC patients to involve, for example, targeted therapy with Poly (ADP-ribose) polymerase (PARP) inhibitors [17]. The traditional Sanger sequencing is relatively expensive compared to the short-fragment sequencer (pyrosequencer) [18]. The use of the cost-effective method of pyrosequencing in detecting point mutations has a great significance in determining mutation carriers, whether in patients or at-risk relatives. Consequently, the purpose of our work was to detect two well-known BRCA1 founder mutations: 185delAG and 5382insC in Egyptian females with FOC using a pyrosequencer. Patients Seventy females were enrolled in the present study. Fifty FOC patients were recruited from the inpatient and outpatient clinics of the oncology department during the period from June 2017 to June 2018. Twenty healthy, age-matched females with a negative family history were enrolled as controls. All patients signed a written, informed consent, and the study was approved by the local ethics committee. The study protocol was in agreement with the Declaration of Helsinki guidelines1975, as revised in 2000. Females who participated in the study were selected by having one or more of the following inclusion criteria: 1. A first-degree relative (mother, sister, or daughter) or second-degree relative who has ovarian, colorectal, or breast cancer. 2. A personal history of breast cancer before age 40 years. 3. A personal history of breast cancer diagnosed before age 50 years, and/or one or more relatives (first or second degree) who have breast cancer or ovarian cancer or both. Any patient diagnosed with OC only and without personal or reported family history was excluded. Data were collected for age, menstruation, marital status, parity, and family history of breast, ovarian, and colorectal cancer. A full clinical examination was done. Radiological investigations including ultrasonography examination and computed tomography (CT) of the abdomen and pelvis were done at the time of diagnosis in conjunction with routine laboratory investigations. 
Tumor markers as carbohydrate antigens (CA 125, CA 19.9, and CEA) and histopathological examination after definitive surgical procedure, tumor stage, and metastasis were extracted from the medical records of each patient. Mutation analysis DNA extraction For mutation detection, DNA extraction was done using a QIAamp Blood Mini Kit (catalog no.51104). DNA concentration and quality were assessed by a NanoDrop spectrophotometer (NanoDrop ND-2000, ThermoScientific, USA). Pyrosequencing Before running the pyrosequencer, it is necessary to confirm the presence of a single clear PCR product over the gel. Therefore, PCR amplicons were checked over 2% agarose gel electrophoresis; a band sized 72 bp was detected in BRCA1 5382insC, while in BRCA1 185delAG, the product band size was 80 bp (Fig. 1). Pyrosequencing was then performed using Qiagen Pyromark Q24 GOLD kit (Catalog No. 970802), Samples were loaded on a Pyro-Mark Q24 following the manufacturer's instructions. Briefly, pyrosequencing is based on hybridization of the sequencing primer to a single-stranded PCR-amplified DNA template. The template is incubated with enzymes and substrates that are supplied in the pyrosequencing kit. During sequencing, the four nucleotides [adenine (A), thymine (T), cytosine (C), guanine (G)] are added sequentially. If a nucleotide is complementary to a base in the template strand, it will be incorporated into a DNA strand by a polymerase enzyme. Each incorporation event is accompanied by release of pyrophosphate (PPi). ATP sulfurylase converts PPi to ATP in the presence of adenosine 5' phosphosulfate. This drives the conversion of luciferin to oxyluciferin by luciferase enzyme, generating a visible light in amounts proportional to the amount of ATP. Light is detected using charged coupled devices (CCDs) and seen as a peak. Internal controls were obtained by negative insertion of nucleotide dispensations. Analysis of the results was done by PyroMark Q24 software in the form of a pyrogram for each sample, where peak heights are proportional to the nucleotidesˈ numbers that are incorporated with each dispensation. For BRCA1 185delAG, the intensities of C and T peaks were approximately half of those in the wildtype internal controls, indicating a deletion of CT in one allele. While for the BRCA1 5382insC mutation, incorporation of an extra C was present. Data were fed to the computer and analyzed using a Graphic PAD Prism software package version 6.0. Qualitative data were described using numbers and percentages. Quantitative data were described by the mean and standard deviation. Chi-square test, Fisher's exact test, or Monte Carlo correction were used to compare between the carrier and non-carrier groups. Student t test was used for normally distributed quantitative variables and Mann Whitney test for abnormally distributed quantitative variables. P-values less than 0.05 were considered statistically significant. Results The mean age of FOC patients included in the study was 50.08 ± 14.56 years. Comparing FOC patients and controls revealed no significant difference in age (Additional file 1: Table S1). Pyrosequencing of the DNA samples of FOC patients and controls revealed the absence of either heterozygous or homozygous BRCA1 185delA mutations (Fig. 2). On the other hand, as Fig. 2 shows, heterozygous BRCA1 5382insC mutation was found in 4 out of 50 FOC patients with a carrier frequency of 8% (95% CI 2.2-19.2). However, BRCA1 5382insC mutation was not detected among the controls. 
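The reported carrier frequency of 8% (4 of 50 patients) with a 95% CI of 2.2-19.2% is consistent with an exact (Clopper-Pearson) binomial interval. The following is a minimal sketch of that calculation, assuming SciPy is available; the authors do not state which interval method was actually used.

from scipy.stats import beta

def exact_binomial_ci(successes, n, alpha=0.05):
    # Clopper-Pearson exact binomial confidence interval via the beta distribution.
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

lo, hi = exact_binomial_ci(4, 50)
print(f"carrier frequency: {4 / 50:.1%}, 95% CI: {lo:.1%}-{hi:.1%}")
# prints approximately: carrier frequency: 8.0%, 95% CI: 2.2%-19.2%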
Demographic and clinical characteristics of BRCA1 5382insC mutation in carrier patients are summarized in Table 1. According to the presence of heterozygous BRCA1 5382insC, the FOC patients were further divided into two subgroups: carriers and non-carriers. The mean age of carriers was 48.25 years, and for noncarriers it was 56.50 years. Histopathological examination showed papillary serous invasive carcinoma in most of the FOC patients in the present study (44%): 45% of noncarriers and 25% of carriers (Additional file 1: Table S2). For the most part, OC was diagnosed in advanced stages; 60.9% of non-carriers and 75% of carriers were stage III-IV. High-grade tumors (G 3) were manifested in both groups. The difference between carriers and non-carriers regarding cancer stage, grade, or metastasis was not significant (p = 0.421, p = 1.000, and p = 0.14 respectively) (Additional file 1: Tables S3 and S4). In addition, Additional file 1: Table S5 shows that the difference between carriers and non-carriers concerning the three measured tumor markers (CA125, CA19.9, and CEA) was not significant (p = 0.174, p = 0.508, and p = 0.844 respectively). As Table 2 shows, there was no significant difference among the FOC subgroups regarding marital status, parity, and menopause. However, all carriers (100%) had at least one first-degree relative affected by cancer (OC, BC, or colon cancer) with a significant difference between both carriers and non-carriers (p = 0.009). A pedigree of the carriers' families is demonstrated in Fig. 3. With regard to the response to platinum-taxane chemotherapy among 46 non-mutation/non-carrier patients, twenty had a preliminary good response to chemotherapy, while all mutation/carrier patients showed a good response ( Table 2). Discussion OC is a worldwide gynecological malignancy. Although it is less common than BC, its importance comes from the occult nature of its spread leading to late diagnosis and poor prognosis. The 5-year survival rate is estimated to be approximately 30% in advanced stages [19]. The current study aimed to detect two common BRCA1 founder mutations (185delAG and 5382insC) to assess the carrier frequency in a cohort of Egyptian females with FOC using the pyrosequencing technique. Heterozygous BRCA1 5382insC mutation was found in 4 out of 50 FOC patients enrolled in the current study with a carrier frequency of 8%. This was within the previously reported range of 5-15% [20]. Synoweic et al. [21] reported BRCA1 5382insC mutation in 9.6% of Polish FOC patients. Additionally, Moslehi et al. [22] found a frequency of 6.7% in Ashkenazi Jewish patients. Some discrepancy in the frequency of BRCA1 5382insC mutation carriers has been found among different populations, and it seems to be affected by ethnicity, study design, and methods of detection. On the other hand, we did not find either heterozygous or homozygous BRCA1 185delAG, mutations among the FOC patients. Again, the frequency of the BRCA1 185delAG mutation is variable among different populations; a low carrier frequency of 1.1% was reported in Morocco [23], while a higher carrier frequency of 16.4% was found in India [24]. The main significant finding regarding the BRCA1 5382insC mutation in our study was the strong association with the history of cancer in at least one first-degree relative, particularly breast, ovarian, or colon cancer (100% in carriers vs. 84.8% in non-carriers; p = 0.009). 
This issue could be explained by the fact that germline mutations of BRCA1 pass through an autosomal dominant pattern during familial aggregation [22,25,26]. The strong association between a family history of OC and/ or BC in first-degree relatives and BRCA1 founder mutation highlights the importance of screening for BRCA mutations in high-risk families in order to provide proper genetic counseling and prophylactic management such as salpingo-oophorectomy, which may reduce the risk of developing OC by 90% [27,28]. Also in the present study, the 4 BRCA1 5382insC mutation carriers showed a preliminary good response to chemotherapy. Many studies showed BRCA1 mutation carriers had a better response to chemotherapy and prognosis than non-carriers [29][30][31], while several studies found no significant difference between carriers and non-carriers regarding the chemotherapeutic response [25]. BRCA mutations enhance the DNA damaging effect of chemotherapeutic agents such as platinum, causing an increase in platinum sensitivity, which makes it the cornerstone of the initial regimen of treatment in such cases. This can be explained by it impairing the DNA repair mechanism of mutated cells, so they remain susceptible to damage by cytotoxic drugs [29]. However, platinum resistance may occur in some BRCA1 mutation carriers and results in poor prognosis and/or disease recurrence. In those patients, a shift to alternative drugs such as PARP inhibitors might be valuable [32,33]. Intracellular PARP1 and PARP 2 enzymes play a crucial role in DNA repair mechanisms. Inhibition of such enzymes by PARP inhibitors leads to the accumulation of single-stranded DNA breaks and impaired DNA repair mechanisms with subsequent cytotoxicity and enhanced apoptotic effect [34]. The age of onset and tumor histopathology stage were also compared between carrier and non-carrier patients. It was observed that BRCA1 5382insC mutation was associated with advanced-stage, high-grade OC tumors and evident metastasis in 75% of enrolled patients. However, the small sample size rendered this association insignificant. Overall, despite good facilities for diagnosis and management, there are no adequate data on the prevalence of BRCA1 germline mutations in Arab women with FOC; this is in addition to insufficient research on the underlying genetic mechanisms of OC and its familial aggregation. Based on the literature review, we can claim that the current study is the first in Egypt to apply screening for BRCA1 founder mutations in FOC patients using the pyrosequencing technique. Pyrosequencing is characterized by its high accuracy to pick up single nucleotide changes in short DNA fragments. Furthermore, uncomplicated automation in comparison with other methods, along with its reasonable cost, renders it more accessible for predicting disease outcome and high-risk individuals [35,36]. Conclusion The frequency of BRCA1 5382insC mutation in a cohort of Egyptian FOC patients was 8%. There was a strong association between family history of HBOCrelated tumors and BRCA1 mutations. Moreover, all BRCA1 5382insC mutation carriers showed a preliminary good response to chemotherapy. Therefore, screening for BRCA1 5382insC mutation is valuable for the prevention of OC and for offering appropriate genetic counseling to high-risk families. 
A large-scale follow-up study with an expanded screening panel of BRCA mutations including those that have been recently recognized in Arabic patients [15] is highly recommended to assess the underlying molecular mechanisms.
2022-01-26T14:38:32.793Z
2022-01-26T00:00:00.000
{ "year": 2022, "sha1": "f2ef110bd9cf43323fe27ca00cdb575e4e12881c", "oa_license": "CCBY", "oa_url": "https://jmhg.springeropen.com/track/pdf/10.1186/s43042-022-00226-8", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "f2ef110bd9cf43323fe27ca00cdb575e4e12881c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
2317081
pes2o/s2orc
v3-fos-license
Severity of Psoriasis Differs Between Men and Women: A Study of the Clinical Outcome Measure Psoriasis Area and Severity Index (PASI) in 5438 Swedish Register Patients Background Psoriasis is a common skin disease and moderate to severe psoriasis is associated with a dose-dependent risk for metabolic and cardiovascular morbidity. It has previously been speculated that women have less severe psoriasis, as men are overrepresented in psoriasis registers and consume more care. Objective The objective of this study was to investigate, for the first time, the sex differences in the severity of psoriasis using the gold standard of severity measurement, the Psoriasis Area and Severity Index (PASI), and the distinct elements of the PASI score. Design, Setting and Participants This was a cross-sectional study based on the national registry for systemic treatment of psoriasis in Sweden (PsoReg), with 5438 patients experiencing moderate to severe psoriasis. Differences in the PASI score and its elements at enrolment were tested by multivariable ordinal logistic regressions. Main Outcome Measures The different components of the PASI score were used to analyze the assessment of disease severity. For each body area (head, arms, trunk, and legs), the score of the plaque characteristics and degree of skin involvement were used as outcomes. Results Women had statistically significantly lower median PASI scores (5.4) than men (7.3) [p < 0.001], which was consistent across all ages. The difference remained statistically significant in a multivariable linear regression. The itemized PASI analyses from the Mann–Whitney–Wilcoxon tests and the adjusted ordinal logistic regressions confirmed that women had significantly lower scores than men in all areas of the body, except for the head. No differences in the use of medications prior to enrolment could be found that may cause this difference between the sexes. Conclusions As the PsoReg contains the detailed disease measurement PASI, which was traditionally used for selected participants in clinical studies only, a nationwide unselected population could be investigated. The fact that women have less severe psoriasis can explain the dominance of males in the systemic treatment of psoriasis. These findings motivate a gender perspective in the management of psoriasis and in the prevention and management of its comorbidities. 
Introduction Psoriasis is one of the most common skin diseases, with a prevalence of approximately 2-4% in the Western world [1]. It is known that moderate to severe psoriasis is associated with a dose-dependent risk of cardiovascular morbidity, metabolic syndrome, and depression [2][3][4]. The prevalence of psoriasis is considered to be balanced between the sexes [5,6]; however, several studies have indicated sex differences in the treatment of psoriasis. In 1945, Romanus showed that a population of 550 Swedish patients had statistically significant differences in the severity of psoriasis (p \ 0.001) depending on their sex. The highest severity in his categorization ''continues symptoms and recommended hospitalization'' was fulfilled by 38.4% of men but only 28% of women [7]. It has also been shown that women receive topical treatments to a greater extent than men [8], while men are more likely to receive systemic treatment [9,10]. Furthermore, men are more often treated by dermatologists, whereas women are treated by general practitioners or treat themselves at home [11][12][13], which has led to the hypothesis that systemic psoriasis treatment for women of fertile age may be avoided [13,14]. However, we have previously shown that when adjusting for disease severity, among other factors, sex was not a decisive factor in the initiation of biological treatments, which have high acquisition costs [15]. The gold standard for assessing the severity of psoriasis is the Psoriasis Area and Severity Index (PASI), which combines the assessment of the severity of lesions and the extent of the affected area in a single index score [16]. We have previously observed that the PASI score was a more important factor than the Dermatology Life Quality Index (DLQI) in the decision to initiate biologic treatment, and that women tended to have lower PASI scores than men [17]. Therefore, the observed treatment differences between the sexes might be due to less severe disease in women. Consequently, we performed this study to test the hypothesis that, at enrolment in the PsoReg, severe psoriasis is less common in women. By analyzing the distinct elements of the PASI score, we also wanted to investigate whether any pattern could be discerned between men and women in the different body parts and/or dimensions of PASI measures. Study Population Data were retrieved in May 2016, and, in total, 5438 patients with moderate to severe psoriasis were registered in the PsoReg at this time. PsoReg, initiated in 2006, is the national registry for systemic treatment of psoriasis in Sweden. Patients are registered at local, regional, and university hospitals, as well as in private practices and at treatment centers initiated by the Swedish Psoriasis Patient Organization (PSO) [18]. To be included in the PsoReg, patients need to be diagnosed with moderate to severe psoriasis and treated with, or considered for, systemic treatment by a specialist in dermatology. Thus, patients with mild psoriasis who are treated in primary care are not eligible for inclusion in the register. Nationwide, 65% of all biologically treated psoriasis patients and approximately 45% of all systemically treated psoriasis patients are estimated to be included in the PsoReg. Definition of Variables PASI is a combined score consisting of several dimensions of psoriasis, and is based on four areas: head, arms, trunk, and legs. 
Furthermore, for each body area, three plaque characteristics are assessed by the degree of erythema (redness), induration (thickness), and desquamation (scaling). The scores of the clinical signs in each area are summed and are finally weighted according to the area's proportion of the body, before being converted to the final score, which ranges from 0 to a theoretical maximum of 72. For each patient, information on sex, age, body mass index (BMI), disease duration, diagnosis of psoriatic arthritis (PsA), smoking status, PASI score, and the season in which assessment of the PASI score was carried out was retrieved from the PsoReg. Statistical Analysis Patient characteristics were analyzed at enrolment to examine differences between women and men. Continuous variables (age, disease duration, and BMI) were analyzed using the Student's t test, while categorical variables (smoking, PsA, obesity, and season of the PASI evaluation) were tested using the Chi square test. The difference in the aggregated PASI score was analyzed using a Mann-Whitney-Wilcoxon test and a multivariable linear regression. At enrolment, a kernel-smoothing estimation was used to plot the difference in PASI score for both men and women, according to age at enrolment. The differences in the independent PASI components between women and men were first investigated using the Mann-Whitney-Wilcoxon test separately for the different assessments: degree of involvement, erythema, induration and desquamation, and within each body region (head, trunk, arms, and legs). To be able to adjust for potential confounders and effect modifiers, a multiple linear regression was used to analyze the weighted aggregated PASI score, while multiple ordinal logistic regressions were used to analyze the different components of the PASI assessment. For each body area, four different regressions were fitted, where the score of the plaque characteristics (erythema, induration, and desquamation) and degree of skin involvement were outcomes. Age (continuous), sex (dichotomous), BMI (continuous), disease duration (continuous), PsA (dichotomous), smoking status (dichotomous), and season (categorical) were included as independent variables. The season variable was categorized into four different periods: winter (December, January, February), spring (March, April and May), summer (June, July, August), and autumn (September, October, November). The assumption of proportionality in the ordinal logistic regressions was tested by estimating the odds ratios in logistic regressions to make sure that the estimates did not vary between the two methods. Furthermore, different thresholds were also tested for the various categories to ensure that the results were stable. To illustrate differences between sexes in the treatment of psoriasis on a national level, independent from the PsoReg, aggregated information on all dispensed prescriptions for the biologic treatment ustekinumab (Anatomical Therapeutic Chemical [ATC] code L04AC05) and the topical treatment calcipotriol (± corticosteroid; ATC codes D05AX02, D05AX52) was collected from the Prescribed Drug Register (PDR). Both substances are indicated for psoriasis and PsA, but not for other indications. All statistical analyses were performed using SAS version 9.3 (SAS Institute, Inc., Cary, NC, USA). A p value ≤ 0.05 was considered to be statistically significant. Ethics This research was approved by the Umeå Ethical Review Board, and patients were recruited into the study after informed consent had been obtained. 
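As a concrete illustration of the PASI calculation described above, the following is a minimal sketch using the conventional published formula (regional weights of 0.1, 0.2, 0.3 and 0.4 for head, arms, trunk and legs, a 0-6 area grade per region, and 0-4 grades for each plaque sign); these conventions are the standard PASI definition and are not taken from the PsoReg software.

REGION_WEIGHTS = {"head": 0.1, "arms": 0.2, "trunk": 0.3, "legs": 0.4}

def pasi(scores):
    """scores: {region: (area_grade, erythema, induration, desquamation)}
    Each region contributes weight * area grade * (sum of the three sign grades)."""
    total = 0.0
    for region, (area, ery, ind, des) in scores.items():
        total += REGION_WEIGHTS[region] * area * (ery + ind + des)
    return total

example = {
    "head":  (2, 2, 1, 1),
    "arms":  (3, 2, 2, 2),
    "trunk": (2, 1, 1, 2),
    "legs":  (4, 3, 2, 3),
}
print(pasi(example))                                     # 0.8 + 3.6 + 2.4 + 12.8 = 19.6
print(pasi({r: (6, 4, 4, 4) for r in REGION_WEIGHTS}))   # theoretical maximum: 72.0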
Results This study included 3252 (59.8%) men and 2186 (40.2%) women. For a subset of subjects (n = 3125; 57.5%), information on dispensed medications was available for analysis. At enrolment, women had significantly lower (p < 0.001) median PASI (5.4, interquartile range [IQR] 2.7-9.9) than men (7.5, IQR 3.6-12.2). Furthermore, women were older (p < 0.001), included a higher proportion of smokers (p = 0.003), and had a higher extent of PsA (p = 0.015) compared with men. BMI was higher in the male group (p = 0.003), but there was no significant difference in the proportion of patients with obesity (BMI > 30) [p = 0.896]. Disease duration was longer for women compared with men (p = 0.002), and no difference was found between men and women when comparing the time of year the PASI measurements were conducted (p = 0.315) (Table 1). The kernel-smoothing plot shows that, in the PsoReg, women had a lower PASI score at enrolment compared with men, irrespective of age at registration. Both men and women had a declining trend in PASI throughout the age ranges (Fig. 1). In the multiple linear regression analysis with PASI as the dependent variable, after adjusting for age, BMI, disease duration, PsA, smoking status, and season, sex was a significant explanatory variable (p < 0.001), with women having a lower PASI score than men. Dimensions in Each Body Area The difference between men's and women's PASI score elements at enrolment was tested using the Mann-Whitney-Wilcoxon test, and using the plaque characteristics (erythema, induration, and desquamation) and degree of skin involvement in each body region. Results showed that women had a significantly lower PASI score in all components in each body region, except for the characteristics 'head area', 'head induration', and 'head desquamation', where the scores were almost identical for women and men (Fig. 2). The aggregated weighted PASI sums for each area (head, trunk, arms, and legs) were used as outcomes in the multiple ordinal regressions, with the same result as in the analysis of each element for the body regions described above. Ordinal Logistic Regressions The assessment of plaque characteristics and the degree of skin involvement for each body region were used as the outcome in the multiple ordinal logistic regressions. The models were adjusted by age, BMI, smoking, season, and PsA, and the results were consistent with those of the Mann-Whitney-Wilcoxon tests. Significant differences were found for all plaque characteristics and the degree of skin involvement in the arms, legs and trunk body regions, where women had lower scores. No differences could be identified with respect to the head. The largest differences between men and women were found in desquamation (odds ratios for men vs. women are shown in Fig. 3). To compare the use of specific antipsoriatic drugs, independent from the PsoReg, the number of patients nationwide prescribed one of the two commonly used psoriasis-specific treatments, ustekinumab or calcipotriol (with or without corticosteroid), was investigated in the PDR. Discussion We observed that women have less severe psoriasis compared with men, after controlling for several possible confounders. By analyzing the separate PASI elements, we further confirmed that the lower PASI score in women is consistent and is not a result of chance variations. In a multivariable linear regression, controlling for several other factors, this difference remained statistically significant. 
The difference in all but one of the distinct elements of the PASI score was also significant, both in unadjusted comparisons of medians and in the ordinal logistic regression models adjusted by age, BMI, disease duration, concomitant PsA, smoking status, and season. However, with respect to the head, women's and men's PASI scores were equal (Figs. 2, 3). This is probably a consequence of sex- and gender-specific differences in hair growth, care, and styling, with women both (1) 'shielding' their scalp from the beneficial effect of sunlight on psoriasis [19,20], and (2) provoking their head psoriasis via the Koebner reaction [21], to such an extent that it falls into a similar range as males. Previous studies have shown that specialist treatment in psoriasis care is unbalanced, with men being more likely to undergo specialist treatment than women [9,13]. In the PsoReg, approximately 60% of registered patients are men. Most other European Registries for systemic psoriasis treatment show an even larger dominance of registered men: Denmark 66%, Germany 60%, Italy 67%, The Netherlands 68%, and Spain 63% [22]. We employed the PDR to evaluate the nationwide prescription of two psoriasis-specific treatments, independent from the PsoReg. Both the biologic ustekinumab and topical treatments with calcipotriol were, to a larger extent, prescribed to men (55-60%). The PASI score, at enrolment in the PsoReg, shows that women have significantly lower disease activity compared with men across all ages, including women of both fertile and non-fertile age (Fig. 1), making it unlikely that a restrictive prescription of systemic psoriasis treatment for women of fertile age is a relevant factor. Other differences between men and women that were shown at enrolment are merely a reflection of demographic factors that can be observed in the general population, and indicate a representative enrolment in the PsoReg. On average, men have a higher BMI than women, while a higher proportion of women smoke and suffer from joint disease. In a descriptive study from Ireland, it was observed that twice as many men (n = 93) received systemic treatment compared with women (n = 53) [p = 0.015], and that women had less severe psoriasis (experienced by the clinicians) compared with men [10]. In a retrospective study from Japan, Sakai et al. explored the prognostic factors for the long-term outcome of plaque psoriasis patients (109 men and 60 women) in a logistic regression, and found that men developed more severe psoriasis according to the Dermatology Index of Disease Severity during follow-up compared with women (p = 0.046), after adjusting for age and BMI [23].
Fig. 2 The unweighted mean PASI scores are shown in the spider chart; * indicates statistical significance in the Mann-Whitney-Wilcoxon test at the 5% significance level. PASI, Psoriasis Area and Severity Index.
Furthermore, in a questionnaire survey based on 5739 patients, Zachariae et al. showed that men received more systemic treatment in the form of retinoids, calcipotriol, and psoralen plus ultraviolet A (PUVA) as well as non-PUVA phototherapy than women, after adjusting for a self-assessed disease severity measurement [11]. These findings correspond with our results as a majority of patients registered in the national registry for systemic treatment of psoriasis in Sweden (PsoReg) were men (59.1%) and had more severe psoriasis than women. 
In several other studies, it has been shown that women received less (systemic) psoriasis treatment compared with men; however, as these studies did not consider disease severity [9,13], different interpretations were made. Previous attempts to take disease severity into consideration when analyzing the consumption of care have been carried out by either questionnaires or the critical assessment tool Dermatology Index of Disease Severity [24]. The objective disease measurement PASI is traditionally used in randomized clinical trials only [25]. The registry-based integration of the objective disease measurement PASI enabled a detailed analysis of disease activity in a nationwide unselected population. As such, this study showed sex differences in the severity of psoriasis with a high level of detail. PsoReg-independent analyses of the PDR corroborated these findings. Limitations The generalizability of this study is limited to patients with moderate to severe psoriasis in need of systemic treatment managed by dermatology specialists. The PASI is the gold standard for assessing the severity of psoriasis. The assessment is an objective description of disease severity, and PASI has good inter- and intra-rater reliability [26,27]. There is a theoretical possibility that the observed differences between the sexes with regard to disease severity are due to the selective recruitment of patients to PsoReg. For this to be true, dermatologists would either be more inclined to register men with high disease activity than women with high disease activity, or, conversely, women with a low PASI score would be more likely to be registered in PsoReg than men with a low PASI score.
Fig. 3 The odds ratios from the ordinal logistic regressions, adjusted by age, body mass index, disease duration, psoriatic arthritis, smoking status and season, with women as reference (= 1.0).
However, we have not looked at other dimensions of psoriasis severity, such as the DLQI. Considering this, if women are more inclined to be enrolled in the PsoReg due to a high DLQI score, while having a lower PASI score, this could theoretically explain some of the discrepancies between the severity of sexes shown in this study. However, one would still wonder why women with a high PASI score did not find their way into the register. Taken together, these scenarios are not likely, especially when considering (1) the dominance of men registered in the registries for systemic psoriasis treatment all over Europe; (2) previous descriptions of higher consumption of systemic psoriasis treatment by men [9,10]; and, finally, (3) the PsoReg-independent dominance of men in the PDR for both psoriasis-specific treatments (calcipotriol and ustekinumab) demonstrated here. Conclusions For more than 70 years it has been speculated that women have less severe psoriasis compared with men. By investigating, for the first time, the sex differences in the severity of psoriasis using the gold standard of severity measurement, the PASI score, and the distinct elements of the PASI score, we were able to corroborate this thesis in a nationwide population. Further research is needed to substantiate this finding in different populations.
2017-11-27T03:57:00.870Z
2017-03-24T00:00:00.000
{ "year": 2017, "sha1": "8248a62423c48033adeb506ded2a46fcb7d57eb0", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc5506504?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8248a62423c48033adeb506ded2a46fcb7d57eb0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
19776783
pes2o/s2orc
v3-fos-license
Incidence of tuberculosis in children in the state of São Paulo, Brazil, under a spatial approach The aim of this study was to identify spatial patterns in the incidence of childhood tuberculosis in cities in the state of São Paulo. An ecological and exploratory study was carried out with data on new cases of tuberculosis in children 0 to 14 years old for the periods 2001 to 2005 and 2006 to 2010, obtained from DATASUS. Population data for this age group were collected and rates per 100 000 inhabitants were calculated. Moran's index (I) was calculated for both periods. Thematic maps with the rates and their difference, as well as Moran maps and maps of Kernel densities, educational level and income, were constructed using TerraView software. The average rates were 3.23 / 100 000 inhabitants in the first period (2881 cases reported) and 2.13 / 100 000 inhabitants in the second period (2513 cases reported); the Moran index in the first period was I = 0.03 (p = 0.16) and I = 0.06 (p = 0.01) in the second period; the thematic map identified 462 municipalities with higher rates in the second period; the kernel map identified higher densities of rates in the metropolitan region of São Paulo, coastal cities and the west of the state in the first period, and in the metropolitan region of São Paulo and coastal cities in the second period. The data presented in this study provide information to local and regional managers to implement policies for tuberculosis control. Introduction Tuberculosis (TB) is a chronic infectious disease that has long affected humanity, and remains a serious public health problem to this day. With the development of new and potent chemotherapy in the 1960s it was thought that the disease would be effectively controlled 1 . Nowadays, two billion people (one third of the world population) are infected with M. tuberculosis. Of these, eight million will develop the disease and two million die each year 2 . Brazil is among the 22 countries responsible for 82% of TB cases in the world, being the thirteenth in absolute numbers and comprising 35% of the cases reported in the Americas region 3 . Brazil has an annual incidence of 43 cases per 100 000 inhabitants (85 000 new cases/year), an incidence rate of the positive pulmonary form of 26/100 000 (49 000 new cases/year) and a mortality rate of 2.6/100 000 inhabitants (5000 deaths/year) according to WHO estimates 3 . In the Southeast region about 33 000 cases were reported in 2006; the State of São Paulo had the highest number of cases in the same year, with about 15 000 new cases 4 . Five hundred and thirty thousand new TB cases in children up to 15 years old were estimated by WHO in 2012 worldwide, equivalent to 6% of all cases, with 174,000 deaths from this disease 5 . In Brazil, 15% of notified TB cases occur in children under 15 years old 6,7 . The number of cases of TB in children is directly related to the prevalence of the disease in adults, reflecting continued transmission in the community. Thus, the presence of the disease in children should be seen as a sentinel event for public health, because it refers to a recent infection due to contact with contagious adults 8 . Georeferencing of health events is very important for analysis and evaluation of risks to public health, and the use of thematic maps can explore local and regional determinants of certain events, establish associations between these events and determinants, and evaluate interventions [9][10][11] . 
The aim of this study was to identify the spatial pattern of incidence of tuberculosis in children in the municipalities in the state of São Paulo in two periods. Methodology An ecological and exploratory study was carried out with data on the incidence rate of TB in children 0-14 years old in the municipalities in the state of São Paulo. These data were obtained from the Datasus website covering the period from 2001 to 2010, and were divided into two periods: 2001-2005 and 2006-2010. Population data of this age group were collected and incidence rates of childhood TB cases per 100,000 inhabitants were calculated. The digital base of municipalities was obtained from the Brazilian Institute of Geography and Statistics (IBGE) 12 . Moran's index, with corresponding p-value, was calculated; this index calculates the spatial correlation of the rates obtained and its values range from -1 to +1. Values near zero indicate absence of spatial autocorrelation, that is, the events are distributed at random. The closer to +1, the greater the similarity between neighbors, while negative values mean that neighbors are dissimilar 13 . The difference in incidence rates of childhood TB was obtained by subtracting the data of the first period, 2001-2005, from the second period values, 2006 to 2010. Mean values for each period were compared using Student's t test. Values of the population aged 15 or older who have a high educational level were used to create rates that indicate the proportion of the population with this educational level; the proportion of people with a household income of less than half the minimum wage was also analyzed. These data were obtained from the Datasus website. A Kernel estimator created the density map of the incidence rates of TB in children 0-14 years old. The method is based on calculating the density of cases (number of cases per area), producing a continuous surface in which areas closer to cases have a higher estimated risk; the denominator of the rate, the density of people (inhabitants per area, or population density), is estimated in another layer, also as a continuous surface 13 . Kernel density maps, with incidence rates of TB in children in both periods, were built with 150 columns, quartic function, density calculation and adaptive radius. Both maps were categorized into density levels that vary in color and tone. Thematic maps were constructed with the incidence rates of TB in children; with differences in incidence rates, showing where there was a worsening of the incidence rates of TB in children; and with rates of education and income. A Moran map identifying municipalities that deserve more attention from administrators was also constructed. TerraView 4.2.2 software released by INPE was used. Alpha = 5% was the level of significance. Results Two thousand eight hundred eighty-one new cases of childhood TB were identified in the period 2001-2005, in the cities of São Paulo State; these data represent an average incidence rate of 3.23 cases/100,000 inhabitants (SD = 6.56, ranging between 0.00 and 67.70); 2513 new cases were identified in the period 2006-2010, in 645 municipalities of São Paulo; these data represent an average incidence of 2.13 cases (standard deviation = 3.69, ranging between 0.00 and 22.95)/100,000 inhabitants. Comparing these rates, they are significantly different (p-value < 0.01). 
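For reference, the global Moran's I described in the methods can be computed directly from the municipal rates and a binary contiguity (neighborhood) matrix. The sketch below uses the standard formula with a toy neighborhood structure of four hypothetical municipalities; it is not the TerraView implementation used in the study.

import numpy as np

def morans_i(rates, W):
    # Global Moran's I: I = (n / S0) * sum_ij w_ij z_i z_j / sum_i z_i^2, with z = x - mean(x).
    x = np.asarray(rates, dtype=float)
    W = np.asarray(W, dtype=float)
    z = x - x.mean()
    n = x.size
    s0 = W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# Toy example: four hypothetical municipalities arranged on a line, each adjacent to the next.
rates = [0.0, 1.0, 4.0, 5.0]
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(morans_i(rates, W))   # positive value: neighboring municipalities have similar rates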
Thematic maps of the rates in the first and second period are in Figure 1A (2001 to 2005) and 1B (2006-2010), respectively.In the first period, Figure 1A, municipalities located primarily in the Paraíba Valley, coastal cities, São Paulo metropolitan region, Central region of the state and far west region, where rates of childhood TB are high, can be observed.The Northwest region had lower rates.In the second period, Figure 1B, there was little change, and it is possible to observe a decrease in the incidence rates of childhood TB in regions of the Paraíba Valley, coastal cities, São Paulo Metropolitan Region, Central Region and the far west, but there was an increase in these rates in the northern state, on the border with the state of Minas Gerais. The difference in incidence rates of childhood TB, subtracting the data of the first period of the values in the second period, identified cities where there was an increase in these rates.This difference in rates did not show any significant spatial autocorrelation because Im = -0.01(p-value = 0.28).Large number of municipalities (462) showed an increase in rates, but those in the metropolitan region of São Paulo, municipalities that border the Dutra Highway and coastal cities (183) decreased (Figure 2). The Kernel maps in Figure 3A The proportion values of the population aged 15 or older who presents high education level were placed on a map (map not shown); in both periods cities with higher rates and higher educational level are located in the Metropolitan Region of São Paulo, coastal cities, Paraíba Valley and upstate, following the edges of Dutra, Anhanguera and Bandeirantes highways; these cities have over 30% of the population with high educational level. In the first period, largely on the state of São Paulo there were more than 50% of municipalities with household income up to half minimum wage.The metropolitan region of São Paulo, municipalities that border the Dutra Highway and coastal cities had better income, with up to 40% of the cities with low income according to the map (map not shown).In the second period, this region had 30% of cities with low income.The rest of the state had rates of 40% to more than 50% of the cities with household income up to half minimum wage (map not shown). Discussion This study identified spatial pattern for the distribution of rates of childhood TB in the state of São Paulo in individuals up to 14 years old; it was also possible to identify cities where there was an increase in these rates and also those cities where there was a decrease.This is the first study carried out in the state of São Paulo using the tools of spatial analysis with data on incidence of tuberculosis in children up to 14 years old.The disease was approached in two periods.This approach, mapping disease, has been an important tool in the field of public health with advances in analytical techniques that have been developed in recent years 11,14,15 . 
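A minimal sketch of the kernel-ratio idea behind these density maps follows; the study used TerraView's quartic kernel with an adaptive radius, whereas here a Gaussian kernel and invented coordinates are used purely for illustration of how a case-density surface is divided by a population-density surface to obtain a smoothed rate surface.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
population_xy = rng.uniform(0, 100, size=(2, 2000))   # hypothetical residence locations
case_idx = rng.choice(2000, size=60, replace=False)
case_xy = population_xy[:, case_idx]                  # hypothetical case locations

pop_kde = gaussian_kde(population_xy)
case_kde = gaussian_kde(case_xy)

# Evaluate both surfaces on a grid and take their ratio as a smoothed relative-risk surface.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.vstack([gx.ravel(), gy.ravel()])
risk = case_kde(grid) / np.maximum(pop_kde(grid), 1e-12)
print("highest smoothed relative risk on the grid:", float(risk.max()))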
The Emergency Plan for Tuberculosis Control was released in 1996, but there were difficulties in the decentralization process and for expanding basic network since the formalization of the program in 1999 The current National Tuberculosis Control Program (NTCP) was approved and placed on the agenda public policies in Brazil only in 2004 16,17 .Comparing the two periods there was a decrease in the incidence rates of TB in children from the first to the second period (3.23 for 2.13 cases/100 000 inhabitants), possibly due to the implementation of the current NTCP and strengthening the primary care through access to diagnosis and treatment, which may have contributed to the decline in incidence rates of TB in the state of São Paulo 9 . Rates found in this study for the first and second period are close to those found in the state of Minas Gerais (3.52 to 3.35 cases/100 000 inhabitants).On the other hand, these rates are lower than the rates found in the states of Rio de Janeiro (14,98-13.28cases/100 000 inhabitants) and Bahia (7.63 to 6.33 cases/100 000 inhabitants) 18 .In a study of 1996 data from the São Paulo Paraíba Valley, the cities showed an incidence in this age group of 10.4/100 000 inhabitants 18 . The thematic map with the difference between the rates according to the periods showed an increased incidence of childhood TB in 462 cities in the state in the second period, and lower rates occurred in 183 cities.Despite the decrease in the number of cases and also lower rates of TB incidence in the second period, there was an increase in this rate in most cities; however the decrease numerically exceeds the increase in incidence rates of TB in this age group with the greatest impact on reducing the rates that increase them. Nuclei of higher concentration of TB cases were identified by Kernel estimator, and are located in the metropolitan region of São Paulo, coastal cities, Western and Northern State.In the second period, with lower rates, there was a change in the concentrations of cases, as evidenced by the Kernel estimator, but remaining high in the metropolitan region of São Paulo and coastal cities, possibly because they are areas of great expansion and human settlement. In the first period, although Moran rate's presented absence of significant spatial autocorrelation, 10 cities were identified with a high priority for intervention by managers, and are located in the metropolitan region of São Paulo, coastal cities and two in the far west.In the second period there is positive spatial autocorrelation, the map showed 33 cities with high priority for intervention, located in the Metropolitan Region of São Paulo, coastal cities, two in the far west and two in the far north.Through this analysis, it was possible to identify clusters of cities that should be under investigation for decrease rates of TB in childhood. In this study the distribution of incidence rates and the Kernel density of the highest rates were in cities with better socioeconomic con-ditions in the state of São Paulo.Inequalities in housing, income distribution and access to education affect the disease in geographic areas characterized by poverty pockets inside these counties and these unjust differences place groups at a disadvantage in relation to the opportunity to be and stay healthy 19 . 
Schooling in people's lives reflected an access to knowledge and ability to understand disease prevention and prescribed therapy.The low income and low educational level configure a set of unfavorable socioeconomic conditions 20 .However, it is interesting to note that the regions which have higher incidence rates of tuberculosis in children do not coincide with areas of low income and low education in the state of São Paulo, but in places where there are better conditions and more access to education. This fact can be explained because to notify TB is compulsory; thus, in regions where there are better wage conditions there may be better access to health care.Whereas in regions with low incidence of TB in childhood, the disease is not being notified correctly.Unemployment, low education level and low income are factors that increase vulnerability to TB, so, that may hinder access to health services to obtain a correct diagnostic 4 .The fact that higher rates in those cities with better income and schooling exist would be due to improved care in health facilities that would count with more experienced technical personnel who "think" about tuberculosis, and with better infrastructure, could make more diagnosis of childhood TB. The data source used -National Disease Information Agency (SINAN) can be included as a possible limitation of this study because it may contain errors pointing out diagnosis, despite being an official, stable and reliable source and widely used in technical and scientific papers.Ecological studies do not have individual information on exposure and disease, and thus one cannot evaluate the comorbidities 11 .It was difficult to compare the findings of this study with others because there are few studies with the same way of approach on tuberculosis in children by using the state as an area study. The epidemiological situation of TB in children is another possible limitation.Difficulties with access to health services and diagnosis should also be considered.Underreporting of cases of TB in children may occur due to the difficulty to confirm the diagnosis of TB in this age group, since 80% of childhood cases are negative on sputum examination 8 .This study provided information about the spatial distribution of childhood TB new cases in the cities of São Paulo, and it identifies cities that should require intervention of municipal and state management through decentralization of TB for primary health care and increase coverage of family health strategy, since childhood TB refers to infection due to recent contact with contagious adults.If the Program for Tuberculosis Control does not diagnose and treat early tuberculosis in adults, it will not be reduced in children. Collaborations TS Venâncio, TS Tuan and LFC Nascimento participated equally in all stages of preparation of the article. Aknowledgments TS Venâncio thanks São Paulo Research Foundation -FAPESP for the scholarship grant.
Curcumin Stimulates the Overexpression of Virulence Factors in Salmonella enterica Serovar Typhimurium: In Vitro and Animal Model Studies Salmonella spp. is one of the most common food poisoning pathogens and the main cause of diarrheal diseases in humans in developing countries. The increased Salmonella resistance to antimicrobials has led to the search for new alternatives, including natural compounds such as curcumin, which has already demonstrated a bactericidal effect; however, in Gram-negatives, there is much controversy about this effect, as it is highly variable. In this study, we aimed to verify the antibacterial activity of curcumin against the Salmonella enterica serovar Typhimurium growth rate, virulence, and pathogenicity. The strain was exposed to 110, 220 or 330 µg/mL curcumin, and by complementary methods (spectrophotometric, pour plate and MTT assays), we determined its antibacterial activity. To elucidate whether curcumin regulates the expression of virulence genes, Salmonella invA, fliC and siiE genes were investigated by quantitative real-time reverse transcription (qRT-PCR). Furthermore, to explore the effect of curcumin on the pathogenesis process in vivo, a Caenorhabditis elegans infection model was employed. No antibacterial activity was observed, even at higher concentrations of curcumin. All concentrations of curcumin caused overgrowth (35–69%) and increased the pathogenicity of the bacterial strain through the overexpression of virulence factors. The latter coincided with a significant reduction in both the lifespan and survival time of C. elegans when fed with curcumin-treated bacteria. Our data provide relevant information that may support the selective antibacterial effects of curcumin to reconsider the indiscriminate use of this phytochemical, especially in outbreaks of pathogenic Gram-negative bacteria. Introduction Diarrheal disease is an important global health problem and is the third cause of child mortality [1]. Salmonella is one of the most frequent bacteria causing diarrheal diseases [2]. Salmonella enterica serotypes include numerous pathogens of warm-blooded animals, including humans. Salmonella enterica serovar Typhimurium (S. Typhimurium) has been considered the prototypical broad-host-range serotype. It is a frequent cause of acute self-limiting food-borne diarrhea in numerous species, including humans, livestock, domestic fowl, rodents, and birds [3]. Recently, nontyphoidal Salmonella variants were associated PCR Identification of Genes Encoding Specific Virulence Factors of S. Typhimurium Virulence factors are essential for the ability of bacteria to cause disease. Salmonella has the ability to survive long-term frozen storage; however, it has been reported that isolated virulent bacterial strains became avirulent during storage or passages. The above is probably due to the loss of virulence plasmids [26]. A polymerase chain reaction (PCR) method confirmed that our storage and growth conditions did not affect the presence of virulence factors from the bacterial strain. Figure 1 shows the presence of three genes of S. Typhimurium involved in epithelial cell adhesion and invasion (invA, fliC and siiE).
Curcumin Did Not Show an Antibacterial Effect To determine whether CUR kills S. Typhimurium and to investigate the effect of different environmental or growth conditions on bacteria cell survival, 10 7 CFU/mL was exposed to dimethyl sulfoxide (DMSO) or CUR for 2 h, and the effect was evaluated by spectrophotometric (OD600), pour plate and MTT assays. The results, presented in Figure 2, indicate that treatment with 110, 220 and 330 µg/mL of CUR did not inhibit bacterial growth. After 4 h of incubation, curcumin provoked a significant dose-dependent growth stimulation. Untreated and DMSO-treated Salmonella strains reached the exponential growth phase with similar growth rates (Figure 2A). After 12 h of incubation, Salmonella maintained a significant growth increase in the presence of CUR (DMSO 3.6 ± 0.09 SD vs 110 µg/mL 3.8 ± 0.05 SD, 220 µg/mL 4.6 ± 0.06 SD and 330 µg/mL 4.9 ± 0.04 SD) (Figure 2B). By the pour plate method, there was an increment in the number of colonies with CUR following a dose-response profile (Table 1, Figure 3A). The percentage of overgrowth in S. Typhimurium increased significantly from 35% to 57% (Figure 3B). These results are consistent with the MTT assay, where formazan cell viability/crystal formation increased with curcumin concentration. When formazan was solubilized, the absorbance at 550 nm of curcumin-treated cultures was higher than that of the negative controls, maintaining the dose-dependent profile (DMSO 0.1925 ± 0.004 vs 110 µg/mL 0.2095 ± 0.01, 220 µg/mL 0.2297 ± 0.04, 330 µg/mL 0.2758 ± 0.018) (Figure 4). Virulence Factors Are Upregulated by Curcumin It has been demonstrated that CUR attenuates the virulence of pathogens by downregulation of the transcription of virulence genes [27,28]. We measured fliC, siiE and invA expression levels by relative-quantitative RT-PCR to determine whether CUR directly affected bacterial virulence. All genes showed significant gene expression changes in response to CUR treatment. In S. Typhimurium, treatment with CUR provoked an increase in mRNA expression for siiE, invA and fliC. Among the three genes, siiE had the highest range of mRNA expression (18- and 28-fold) with the concentrations of 220 and 330 µg/mL of CUR, respectively (Figure 5A), while invA had the smallest range (0.4- and 0.9-fold) with the same doses (Figure 5B). Finally, fliC also had significantly increased mRNA expression (5- and 10-fold) with 220 and 330 µg/mL of CUR, respectively (Figure 5C).
Curcumin Enhanced the Pathogenicity of S. Typhimurium in C. elegans Earlier reports have shown that S. Typhimurium can kill C. elegans [24,29]. To validate nematode survival in the presence of this pathogenic strain, 8 × 10 8 cells/mL were used as bacterial food for C. elegans, and nematode survival was evaluated. As a negative control, nematodes fed with E. coli OP50 were used. The results obtained show that the mean and maximum life expectancy of nematodes fed with the pathogenic strain decreased significantly with respect to worms fed with the OP50 strain (negative control) (Figure 6A). To validate whether CUR increased the virulence of Salmonella, 8 × 10 8 cells/mL were exposed for 2 h to DMSO and 110 and 330 µg/mL of CUR, and subsequently used as nematode food. The results show that feeding C. elegans with CUR-treated bacteria significantly shortens the lifespan of the nematode, by 66% at the 330 µg/mL concentration compared to DMSO treatment (Figure 6B). The above effect correlates with the overexpression of virulence factors in S. Typhimurium due to the use of CUR. Survival curves of worms fed with DMSO-treated bacteria were not different from those of worms fed with untreated bacteria (Figure 6, LogRank p = 0.968 for S. Typhimurium, n = 90). OP50-fed worms showed the usual lifespan, ranging from 16 to 22 days.
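The survival comparison reported above (Kaplan-Meier curves with a log-rank test and censoring of missing worms) can be reproduced with standard survival-analysis tools. The sketch below is one way to do it, assuming the Python lifelines package is available; the day-of-death values are made up for two feeding groups and are not the experimental data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical days of death (or last observation) for two feeding groups.
days_dmso = np.array([10, 12, 12, 14, 15, 16, 18, 20])   # bacteria pre-treated with DMSO
days_cur  = np.array([5, 6, 6, 7, 7, 8, 9, 10])          # bacteria pre-treated with curcumin

# 1 = death observed, 0 = censored (worm lost, burrowed, or dried on the plate wall).
obs_dmso = np.array([1, 1, 1, 1, 0, 1, 1, 1])
obs_cur  = np.array([1, 1, 1, 1, 1, 1, 0, 1])

kmf = KaplanMeierFitter()
kmf.fit(days_dmso, event_observed=obs_dmso, label="DMSO-treated")
print(kmf.median_survival_time_)             # median lifespan, control group

kmf.fit(days_cur, event_observed=obs_cur, label="CUR-treated")
print(kmf.median_survival_time_)             # median lifespan, curcumin group

# Log-rank test for a difference between the two survival curves.
result = logrank_test(days_dmso, days_cur,
                      event_observed_A=obs_dmso, event_observed_B=obs_cur)
print(result.p_value)                         # p < 0.05 -> significantly different survival
```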
Discussion Diarrheal diseases are one of the leading causes of death in children under 5 years and adults over 65 years. After Rotavirus, enteric bacteria are an important cause of morbidity and mortality, and S. Typhimurium is among the major isolated agents in developing countries [1,30]. Antibiotics are effective in life-threatening cases caused by bacterial pathogens; however, due to increased resistance and the potentially serious side effects of combinatorial therapies, there is a pressing need for new alternatives [31][32][33][34]. In recent years, CUR, the principal and most active curcuminoid of Curcuma longa L. (C. longa), has gained considerable attention due to its antimicrobial activity against different strains of bacteria. In 2016, Hayati Gunes et al. [19] found that CUR has high antibacterial activity against E. coli, relative to other bacteria, with a minimum inhibitory concentration (MIC) for CUR of 163 µg/mL. Others found that, in combination with antibiotics, the CUR antibacterial activity ranges from 125 to 500 µg/mL [35]. Although most studies suggest that CUR has activity against both Gram-positive and Gram-negative bacteria [11][12][13][16][17][18][19][20][21], its activity against S. Typhimurium is considerably controversial. In chickens, treatment with C. longa prevents intestinal colonization by S. Typhimurium [36], whereas in a murine model CUR increases the pathogenicity of this bacterium [23]. In addition, reports show that the efficacy of antibiotics is directly related to the inoculum size: bacteria might appear susceptible when the inoculum is of low density (10 5 CFU/mL) but resistant if the inoculum size is increased (high density ~10 9 CFU/mL, depending on the clinical strain) [37][38][39]. In this report, we explored the antibacterial efficacy of CUR, and the killing assay was performed using 10 7 CFU/mL. In the present study, even though we followed the procedure reported by Hayati Gunes et al. [19], CUR at 110, 220 and 330 µg/mL for 16-18 h at 37 °C, 250 rpm, was not active against S. Typhimurium (data not shown). It is well documented that the bacterial growth rate determines the bacterial susceptibility to antimicrobials; bacterial overgrowth provokes nutrient deprivation that induces modifications of the cell envelope [40], and generally there is no correlation with the antimicrobial concentration.
Considering this, CUR treatment was performed throughout each phase of growth. Our results provide evidence that CUR did not inhibit the growth of Salmonella, but promoted a significant overgrowth instead, after 4 h of incubation. Many bacterial species and antibiotic classes exhibit heteroresistance, meaning that a susceptible bacterial isolate harbors a resistant subpopulation that can grow in the presence of an antibiotic. In this work, CUR was in contact with Salmonella for only 2 h, after which it was removed, but, interestingly, in Salmonella, the overgrowth continued after 12 h of incubation. This suggests that CUR enhances the speed at which cells proliferate and that the modification is transmitted to new generations. It has been reported that S. Typhimurium has some genes with diverged expression domains that are involved in different metabolic pathways compared with E. coli, leading to their better survival and propagation [41][42][43]. More studies are necessary to identify how CUR regulates the growth in Salmonella. On the other hand, the standard optical method for quantifying cell density (OD 600 nm) cannot distinguish live from dead bacteria or even particles. Therefore, in order to improve the results, viability and metabolic activity was validated by the pour plate method and MTT assay [44][45][46]. Our results confirm that, after 12 h of incubation, CUR does not affect the growth of S. Typhimurium; we have metabolic active growing cells. CUR has been found to modulate the activity of several key transcription factors and, in turn, the cellular expression profiles [47]. In bacteria and parasites, CUR is reported to modulate the virulence factor expression [27,28]. The pathogenicity of bacteria is related to many and strain-specific virulence factors. In Salmonella, we analyzed three virulence genes, invA for the Salmonella genus, fliC, and siiE for Typhimurium serovar. Our result showed that all genes were found to be upregulated by CUR. For invA, a gene that mediates invasiveness, the overexpression was only 0.9-fold. The higher overexpression levels were observed with fliC and siiE (10-and 28-fold, respectively). In another Salmonella species, flagella could be dispensable for host cell adhesion, but for S. Typhimurium, the flagellum is a key virulence-associated phenotype. A functional flagellum is necessary for epithelial cell invasion and macrophage uptake; besides, it participates in proinflammatory cytokine expression. In the case of the adhesin SIIE, some studies show that the infection of host organisms by Salmonella involves the cooperative activity of the Salmonella pathogenicity island 1 (SPI1)-encoded type III secretion system (T3SS) and SIIE. Without the function of the SPI4 T1SS or SiiE, Salmonella is highly reduced in adhesion [7,48,49]. Our results suggest that CUR enhances the adhesion ability of Salmonella. Further studies are needed to elucidate the exact mechanism by which CUR upregulates the expression of the major virulence factors of S. Typhimurium [23,50,51]. The higher pathogenic potential, which bacterial strains exposed to CUR possess, was validated using the nematode C. elegans. It is known that S. Typhimurium is pathogenic to C. elegans; even though the nematode expresses numerous antimicrobial proteins, this bacterial strain proliferates and establishes a persistent infection in the intestine of the nematode [52]. In this study, the overexpression of virulence factors in Salmonella by CUR correlates with the short lifespans of C. 
elegans in a lifespan assay. The rate of mortality of C. elegans fed with untreated S. Typhimurium was similar to that found by other authors [25]; the life expectancy of the nematode was reduced by 66%, in comparison to the E. coli OP50 strain. In nematodes fed with CUR-treated S. Typhimurium, there was a direct correlation between the overexpression of virulence genes and the mortality rate; the complete mortality occurred after 10 and 7 days with 110 and 330 µg/mL, respectively, suggesting increased bacterial infection after exposure to CUR. In contrast, with other Gram-negative bacteria, it has been reported that CUR reduced the production of virulence factors, affecting the adherence and the formation of biofilm [28]. It is important to emphasize that further studies are necessary. Maintenance and Preservation of Microorganisms The bacterial strain was grown in nutritive agar (plates) (Becton Dickinson, Maryland, USA) at 37 • C for 18-20 h. The cultures were stored at 4 • C, with streak plating onto fresh agar plates every seven days. A glycerol stock of bacteria was stored at −80 • C. Extraction of Genomic DNA Genomic DNA was obtained from S. Typhimurium cultures using the DNeasy ® Blood & Tissue kit (QIAGEN, Hilden, Germany), following the manufacturer's instructions. The DNA was stored at −20 • C. Purity and concentration were determined by 1% agarose (Ultra-Pure-Agarose, Invitrogen, Carlsbad, CA, USA) gel electrophoresis and by spectrophotometry, respectively. Electrophoretic gels were stained with GelRed (Nucleic Acid Gel, Biotium, Landing Pkwy, CA, USA) and visualized on a trans-illuminator (UVP Benchtop 2UV, Fisher Scientific, Waltham, MA, USA). Presence of Virulence Genes The pathogenicity of Salmonella spp. has been related to numerous virulence genes. The invasion protein InvA is one of the most studied virulence factors. Flic-encoded flagellin protein, and the giant, non-fimbrial adhesin protein SIIE have been also implicated in successful host infection [10,54,55]. The expression of invA (GenBank Accession M90846.1), fliC (GenBank Accession KF589316.1) and siiE (GenBank Accession AJ576316.1) was validated in the bacterial strain. A specific region of each gene was amplified from genomic DNA (DNeasy ® Blood & Tissue, IAGEN) by PCR using the following primers: siiE sense Spectrophotometric Method The antibacterial activity of CUR was determined by a growing strain in Luria Bertani broth (LB) (Sigma-Aldrich, Missouri, USA), at 37 • C, 250 rpm. Cultures were allowed to grow until they reached OD600 0.08 (10 7 colony forming units CFU/mL). Cells were pelleted by centrifugation at 1844× g for 5 min (Sigma 1-14K 12092 rotor), resuspended in 3 mL of PBS containing 110, 220 and 330 µg/mL of CUR and incubated for 2 h at 37 • C, 250 rpm. Untreated and 1.2% DMSO-treated cultures were used as negative controls. After the incubation period, cells were harvested by centrifugation, washed with PBS twice to remove the CUR, and grown in LB medium to achieve the exponential phase [18]. Bacterial growth (OD600) in LB medium was measured on a microplate reader (BioTek Synergy HT, Winooski, VT, USA). All experiments were performed in triplicate. Pour Plate Method S. Typhimurium strain was exposed to DMSO, 110, 220 or 330 µg/mL of CUR for 2 h following the procedure described above. After CUR treatment, cells were harvested by centrifugation as mentioned above. The pellets were resuspended in 3 mL of PBS, and serial dilutions were performed. 
For the pour plate method, 100 µL of each dilution was added by pipette to the center of sterile disposable Petri dishes. Then, cooled but still molten agar medium was poured into each Petri dish. The plates were incubated overnight at 37 °C. The dilutions chosen produced between 30 and 300 separate countable colonies. The growth percentage was calculated as ((B − A)/A) × 100, where A is the number of colonies in the untreated culture and B is the number of colonies in the presence of CUR. All experiments were performed in triplicate. MTT Assay According to previous reports, the MTT assay modified by Wang et al. [44] was performed to determine the viability of S. Typhimurium after exposure to DMSO or 0, 110, 220 or 330 µg/mL of CUR. Briefly, the bacterial strain was grown at 37 °C in LB broth until the OD600 reached 0.1, and then DMSO or CUR was added to each cell culture. After incubation for 2 h at 37 °C at 250 rpm, cultures were centrifuged at 1844× g for 5 min (Sigma 1-14K 12092 rotor). The resulting bacterial pellets were washed three times in PBS and resuspended in 1 mL of LB. Aliquots of the bacterial cultures (20 µL) were placed in 0.6 mL tubes, which had been preheated to 37 °C for 10 min. Then, 2 µL of MTT (5 mg/mL, Sigma-Aldrich, M5655 St Louis, MO, USA) was added to each tube. After incubation for 20 min at 37 °C, the tubes were centrifuged at 10,000× g for 1 min (Sigma 1-14K 12092 rotor) in order to precipitate the bacteria and formazan crystals; 20 µL of the medium was removed, and the crystals were dissolved with 250 µL of DMSO. Finally, the coloration was read at 550 nm after 15 min in a microplate reader (BioTek Synergy HT, Winooski, VT, USA). All experiments were performed in triplicate. Statistical Analysis All data were presented as mean values with standard deviations and analyzed using two-way ANOVA, followed by Dunnett's multiple comparisons test (GraphPad Prism version 6.01 for Windows, GraphPad Software, La Jolla, CA, USA). p-values of ≤0.05 were considered significantly different. Relative-Quantitative RT-PCR The effect of CUR on the expression of invA, fliC and siiE, genes associated with the virulence of S. Typhimurium, was evaluated by semi-quantitative qRT-PCR using the primers described above. First, the bacterial strain was exposed to DMSO, 110, 220 or 330 µg/mL of CUR for 2 h following the procedure described above. After CUR removal, the cultures were grown overnight in LB medium. Total RNA was obtained from DMSO- or CUR-treated bacterial cultures using a Total RNA Purification kit (NORGEN), following the manufacturer's instructions. cDNAs were synthesized by a reverse transcriptase reaction (Verso cDNA Synthesis Kit, Thermo Scientific) using 1 µg of RNA and an Oligo dT20 primer (Integrated DNA). Relative-quantitative RT-PCR was performed in a StepOne™ Real-Time PCR System (Applied Biosystems™, Foster City, CA, USA) using Maxima SYBR Green qPCR Master Mix (Thermo Scientific) to evaluate the amplification reaction. Gene expression was normalized to the expression level of the glyceraldehyde 3-phosphate dehydrogenase gene (GenBank accession no. DQ644683.1) using the following primers: gapdh sense 5′-GGT TTT GGC CGT ATC GGT CGC A-3′ and gapdh antisense 5′-ACC GGT AGC TTC AGC CAC TAC G-3′. Melting curves confirmed the absence of primer dimerization. The amplification conditions were as follows: hot start at 95 °C for 10 min, then 40 cycles of 95 °C for 15 s, 60 °C for 30 s and 72 °C for 30 s. The comparative ΔΔCt method was used to calculate changes in expression [57].
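The relative expression values reported in the Results (for example, the 18- and 28-fold induction of siiE) follow from the comparative ΔΔCt calculation cited above. A minimal sketch of that arithmetic is shown below; the Ct values are invented for illustration, and gapdh is used as the reference gene as in the protocol.

```python
import numpy as np

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the comparative ddCt method: fold = 2**(-ddCt)."""
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)   # dCt, CUR-treated
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)   # dCt, DMSO control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values for siiE and the gapdh reference gene.
siie_cur,  gapdh_cur  = [20.1, 20.3, 20.2], [16.0, 16.1, 15.9]
siie_dmso, gapdh_dmso = [24.9, 25.0, 25.1], [16.0, 15.9, 16.1]

print(round(fold_change(siie_cur, gapdh_cur, siie_dmso, gapdh_dmso), 1))
# ~28-fold induction with these made-up Ct values
```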
Significant differences (defined as p < 0.05, indicated by asterisks in figures) were calculated by ANOVA tests using the GraphPad Prism version 6.01 for Windows (GraphPad Software, La Jolla, CA, USA). Error bars indicate standard deviations for experiments with more than one trial. Maintenance and Preservation of C. elegans The wild-type C. elegans variety Bristol N2 strain used in this study was provided by the Caenorhabditis Genetics Center (CGC, Minneapolis, MN, USA). Adult worms were used for all experiments, age-synchronized according to standard methods. Nematodes were grown and maintained monoxenically at 20 • C, on nematode growth medium (NGM) [58]. Animals were grown on Petri plates seeded with Escherichia coli strain OP50 as a food source [59,60]. Pathogenicity Assays of Salmonella Strains on the C. elegans Model A pathogenicity assay is a lifespan assay, where C. elegans strains on NGM agar plates are fed with pathogenic bacteria (known here as killing plates) instead of the regular E. coli OP50 [29]. In this study, S. Typhimurium was grown overnight in LB at 37 • C, 250 rpm and then resuspended at an OD600 = 1. Then, cells were harvested by centrifugation, and the pellets were resuspended in 3 mL of PBS containing DMSO, 110 or 330 µg/mL of CUR and incubated for 2 h at 37 • C, 250 rpm. The killing plates were prepared by dropping 10 µL of bacterial suspension (8 × 10 8 CFU/mL) onto NGM agar plates and incubated for 16 h at room temperature [61]. After that, 30 age-synchronized L4 worms were transferred to the killing plates. Since C. elegans starts laying eggs on day 1 of adulthood, the worms were transferred to fresh killing plates from the 2nd to 14th days to prevent a mistaken offspring score. In the remaining days of the trial, the transfers were carried out within a longer time (when food ran out) because the worms were in the non-reproductive phase. Worms that were alive, dead, or missing were determined and counted every other day along the time course of dying for the population, using a touch movement assay for death [62]. This assay consists of visually inspecting the worm for movement; if there is movement, then it scored as alive, and if there is no movement, even when its body is gently touched with a worm picker, it is scored as dead. Missing worms, those lost or burrowing into the medium or climbing the plate walls and drying up, were censored from the analysis. NGM agar plates seeded with untreated S. Typhimurium and E. coli OP50 were used as positive and negative controls. Every experiment was repeated three times. Data were analyzed using the Kaplan-Meier survival test and weighted log-rank tests [63]. Actual P-values are included in the figures; asterisks indicate significant differences. Differences were considered significant at p < 0.05. Conclusions This research provides new evidence on the antibacterial activity of CUR against one of the major enteric bacterial pathogens, Salmonella. We demonstrated that CUR increases the cell proliferation of S. Typhimurium, and we also observed a deregulation of three genes involved in the pathogenicity of S. Typhimurium leading to an increase in virulence in agreement with the results of in vivo assays. These results urge us to reconsider the indiscriminate use of CUR, especially in outbreaks of pathogenic Gram-negative bacteria.
Evaluation of Different Heat Flux Products Over the Tropical Indian Ocean Net heat flux (Qnet) and its components from four reanalysis (NCEP‐2, CFSR, ERA5, and MERRA) and two blended products (OAFlux & TropFlux) are compared with in situ observation (two Research Moored Array for African‐Asian‐Australian Monsoon Analysis and Prediction buoys and one Woods Hole Oceanographic Institution buoy) over the north Indian Ocean to quantify their uncertainties in daily, seasonal, and annual scales. These comparisons provide the present status of Qnet error in most state of the art reanalysis/blended products. The root‐mean‐square error (RMSE) remains similar to the RMSE a decade earlier, despite more observation and improved models and reanalysis methods. However, there is a clear separation of flux quality from the older generation of reanalysis (NCEP‐2) to the newer production of reanalysis (MERRA, CFSR, and ERA5). While individually ERA5 provides the best estimate, the ensemble mean (i.e., average of ERA5, CFSR, MERRA, TropFlux, and OAFlux) is very close to ERA5 both in terms of correlations and RMSE and provides the most reliable estimate by virtue of removal of some of the uncertainties in estimation of flux by each of the flux products. A significant reduction of RMSE in Qnet estimates from 100 W/m2 (in NCEP‐2) to 45 W/m2 (in the ensemble mean) is considerable progress. It is noteworthy that all the recent flux products estimate the increasing trend of Qnet in the north Bay of Bengal and subseasonal fluctuations with significant fidelity. Also, in the south of equator location vigorous subseasonal fluctuations in boreal winter are well captured. We believe that this is significant progress in the estimation of Qnet over the Indian Ocean. Introduction Accuracy in estimation of the air-sea fluxes over the ocean surface is essential as they represent a medium of communication between the ocean and atmosphere, leading to air-sea interaction processes on diurnal to interdecadal variability. Surface net heat flux (Qnet) plays a crucial role in controlling feedback between the ocean and atmosphere . Accurate turbulent flux estimates are essential to assess the energy budget (Dong & Kelly, 2004) as it modulates the mixed layer temperature (Niiler & Kraus, 1977), sea surface temperature (Yu et al., 2006, heat content (Garternicht & Schott, 1997;Godfrey, 1995), and ocean circulation (Schott et al., 2009). Typically, changes in the Ocean Heat Content of the upper ocean layers can be quantified through the estimation of imbalance of surface flux components. Thus, accurate surface net heat flux through the heat budget in the upper ocean can give insight into the roles of the atmosphere and ocean in the evolution of sea surface temperature (SST), heat content (Dong & Kelly, 2004), and ocean-atmosphere feedback with changing climate. Therefore, the correct representation of surface fluxes is essential for improving climate models and their forecast skills. Qnet is a combination of radiative fluxes (shortwave and longwave radiation) and turbulent heat fluxes (latent and sensible flux). So any uncertainties in the estimation of net heat flux are contributed by the uncertainties in each of the components. 
In situ flux measurements acquired from buoys and ships have long been used as a reference base to quantify the accuracy of surface heat flux products from reanalysis and satellite-based products (e.g., Brunke et al., 2011;Cronin et al., 2006;Josey, 2001;Pinker et al., 2009;Smith et al., 2001;Yu et al., 2004a;Yu et al., 2006). Previous studies have reported significant uncertainty in the estimation of Qnet products over the global oceans (e.g., Large & Yeager, 2009;Schott et al., 2009;Yu et al., 2006). These uncertainties are mainly due to the errors in the input parameters (e.g., Large & Yeager, 2009;Rahaman & Ravichandran, 2013) and the bulk algorithms used for estimation of flux component (Brunke et al., 2003;Kalnay et al., 1996;Uppala et al., 2005). Several studies addressed the uncertainties in the estimation of Qnet in various flux products. For example, a recent paper by Valdivieso et al. (2017) reviewed the surface heat fluxes from an ocean reanalysis perspective over the globe using 16 monthly flux products. Bentamy et al. (2017) accessed various turbulent heat flux derived from satellite-based and atmospheric reanalysis products in spatial and temporal scales and found that all of them exhibit similar space and time patterns with significant differences in magnitude in some specific regions. More recently, Yu (2019) addresses the dominant source of uncertainties of surface flux products and the reliability of using these products for the budget closures. However, this kind of study of systematic evaluation is lacking over the Indian Ocean. It has been reported in many studies that Indian Ocean SST is crucial for ISMR prediction and is mostly governed by Qnet on seasonal and intraseasonal time scales (e.g., Parampil et al., 2016;Sengupta & Ravichandran, 2001). Hence, in this study, we focus on quantifying the uncertainty (error) in estimating the Qnet over the Indian Ocean (IO) by the latest hybrid flux products and latest reanalysis flux products to ascertain the progress made on the unacceptably large errors known to exist in earlier products . This is important as oceanatmosphere interactions are critical in maintaining the annual cycle of SST and precipitation (Webster et al., 1998), interannual variability (Saji et al., 1999), and the monsoon intraseasonal oscillations (Sengupta & Ravichandran, 2001). Further, uncertainties of the order of 10 Wm −2 in Qnet estimate may result in uncertainty of 0.5°C in SST over three months, implying a strong need for reduction of the uncertainties in flux to less than 10 Wm −2 (Roberts, 2011). Besides, seasonally reversing differences in Qnet over the northern and southern tropical IO drive a shallow meridional circulation (Schott & Mccreary, 2001) in the tropical IO. While models do simulate the shallow meridional circulation and its variability, it is poorly constrained due to the absence of adequate current observations. For confidence in the simulated shallow meridional circulation by the models, their ability to reproduce Qnet correctly is critical. Hence, a comparison of model-simulated Qnet with "observation" and reliable estimate of Qnet in flux products over the IO region is essential. Over the Indian Ocean basin, since the study by who showed that the uncertainties in net heat flux could be as significant as 60-100 W/m 2 over the region, no comprehensive evaluation studies have been carried out over the Indian Ocean afterward. 
In the backdrop of availability of a new hybrid flux product, namely, the TropFlux and latest reanalysis products like CSFR and ERA5 together with some new in situ observations from Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction (RAMA) moorings and the Woods Hole Oceanographic Institution (WHOI) buoy (see section 2), we ask: is the uncertainty in Qnet estimation reduced in new individual flux products? What is the remaining interproduct uncertainty? This study is motivated to answer these questions. In this study, we have used six widely used flux products in their common time frame (2000)(2001)(2002)(2003)(2004)(2005)(2006)(2007)(2008)(2009). Four flux products are based on reanalysis, and the rest two are blended/hybrid products. The reanalysis consists of NCEP-2 (Kanamitsu et al., 2002), Modern-Era Retrospective analysis for Research Applications (MERRA; Rienecker et al., 2011), NCEP CFSR (Saha et al., 2010), and ECMWF ERA5 (Olauson, 2018) together with two hybrid products are OAFlux and TropFlux (Kumar et al., 2012). The MERRA, CFSR, and ERA5 are more recent reanalysis data sets employing improved model and analysis systems, and NCEP-2 is a relatively older reanalysis. Inclusion of NCEP-2 allows us to quantify the level of improvements achieved in the recent reanalysis. Both the hybrid flux products (OAFlux and TropFlux) use International Satellite Cloud Climatology Project (Zhang, 2004) for radiation component (shortwave and longwave radiations) and surface meteorological fields from a combination of buoy, ship, reanalysis, and satellite data for estimating turbulent flux component (latent and sensible). Although previous studies have shown the superiority of hybrid flux products as compared to the reanalysis flux products over the globe (Yu et al., 2006Farmer, 2012;Liu et al., 2015) for the estimation of net heat flux, still the uncertainties are quite significant. In recent times, when all the flux products have considerably improved due to ((1)) enhanced sampling from moored buoys and another in situ platform, (2) significant increase in the coverage of satellite data from multisatellite observations, (3) improved retrieval algorithms for satellite humidity and air temperature with much-reduced uncertainties, and (4) better assimilation methods combined with availability of large number of quality data for the reanalysis model. Therefore, it is necessary to reassess the uncertainties in the latest flux data and assess their consistencies using the most recent in situ data. The paper is organized as follows. Section 2 describes the data sets used in the study. The annual and seasonal mean features of the different flux products are discussed in section 3. The contribution of heat flux and ocean processes toward SST variability is elaborated in section 4, and a summary of the study is given in section 5. Data and Methods The present study utilizes six flux data from three different sources. Two are hybrid flux products (OAFlux and TropFlux), four are from reanalysis (NCEP-2, MERRA, CFSR, and ERA5), and remaining two flux data are from in situ moorings (RAMA and WHOI buoys) which are described in detail below. Blended Flux Products The OAFlux products ( Yu et al., 2004a( Yu et al., , 2004b use the COARE bulk algorithm 3.0 to give the daily estimates of surface latent heat and sensible heat fluxes over the global ice-free ocean. 
The input parameters required for the bulk algorithm are taken from the International Satellite Cloud Climatology Project (ISCCP) for the downward radiative fluxes (shortwave and infrared), while the surface meteorological state variables (10-m winds, 2-m air and sea temperature, 2-m air relative humidity) are objectively synthesized using buoy, ship, satellite, and reanalysis data. The ISCCP flux data (ISCCP-FD) are calculated at the surface using a complete radiative transfer model from the GISS GCM with improved observations of the physical properties of the surface, atmosphere, and clouds based on the ISCCP data sets. The ISCCP-FD data are available at 3-hr intervals over the globe, covering the period July 1983 through December 2009. The data were daily averaged and linearly interpolated onto a 1° grid to combine with the OAFlux latent and sensible heat flux components. The resulting net heat flux product covers the period from July 1983 to December 2009. We have used the monthly product of net shortwave radiation flux (NSWR), net longwave radiation flux (NLWR), latent heat net flux (LHF), sensible heat net flux (SHF), and Qnet for our study. The TropFlux product (Kumar et al., 2012) has been developed under a collaboration between the Laboratoire d'Océanographie: Expérimentation et Approches Numériques of the Institut Pierre Simon Laplace, France, and the National Institute of Oceanography/CSIR, India, and is distributed through the Indian National Center for Ocean Information Services, India. The TropFlux heat fluxes are also estimated using the COARE 3.0 algorithm. TropFlux uses bias- and amplitude-corrected ERA-I (10-m winds, 2-m air and sea temperature, 2-m air relative humidity, and downward radiative fluxes) and ISCCP (shortwave radiation) fluxes. All bias corrections are derived based on comparisons with the Global Tropical Moored Buoy Array data (Kumar et al., 2012). At present, TropFlux provides data at 1° × 1° spatial resolution for the entire 30°N-30°S region. Daily and monthly data of the NSWR, NLWR, LHF, SHF, and Qnet variables at the surface are used during the period 2000 to 2009. Reanalysis Flux Products NCEP-2 (NCEP-NCAR reanalysis; Kanamitsu et al., 2002) is an update to NCEP-1 (Kalnay et al., 1996) that rectified errors and updated parameterizations of physical processes. It is generated from a frozen forecast-analysis platform that consists of a T62 model (equivalent to a horizontal resolution of about 210 km) with 28 vertical levels and 6-hr intervals. It uses a state-of-the-art analysis/forecast system to perform data assimilation using past data from 1979 through the previous year, and is currently available from January 1979 to August 2019. Components of the radiative heat flux on the monthly scale, namely DSWR (downward shortwave radiation flux), USWR (upward shortwave radiation flux), DLWR (downward longwave radiation flux), and ULWR (upward longwave radiation flux), and the turbulent heat fluxes (LHF and SHF) are directly obtained from the APDRC site (http://apdrc.soest.hawaii.edu/datadoc/ncep_mon.php). Qnet from NCEP-2 is calculated as

Qnet = NSWR − NLWR − LHF − SHF, (1)

where NSWR = DSWR − USWR and NLWR = ULWR − DLWR. The MERRA data set (Rienecker et al., 2011) was released in 2009. It is based on a version of the Goddard Earth Observing System Data Assimilation System version 5 atmospheric data assimilation system that was frozen in 2008. MERRA data span the period 1979 through February 2016 and were produced on a 0.5° × 0.66° grid with 72 layers.
MERRA was used to drive stand-alone reanalysis of the land surface (MERRA-Land) and atmospheric aerosols (MERRAero). The shortwave radiation scheme of Chou and Suarez (1999) and the long-wave radiation scheme of Chou et al. (2001) are utilized in MERRA products. Roberts et al. (2011) and Brunke et al. (2011) analyze surface turbulent fluxes over the ocean from MERRA and other data products. Daily and monthly data of NSWR, NLWR, LHF, and SHF are downloaded (http://apdrc.soest.hawaii.edu/datadoc/merra.php), and the Qnet is calculated using equation (1). The NCEP CFSR (Saha et al., 2010) is the coupled assimilation reanalysis which uses the NCEP coupled forecast system (CFSv2) model. The atmospheric model has a spectral resolution of T382 (38 km) and 64 hybrid vertical levels. The ocean model consists of the Geophysical Fluid Dynamics Laboratory modular ocean model (version 4p0d; Griffies et al., 2004), which is a finite-difference model at a resolution of 0.5°C with 40 levels in the vertical. The atmosphere and ocean models are coupled with no flux adjustment. CFSR uses simplified Arakawa-Schubert convection with momentum mixing. CFSR implements orographic gravity wave drag based on the approach of Kim and Arakawa (1995) and subgrid-scale mountain blocking by Lott and Miller (1997). It uses rapid radiative transfer model shortwave radiation with maximum random cloud overlap (Clough et al., 2005;Iacono et al., 2000). In CFSR implementation, both shortwave and longwave radiations are invoked at 1-hr interval. It is also coupled to a four-layer Noah land surface model (Ek et al., 2003) and a two-layer sea ice model (Wu et al., 2005). The NCEP CFSR uses the gridded statistical interpolation data assimilation system for the atmosphere. Flow dependence for the background error variances is included as well as first-order time interpolation to the observation (Rancic et al., 2008). Variational quality control of observations (Anderson & Järvinen, 1999) is also included. An ocean analysis for SST is also performed using optimum interpolation. A full range of observations is used as in the other reanalyses, which are quality-controlled and bias-corrected, including satellite radiances. Observations on ocean temperature and salinity are also used. Further description of the CFSR is available in the study by Saha et al. (2010). Data from recently released fifth generation of ECMWF reanalysis, ERA5 (Hersbach & Dee, 2016), are also used in the study. ERA5 is a climate reanalysis data set, covering the period 1950 to present. Data processing for ERA5 is carried out by ECMWF, using ECMWF's Earth System model IFS, cycle 41r2. A significant improvement is observed in the performance of ERA5 over ERA-Interim. ERA5 is highly accurate, representing the magnitude and variability of near-surface air temperature and wind regimes. The higher spatial and temporal resolution provided by ERA5 reduces significantly the cold coastal biases identified in ERA-Interim and increases the accuracy representing the wind direction and wind speed in the escarpment (Tetzner et al., 2019). In Situ Mooring Buoys The RAMA was designed to study the Indian Ocean's role in the monsoons (McPhaden et al., 2009 Daily and monthly products of NSWR, NLWR, LHF, SHF, and Qnet are downloaded from their website http://uop.whoi.edu/projects/Bengal/QCData.html. We have included this buoy data to check the consistency of quality of the flux products and to make the statistical comparison more robust. 
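For the reanalysis products that only provide the individual radiative and turbulent components, equation (1) reduces to simple array arithmetic once the sign conventions are fixed. The sketch below shows that bookkeeping with NumPy; the variable names mirror the abbreviations used in the text, and all flux values are placeholders rather than actual NCEP-2 data.

```python
import numpy as np

# Placeholder monthly-mean fields (W/m^2) on an arbitrary lat x lon grid.
dswr = np.full((3, 4), 230.0)   # downward shortwave at the surface
uswr = np.full((3, 4), 15.0)    # upward (reflected) shortwave
dlwr = np.full((3, 4), 410.0)   # downward longwave
ulwr = np.full((3, 4), 465.0)   # upward longwave
lhf  = np.full((3, 4), 120.0)   # latent heat flux (positive = ocean heat loss)
shf  = np.full((3, 4), 10.0)    # sensible heat flux (positive = ocean heat loss)

nswr = dswr - uswr              # net shortwave absorbed by the ocean
nlwr = ulwr - dlwr              # net longwave emitted by the ocean (positive = loss)
qnet = nswr - nlwr - lhf - shf  # equation (1): positive = net heat gain by the ocean

print(qnet[0, 0])               # 215 - 55 - 120 - 10 = 30 W/m^2 net gain
```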
Taylor Diagram The Taylor plot provides the statistics of correlation, root-mean-square error (RMSE), and the ratio of variances together (Taylor, 2001). The distance from the origin is the standard deviation of the field, normalized by the standard deviation of the observational climatology. The distance from the reference point to the plotted point gives the RMSE. The correlation between the model and the climatology is the cosine of the polar angle. Thus, the data set that has the largest correlation coefficient, the smallest RMSE, and a comparable variance will lie nearest to the observation and is considered the best among all others. Validation With In Situ Buoy Data The validation of average daily Qnet and all its components against independent buoys at three different locations using Taylor plots brings out the comparative assessment of the flux products with respect to in situ values (Figures 1a-1c). Two buoys are located in the Bay of Bengal basin and one over the equatorial Indian Ocean (Figure 4d). The Taylor diagram for the first buoy location is shown in Figure 1a. The radiative components (SWR = solid square; LWR = solid delta) of ERA5 have a higher correlation (SWR = 0.80; LWR = 0.96) as compared to the other flux products, except for CFSR-LWR, which performs equally well with a CC value of 0.96. The variance of the SWR component is overestimated (underestimated) in NCEP-2 (CFSR) by almost a quarter (half) of the total observed variance and is marginally underestimated, by less than a tenth of the observed variance, in the case of MERRA, ERA5, and TropFlux. For LWR, all products overestimate the observed variance, and among them TropFlux has a marginally higher overestimation as compared to ERA5, CFSR, and MERRA. In the case of the turbulent flux (LHF = solid star) too, ERA5 performs the best in terms of both correlation and variance. Thus, considering both correlation and variance, ERA5 stands out as the best match with the RAMA buoy not only in terms of Qnet but also in all its components. At the second buoy location (Figure 1b), MERRA underestimates it by about 25%, ERA5 and TropFlux by 18-20%, and NCEP-2 overestimates it by 30%. For the two major components (SWR and LHF), ERA5 has the largest correlations but underestimates the variance of SWR by almost 35%, whereas in the case of LHF the variance is very close to that of the buoy. However, for a robust statistical comparison of the flux products we have included all the collocated available data from the RAMA buoys during 2008 to 2015, irrespective of long gaps in data continuity. For buoys B1 (15°N, 90°E) and B2 (8°S, 67°E) the number of valid points is 1,879 and 915, respectively. For B3, we have considered one year of data, that is, 365 points. Tables 1-3 show the comparison statistics generated from five flux products (ERA5, CFSR, NCEP-2, MERRA, and TropFlux) together with the ensemble mean (EM) of four data sets (ERA5, CFSR, TropFlux, and MERRA) with respect to the three buoy locations. The values are very similar to the results of the Taylor plots, with slight differences. Here too, ERA5 (NCEP-2) shows the least (highest) RMSE and the highest (least) CC in Qnet among all products at all the buoy locations. All the new products (MERRA, CFSR, ERA5, and TropFlux) have improved significantly as compared to the older reanalysis product (NCEP-2), which is very clear in the scatterplots of correlation and RMSE at these buoy locations, as shown in Figures 2a-2c.
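The three quantities summarized by a Taylor diagram (correlation, normalized standard deviation, and centered RMS difference) are straightforward to compute from a pair of collocated time series. The sketch below illustrates the calculation for one product against a buoy record; the arrays are synthetic stand-ins, not the RAMA or WHOI data.

```python
import numpy as np

def taylor_stats(product, reference):
    """Correlation, normalized standard deviation, and centered RMS difference
    of a flux product against a reference (e.g., buoy) time series."""
    p, r = np.asarray(product, float), np.asarray(reference, float)
    pa, ra = p - p.mean(), r - r.mean()              # anomalies about each mean
    corr = np.sum(pa * ra) / np.sqrt(np.sum(pa**2) * np.sum(ra**2))
    norm_std = p.std() / r.std()                     # radial coordinate on the diagram
    crmse = np.sqrt(np.mean((pa - ra) ** 2))         # centered RMS difference
    return corr, norm_std, crmse

# Synthetic daily Qnet series: a "buoy" record and a noisy, slightly biased "product".
rng = np.random.default_rng(0)
t = np.arange(365)
buoy = 60 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 20, t.size)
prod = 0.9 * buoy + 10 + rng.normal(0, 25, t.size)

corr, nstd, crmse = taylor_stats(prod, buoy)
print(f"corr={corr:.2f}  std_ratio={nstd:.2f}  centered RMSE={crmse:.1f} W/m^2")
```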
The scatter of Qnet data points at buoy B1 (15°N, 90°E) corresponding to NCEP-2, MERRA, CFSR, TropFlux, ERA5, and the ensemble is arranged diagonally from the lower right toward the upper left corner of the plot, indicating the improvement in product quality in terms of increased correlation and decreased RMSE (Figure 2a). Qnet at the other two buoy locations also shows a similar diagonal spread, except that ERA5 overtakes the ensemble in terms of correlation; in terms of RMSE the ensemble still fares well, and different products sometimes switch positions at different buoy locations (Figures 2b and 2c). The SWR and LHF data points are arranged in a similar way at all buoy locations, whereas LWR and SHF show improvements in terms of correlation but not in terms of RMSE. Further, Tables 1-3 show that the large underestimation bias of NCEP-2, often exceeding 100 W/m 2 during part of the year, is reduced significantly, with all four newer products clustered around the buoy values. Interestingly, the ensemble mean improves further and outperforms ERA5 in most cases in terms of RMSE. It demonstrates that while individually ERA5 provides the best estimate of all the flux components, the ensemble mean, being close to ERA5 in both correlation and RMSE for all flux components, provides the most reliable estimate by virtue of removing some of the uncertainty in the flux estimates of each individual product, as is also evident from the lowest RMSE among the products in almost all the variables. Notwithstanding the significant error remaining in the estimation of Qnet by the flux products at the buoy locations, the reduction of the RMSE of Qnet estimates from ~100 W/m 2 in NCEP-2 to ~45 W/m 2 in the ensemble mean is considerable progress over the last few decades. These improvements can also be seen in the statistical values shown in Tables 1-3. To further substantiate the Taylor plots (Figures 1a-1c) and Tables 1-3, we have also plotted the daily time series of Qnet from all the products and compared them with the RAMA and WHOI buoys at the Bay of Bengal and southwestern Indian Ocean locations (Figures 3a-3c). A 15-day running mean is used to smooth the daily data by removing high-frequency oscillations, which is also ideal for identifying persistent biases in the Qnet values of the various products. The ensemble mean of all products except NCEP-2 is also plotted for a consolidated view. We have not included NCEP-2 in the ensemble mean since it is an outlier relative to the observations and the other individual products; its inclusion degrades the ensemble mean. All the individual products are able to capture the observed daily variation with good accuracy. Among them, ERA5 shows the closest match. Two essential characteristics of the Qnet in the north Bay of Bengal (BoB; Figures 3a and 3c) are the steady increase from about −50 W/m 2 in June to about +100 W/m 2, followed by large-amplitude (trough to peak often spanning 150 W/m 2 ) subseasonal oscillations of roughly one-month period during the summer monsoon season. At the south equatorial buoy location, there is no such increasing trend from winter to the premonsoon months, and the subseasonal oscillations are vigorous in winter (December-April) but weak during the summer monsoon season.
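The ensemble mean and the 15-day running mean used for Figure 3 are both simple operations on the daily series. The sketch below illustrates them with pandas; the product series are random placeholders, and the column names simply echo the product names used in the text.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2008-01-01", periods=365, freq="D")
base = 60 * np.sin(2 * np.pi * np.arange(365) / 365)

# Placeholder daily Qnet series (W/m^2) for the products entering the ensemble.
qnet = pd.DataFrame(
    {name: base + rng.normal(0, 25, 365) + bias
     for name, bias in [("ERA5", 0), ("CFSR", 8), ("MERRA", 12), ("TropFlux", 6), ("OAFlux", 9)]},
    index=days,
)
qnet["Ensemble"] = qnet[["ERA5", "CFSR", "MERRA", "TropFlux", "OAFlux"]].mean(axis=1)

# 15-day running mean, as used to suppress high-frequency noise before plotting.
smoothed = qnet.rolling(window=15, center=True, min_periods=8).mean()
print(smoothed[["ERA5", "Ensemble"]].loc["2008-07-01"])

# RMSE of each raw daily product against a placeholder "buoy" record.
buoy = pd.Series(base + rng.normal(0, 20, 365), index=days)
rmse = ((qnet.sub(buoy, axis=0)) ** 2).mean() ** 0.5
print(rmse.round(1))
```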
The causes behind this vigorous subseasonal oscillation during winter are discussed in the studies by Saji et al. (2006) and Jayakumar and Gnanaseelan (2012). It is noteworthy that all the recent flux products estimate the increasing trend in the north BoB and the subseasonal fluctuations with considerable fidelity. Also, at the location south of the equator, the lack of trend during the premonsoon months and the vigorous subseasonal fluctuations in boreal winter are well captured. We believe that this is also major progress in the estimation of Qnet over the Indian Ocean. Mean Qnet Distribution As mentioned earlier, the reversing north-south dipole pattern of the Qnet is critical in driving a shallow meridional circulation in the tropical IO that plays a vital role in maintaining the annual cycle of the SST distribution. Therefore, in Figure 4, we examine the yearly mean Qnet distribution over the tropical Indian Ocean from all the products. Ideally, we would need in situ observations on 0.25° × 0.25° grid boxes to verify the accuracy of the spatial pattern from the products. In the absence of such observations, we shall use the ERA5 pattern as a reference and compare the patterns of the other products with respect to it. Annually, there is a net heat gain over most of the study domain (40°E-120°E, 30°S-30°N) in all the data sets except NCEP-2, which has very high heat loss, particularly over the southern oceans (Figure 4a). We note that ERA5 has the lowest basin-averaged annual heat gain (9.21 Wm −2 ) and is likely to be closer to the truth, with CFSR, MERRA, TropFlux, and OAFlux overestimating this gain by about 15 Wm −2 on average (the gain in these products ranging between 23 and 26 Wm −2 ) with respect to ERA5. The gain may be partly due to the limited domain considered here and partly real, responsible for the basin-wide increasing trend of SST in the region (Roxy et al., 2014). The similarity among the spatial patterns of the annual mean Qnet in the five products other than NCEP-2 is rather striking. To quantify the similarity, the correlation/RMSE between the ERA5 Qnet pattern and those of CFSR, MERRA, TropFlux, and OAFlux are calculated and indicated in Figure 4. With a high pattern correlation (0.94) and low RMSE (14.64 Wm −2 ), the spatial patterns of annual mean Qnet in CFSR and ERA5 are quite close. The other three products (MERRA, OAFlux, and TropFlux) are also close to ERA5, with correlations ranging from 0.83 to 0.89 and RMSE ranging from 15.37 to 19.46 Wm −2 . The intermonthly standard deviation is more or less identical across the products, with minimal appreciable differences, indicating better agreement in temporal variability. Figure 5 shows the basin-averaged mean annual cycle of Qnet, shortwave radiation (SWR), longwave radiation (LWR), and latent heat flux (LHF). The yearly cycle was computed from the monthly data during the common time frame (2000-2009). We note that the asymmetry between summer and winter Qnet (Figure 5a) is minimum for ERA5, consistent with its minimum annual mean heat gain of 9.21 Wm −2 . The systematic negative bias of NCEP-2 Qnet, of the order of 30-50 Wm −2 throughout the year, causes underestimation of the heat gain in winter and overestimation of the heat loss during summer. We further note that this bias in Qnet in NCEP-2 arises due to systematic underestimation of the SWR and systematic overestimation of LHF throughout the year (Figures 5b and 5d).
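Basin-averaged quantities such as the annual mean heat gain quoted above are normally computed with latitude (cos φ) weighting so that grid boxes far from the equator do not count as much as equatorial ones. A minimal sketch of such an area-weighted mean over the 40°E-120°E, 30°S-30°N domain is given below; the Qnet field is a synthetic placeholder rather than any of the products discussed.

```python
import numpy as np

# Placeholder 1-degree grid covering the study domain (30S-30N, 40E-120E).
lat = np.arange(-29.5, 30.0, 1.0)           # grid-box centers
lon = np.arange(40.5, 120.0, 1.0)
qnet = 30.0 - 0.02 * np.abs(lat)[:, None] ** 2 + 0 * lon[None, :]  # synthetic Qnet (W/m^2)

# cos(latitude) weights, broadcast over longitude and normalized to sum to one.
w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(lon)[None, :]
w /= w.sum()

basin_mean = np.sum(w * qnet)                # area-weighted basin average
print(f"basin-mean Qnet = {basin_mean:.2f} W/m^2")
```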
Improvement of the model and analysis system in CFSR has turned the systematic negative bias of NCEP-2 into a slight positive bias throughout the year as compared to ERA5 (Figure 5a). This improvement has come mainly from improved LHF estimation by CFSR (Figure 5d), while the positive bias seems to have come from about 15 Wm −2 of SWR throughout the year (Figure 5b). It is interesting to note that OAFlux has an almost identical positive bias in Qnet to CFSR (Figure 5a), but unlike CFSR, it arises from underestimation of LHF (Figure 5d) rather than from SWR as in CFSR. On the other hand, the Qnet in TropFlux is also close to CFSR (Figure 5a) but is the result of a more complex combination of underestimation of SWR during summer (Figure 5b) and a systematic negative bias of ~15 Wm −2 in LHF throughout the year (Figure 5d). Our analysis indicates that while the annual mean Qnet in the four products (ERA5, CFSR, TropFlux, and OAFlux) is similar, an uncertainty of approximately 20 Wm −2 still remains among the products and needs to be reduced further. Latitudinal Variation of Annual and Seasonal Qnet What is the latitudinal dependence of the uncertainty in the estimation of Qnet? In pursuit of an answer to this question, we plotted the latitudinal variation of the annual zonal average (longitude averaged from 40°E to 120°E) Qnet from all the products and found that NCEP-2 remains an outlier while the other five products cluster together. Therefore, in Figure 6a, we present the latitudinal profile of zonally averaged annual mean Qnet from NCEP-2 (red) along with the latitudinal profile of the ensemble mean of zonally averaged annual mean Qnet (black). The spread among the five products is shown by the shaded envelope, with the latitudinal mean spread indicating the uncertainty in the zonal mean Qnet among the products. The close agreement of ERA5 Qnet at the buoy locations, and the clustering of the other four products around it with an uncertainty of ~10 Wm −2, provide confidence that all these products are close to observations. The ensemble mean clearly shows that the tropical heat gain zone extends from 15°S to 20°N, and that the gain reduces at higher latitudes (Figure 6a). The significant heat loss zone over southern latitudes in the NCEP-2 data is visible in the zonal mean values too, where the NCEP-2 zonal average Qnet is consistently below its compatriots (CFSR, ERA5, MERRA, TropFlux, and OAFlux) from 30°S to 20°N, and the largest difference is around 20°S, where NCEP-2 values are lower than the other flux products by almost 60 Wm −2 . The zonally averaged net heat flux in the four different seasons shows the latitude-wise pattern among the flux products (Figures 6b-6e). The close clustering of the five products around ERA5 is evident in all seasons, and so is NCEP-2 being an outlier in all four seasons. The reversal of the heat loss (gain) in the north (south) IO from December-January-February (DJF) (Figure 6b) to June-July-August (JJA) (Figure 6d) is noteworthy. The zonal average of Qnet in general shows that the regions north (south) of the equator gain (lose) most of the heat during spring (March-April-May (MAM); Figure 6c) and summer (JJA; Figure 6d) and lose (gain) heat during autumn (September-October-November; Figure 6e) and winter (DJF; Figure 6b), consistent with the solar position during the respective seasons.
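The latitudinal profiles in Figure 6 are zonal means taken over 40°E-120°E, computed both for the annual mean and for the standard three-month seasons. A compact way to express this with xarray is sketched below; the file name is a placeholder, and the variable name "qnet" is an assumption about how the field might be stored.

```python
import xarray as xr

# Hypothetical file holding monthly Qnet on a lat/lon/time grid (name is illustrative only).
ds = xr.open_dataset("qnet_monthly.nc")

# Restrict to the study domain and average over longitude (40E-120E) -> zonal mean.
dom = ds["qnet"].sel(lon=slice(40, 120), lat=slice(-30, 30))
zonal_annual = dom.mean(dim=["lon", "time"])               # annual-mean latitudinal profile

# Seasonal (DJF/MAM/JJA/SON) zonal-mean profiles.
zonal_seasonal = dom.groupby("time.season").mean("time").mean("lon")

print(zonal_annual.sel(lat=0, method="nearest").values)     # e.g., Qnet at the equator
print(zonal_seasonal.sel(season="JJA").sel(lat=20, method="nearest").values)
```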
The uncertainty among the products is the least (10.80 Wm−2) during MAM, while during the other three seasons (DJF, JJA, and September-October-November) it is around ~15 Wm−2. A similar level of uncertainty among various flux products was also shown almost a decade earlier, indicating that the improvements in the flux products have not reduced the uncertainty to a great extent, and there is still an urgent need for further development.
Annual Mean Qnet Component Distribution
It is insightful to examine the role of each component of Qnet to pinpoint the region-wise relative importance of each of them in the surface heat budget (Scott & Alexander, 1999; Tomita & Kubota, 2004). Figures 7a-7x show the annual mean distribution of net shortwave radiation (SWR), net longwave radiation (LWR), latent heat flux (LHF), and sensible heat flux (SHF) from all six flux products over the north Indian Ocean. Positive values of SWR represent heat gain by the ocean surface; however, in the case of LWR, LHF, and SHF, positive (negative) values represent heat loss (gain) by the ocean surface. The cloud-free zone over the northwestern Indian Ocean has the highest amounts of SWR (close to 220-250 Wm−2) in all the flux products (Figures 7a, 7e, 7i, 7m, 7q, and 7u); this zone is limited to the northwestern area of the Arabian Sea in NCEP-2 (Figure 7a) but extends to the western coast of Africa and the equatorial Indian Ocean in all the other products. The Bay of Bengal and the eastern equatorial Indian Ocean region have the lowest SWR values due to persistent clouds throughout the year. In general, the SWR distribution is similar in all products except the older generation reanalysis (i.e., NCEP-2), where meagre SWR values are evident over the equatorial Indian Ocean regions; this has improved significantly in its newer generation of reanalysis (CFSR). This may be one of the probable reasons for the highly negative net heat flux in NCEP-2 as compared to the other flux products. Net longwave flux at the sea surface is the difference between the longwave radiation emitted upward from the sea surface and the longwave radiation emitted downward from the atmosphere. It is contributed by three components: the sea surface, clouds, and the atmosphere. Maximum heat loss at the ocean surface due to LWR (close to 70-90 Wm−2) is concentrated over the northern tip of the Arabian Sea in all data sets (Figures 7b, 7f, 7j, 7n, 7r, and 7v); its spatial extent is minimum in the NCEP-2 data (Figure 7b) and maximum in the OAFlux data (Figure 7j). This region is a cloud-free zone and hence shows a large LWR loss in all products. The MERRA data (Figure 7f) also show a high loss of LWR over the southern Indian Ocean, which is not very prominent in the other data sets. Furthermore, this loss exists throughout the year, as evident in the monthly climatology of LWR, which is almost invariant (see Figure 5c). The equatorial Indian Ocean basin (15°S to 10°N) has the least loss of LWR in all the data sets, and this region of low loss extends as far south as 30°S in the NCEP-2 data. Overall, NCEP-2, followed by TropFlux, has the minimum loss of LWR over most of the basin, which is due to the lower gain of SWR in the same data. TropFlux (Figure 7n), CFSR (Figure 7r), and ERA5 (Figure 7v) have almost identical LWR distributions and may be considered close to observations.
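The sign convention stated at the start of this section implies Qnet = SWR − LWR − LHF − SHF when LWR, LHF, and SHF are reported as losses; the trivial sketch below, with rough placeholder magnitudes rather than values taken from any of the data sets, makes the bookkeeping explicit.

```python
# Illustrative sketch of the sign convention described above: SWR is counted
# as a gain by the ocean, while positive LWR, LHF, and SHF are counted as
# losses, so Qnet = SWR - LWR - LHF - SHF (all in W m-2).
def net_heat_flux(swr, lwr, lhf, shf):
    """Positive Qnet means the ocean gains heat; negative means it loses heat."""
    return swr - lwr - lhf - shf

# Rough example magnitudes (placeholders): a cloud-free, low-wind point versus
# a strongly evaporative trade-wind point south of the equator.
print(net_heat_flux(swr=230.0, lwr=80.0, lhf=110.0, shf=5.0))   #  +35 -> gain
print(net_heat_flux(swr=180.0, lwr=50.0, lhf=190.0, shf=10.0))  #  -70 -> loss
```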
Latent heat flux distribution clearly shows that the highest evaporative zone is situated at the south of the equator (between 10°S and 20°S) in all the data sets (Figures 7c, 7g, 7k, 7o, 7s, and 7w), which are also trade wind regions with very high wind speed. This also confirms the previous studies (e.g., Pokhrel et al., 2012). The very high overestimation of LHF in NCEP-2 (180-200 Wm −2 ) is considerably reduced in the present-day reanalysis (MERRA, CFSR, and ERA5) and the blended products (OAFlux and TropFlux). The low (high) values of SWR (LHF) over the southern equatorial region in NCEP-2 product lead to highly negative net heat flux, as shown in Figure 4a. The sensible heat flux depends on the difference in temperature between the sea surface and the surrounding air. All the data products show almost similar distribution of SHF over the Indian Ocean basin (Figures 7d, 7h, 7l, 7p, 7t, and 7x) with ocean gaining (losing) sensible heat from northern (southern) Indian Ocean basin. Spatial Pattern of Seasonal Mean Qnet and Their Components To see whether the spatial distribution pattern of Qnet and its component is persistent throughout the year, we examine the spatial distribution of seasonal mean Qnet and its component over our study domain. For brevity, only the winter season (DJF) and summer season (JJA) of individual flux products are shown. Figure 8 shows the winter season distribution of Qnet and its major components from all the flux products. In general, the DJF season is characterized by the dipole kind of net heat flux distribution, where region south (north) of 10°N of the Indian Ocean basin gains (losses) heat flux (Figures 8a, 8e, 8i, 8m, 8q, and 8u). The heat gain region is much more prominent as compared to the heat loss region of the Arabian Sea and Bay of Bengal combined. The gain in net heat flux is least (highest) in NCEP-2 (CFSR), and rest have almost similar magnitude. The major components (SWR, LWR, and LHF) confirm the share of each, resulting in Qnet distribution. Spatial distribution pattern of Qnet follows the spatial pattern of SWR, which is the most dominant component among all. The SWR is more (less) over the southern (northern) Indian Ocean, this is due to the distribution clouds during the winter season and most likely is associated with the seasonal migration of Sun, which in general has very less spatial extent over the southern Indian Ocean during winter season (Figures 8b,8f,8j,8n,8r,and 8v). This is also shown by Bony et al. (2000) using INSAT data, where the percentage of clear sky pixels is higher than 60% over the southern Indian Ocean. In case of LWR, the heat loss on the order of 40-60 Wm −2 is predominant over most of the Indian Ocean, except the northern stretches of both Arabian Sea and Bay of Bengal where the heat loss is much higher (~90-110 Wm −2 ) in all the products (Figures 8c, 8g, 8k, 8o, 8s, and 8w). NCEP-2 has the least heat loss due to LWR (Figure 8c). However, this loss is higher (60 Wm −2 ) in CFSR (Figure 8s), indicating the improvement in the reanalysis model from the previous generation. The higher or lower heat loss of LWR is due to higher or lower heat gain from SWR. In case of LHF too, NCEP-2 has the highest evaporative loss (~180-200 Wm −2 ) at the southern and northern ocean again (Figure 8d), which is almost less by the margin of 40-50 Wm −2 in rest of the flux products (Figures 8h, 8l, 8p, 8t, and 8x). 
Thus, comparatively, the lower values of Qnet in the NCEP-2 data result from less gain in terms of SWR and more loss in terms of LHF, while CFSR shows remarkable improvement in all flux components as well. In the JJA season, the Qnet distribution is roughly opposite to that of the DJF season, with the southern (northern) zones of the Indian Ocean domain having heat loss (gain), with few exceptions (Figures 9a, 9e, 9i, 9m, 9q, and 9u). Similar to the DJF season, NCEP-2 (Figure 9a) is the outlier as compared to the other products in terms of maximum heat loss. Surprisingly, the latest reanalysis, ERA5 (Figure 9u), shows heat loss over the northern Arabian Sea and Bay of Bengal region, which is almost absent in the other flux products. To understand this, we have compared Qnet from the different products during JJA with RAMA observations at the "B1" location for the years 2009, 2010, 2013, 2014, 2015, 2017, and 2018 and with the WHOI buoy for the year 2015 (Figures S1 and S2 in the supporting information). The same is compared with ERA5 for the respective locations. It is observed that the correlation during JJA is high (Table 4) as compared with the annual time series for any year (Tables 1-3). Further, both reanalysis products (NCEP-2 and MERRA) also have heat loss over the southern Arabian Sea and southern Bay of Bengal region, with different magnitudes (Figures 9e and 9i). However, this is mostly absent in both the blended flux products as well as the newer reanalyses (OAFlux and TropFlux (Figures 9i and 9m) and CFSR and ERA5 (Figures 9q and 9u)). Instead of heat loss, these two blended products and the newest reanalysis product (ERA5) show heat gain. This may be associated with the lack of representative assimilated data in the reanalysis products, which may be rectified in the blended and latest reanalysis products by including satellite data (Kumar et al., 2012). The significant improvement in the SWR distribution is visible in CFSR (Figure 9r) as compared to its predecessor NCEP-2 (Figure 9b), wherein the heat gain from SWR is drastically increased by almost 100 Wm−2 north of 10°S, implying a significant improvement in the physical processes affecting the SWR. In general, the heat gain due to SWR is higher (~180-240 Wm−2) over the northwestern part of the Indian Ocean basin as compared to the southern and eastern parts (~120-140 Wm−2) in all the flux products (Figures 9b, 9f, 9j, 9n, 9r, and 9v). The low values of SWR (~120-140 Wm−2) over the northern Bay of Bengal and the eastern equatorial Indian Ocean are due to the presence of deep clouds during the JJA season over this region, which obstruct the incoming shortwave radiation (Pokhrel & Sikka, 2013). Hatzianastassiou et al. (2005) have shown a strong anticorrelation between incoming shortwave radiation at the surface and cloud amount. Hence, the lower SWR values are associated with regions where more convection and cloud cover are present, such as the ITCZ, which is reflected in Figures 8 and 9. Further, the area and amplitude of the Bay of Bengal heat gain vary across the products, which itself indicates large uncertainties in the calculation of heat flux in the various data sets. In the LWR distribution, NCEP-2 shows basin-wide low values (~30-50 Wm−2), unlike the other products, which show higher values over the southern Indian Ocean. These higher values over the southern Indian Ocean are mainly due to the absence of clouds (Figures 9c, 9g, 9k, 9o, 9s, and 9w).
NCEP-2 has the least heat gain over the southern basin ( Figure 9b); correspondingly, it has the least LWR loss over the southern basin (Figure 9c). The LHF loss is the most important contributor toward the Qnet, which is predominantly higher in the NCEP-2 (Figure 9d) product as compared to the other products (Figures 7h, 7l, 7p, 7t, and 7x). This implies that the major contributor toward the highly negative Qnet over the Southern Oceans in the NCEP-2 data is due to higher latent heat loss, as this loss is subdued in other products resulting in very less negative Qnet over the Southern Oceans. The annual and seasonal mean of Qnet and its major components in the ensemble mean are shown in (Figures 10a-10t). The ensemble mean Qnet distribution is the most reliable spatial distribution since its spatial pattern matches with satellite-based observations (see Paramil et al., 2016, Figure 2). The heat loss region of the southern Indian Ocean is now stabilized in the ensemble annual Qnet distribution (Figure 10a) as it is neither very high as that of CFSR (Figure 4e) nor very less as that of OAFlux (Figure 4c). Similarly, the season-wise distribution of Qnet has realistically captured the north-south dipole from the southern minimum in Qnet in MAM and JJA (Figures 10e and 10i) to northern minimum in September-October-November and DJF (Figures 10m and 10q; Parampil et al., 2016). Qnet Components are now much better represented annually and seasonally due to cancellation uncertainties in individual components. This establishes that the ensemble mean being the most stable product of net heat flux in the present scenario when the individual flux uncertainties are still high. Lead-Lag Correlation of Qnet and Its Components With SST As evident from Figure 3, the Intraseasonal Oscillation (ISO) features in Qnet is prominently visible in all the buoy locations. Many previous studies showed that SST ISO is mainly governed by Qnet over BoB (Sengupta & Ravichandran, 2001). Now to test the robustness of the products, it will be worthwhile to check whether the subseasonal relation of Qnet and its components with SST is reproduced accurately? The lead-lag correlation of Qnet, SWR, LHF, and wind speed with SST over the Bay of Bengal RAMA buoy location (15°N, 90°E) is plotted using buoy, and ensemble mean data using daily June to September data (Figures 11a and 11b). The correlation at 99% significance level is marked as black dash lines. Qnet leads SST by approximately six to eight days; this nature of the relationship is precisely captured by Ensemble, but the lead time is less (approximately three to four days). Similarly, two most significant drivers of Qnet, namely, SWR leads SST and LHF lags SST in both Buoy as well as Ensemble data, only the lead/lag days varies by one to two days and moreover the lead/lag has a stronger relationship in buoy data which is weaker in case of Ensemble mean data. Wind speed too lags SST by 10-12 days. This result corroborates a larger number of previous researchers who have studied the relation of SST and Net heat Flux and its components (e.g., Wu et al., 2008;Zhang et al., 1995;Sengupta and Ravichandan, 2001). Thus, the conformity that the Ensemble product can capture the physical relationship in the intraseasonal time scale between SST and other Flux components establishes its robustness. 
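A lead-lag correlation of the kind described above can be computed as in the following illustrative sketch; the synthetic series and the seven-day lag built into them are placeholders, used only to show that the convention (positive lag meaning the flux leads SST) behaves as expected.

```python
# Illustrative sketch of the lead-lag correlation analysis: correlate a daily
# flux series (Qnet, SWR, LHF, or wind speed) with SST at a range of leads and
# lags. Positive lag means the flux leads SST by that many days.
import numpy as np

def lead_lag_corr(flux, sst, max_lag=15):
    """flux, sst: 1-D daily anomaly series of equal length."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:        # flux leads SST by `lag` days
            x, y = flux[:-lag], sst[lag:]
        elif lag < 0:      # flux lags SST
            x, y = flux[-lag:], sst[:lag]
        else:
            x, y = flux, sst
        out[lag] = np.corrcoef(x, y)[0, 1]
    return out

# Synthetic check: SST built to respond to Qnet about 7 days later
rng = np.random.default_rng(1)
qnet = rng.normal(size=500)
sst = np.roll(qnet, 7) + 0.5 * rng.normal(size=500)
lags = lead_lag_corr(qnet, sst)
print(max(lags, key=lags.get))   # expected to peak near +7 (flux leads SST)
```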
Conclusion
Surface flux products are known to have significant uncertainties that arise from both the uncertainties in the input variables (u, v, U, qa, Ta, and Ts) and the uncertainties in the estimation of the transfer coefficients (cd, ce, and ch) in the bulk flux algorithms (Brunke et al., 2003; Isemer et al., 1989; Josey et al., 1999; Valdivieso et al., 2017). In this study, we assess the uncertainties of widely used flux products over the Indian Ocean region. A comparative assessment of surface net heat flux (Qnet) and its components from four reanalyses (NCEP-2, MERRA, CFSR, and ERA5) and two blended products (OAFlux and TropFlux) is carried out over the Indian Ocean for the 10 years from 2000 to 2009. The daily time series of Qnet and its components (viz., SWR, LWR, and LHF) from each product are validated using observations from three independent buoys. The analysis revealed that both of the new generation reanalyses, ERA5 and CFSR, compare well with the buoy observations, unlike the older generation reanalysis NCEP-2. The blended product TropFlux is also close to the observations. ERA5, in general, outperforms all other products in most cases, with the highest correlation (0.89 at 15°N, 90°E; 0.74 at 8°S, 67°E; and 0.86 at 18°N, 89°E), the lowest RMSE, and the most explained variance with respect to the in situ values. The time series analysis clearly shows that five flux products (ERA5, CFSR, MERRA, OAFlux, and TropFlux) can capture the observed variation with reasonable accuracy. However, NCEP-2 is unable to reproduce the observed variation, particularly during winter and summer. These five flux products are very close to each other in terms of day-to-day variation; therefore, an ensemble of these five products was created, and the ensemble mean shows the best fit with the observed variation by virtue of removing some of the uncertainty in the flux estimates of each of the individual products. The RMSE of the Qnet estimates reduces significantly, from ~100 Wm−2 in NCEP-2 to ~45 Wm−2 in the ensemble mean, which is major progress in recent times. Time series comparisons clearly show the ability of all the products to capture the summer ISO feature at the BoB buoy location and the wintertime ISO at the southern Indian Ocean buoy location. The Qnet spatial distributions are strikingly close to each other except for NCEP-2, which shows a significant negative bias over the southern Indian Ocean. This is confirmed by the in situ observations from the buoy located in the thermocline dome region (67°E, 8°S). A comparison with buoy observations showed that ERA5 performs best among all the products. Hence, taking ERA5 as a reference, the pattern correlation between ERA5 and each of the other products ranges from 0.83 to 0.94. The ensemble mean spatial distribution on the annual and seasonal scale shows the best realization, since it gives the pattern closest to that seen in previously reported observations (Parampil et al., 2016). The annual cycle of Qnet and its components (SWR, LWR, and LHF) brings out the improvement in the newer generation CFSR with respect to the older generation NCEP-2, mainly through a much improved LHF. The zonal mean of the ensemble product shows that the heat gain zone of the tropics extends from 15°S to 20°N, beyond which the gain reduces at higher latitudes; in the case of NCEP-2, the heat loss zone is much bigger than the heat gain zone. The reversal of the heat loss (gain) in the north (south) IO from DJF to JJA is noteworthy. The average zonal uncertainty among the products, of 14.86 (10.80) Wm−2, is highest (lowest) in the JJA (MAM) season.
These uncertainties among the products have not been reduced considerably compared to those reported by Yu et al. (2006), thus indicating a need for improvement in observations, assimilation schemes, and model physics to improve the reanalysis products. Despite the uncertainty in the ensemble net heat flux product, the intraseasonal lead-lag relation of SST with Qnet, SWR, LHF, and wind speed captures the known physical relationship, thus justifying its robustness. This study will be useful to both the modeling and observational communities for checking the present status of the available observational flux products, for improving flux quality, and for obtaining proper fluxes to force models.
2020-02-06T09:10:20.097Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "e06e704eb2e874cc7dad0de06d8ceef25be2e883", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2019EA000988", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "5073fbf7bb469236781dd8fccf65158f430a7544", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
49419367
pes2o/s2orc
v3-fos-license
Investigations of Structural Requirements for BRD4 Inhibitors through Ligand- and Structure-Based 3D QSAR Approaches The bromodomain containing protein 4 (BRD4) recognizes acetylated histone proteins and plays numerous roles in the progression of a wide range of cancers, due to which it is under intense investigation as a novel anti-cancer drug target. In the present study, we performed three-dimensional quantitative structure activity relationship (3D-QSAR) molecular modeling on a series of 60 inhibitors of BRD4 protein using ligand- and structure-based alignment and different partial charges assignment methods by employing comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) approaches. The developed models were validated using various statistical methods, including non-cross validated correlation coefficient (r2), leave-one-out (LOO) cross validated correlation coefficient (q2), bootstrapping, and Fisher’s randomization test. The highly reliable and predictive CoMFA (q2 = 0.569, r2 = 0.979) and CoMSIA (q2 = 0.500, r2 = 0.982) models were obtained from a structure-based 3D-QSAR approach using Merck molecular force field (MMFF94). The best models demonstrate that electrostatic and steric fields play an important role in the biological activities of these compounds. Hence, based on the contour maps information, new compounds were designed, and their binding modes were elucidated in BRD4 protein’s active site. Further, the activities and physicochemical properties of the designed molecules were also predicted using the best 3D-QSAR models. We believe that predicted models will help us to understand the structural requirements of BRD4 protein inhibitors that belong to quinolinone and quinazolinone classes for the designing of better active compounds. Introduction The bromodomain containing protein 4 (BRD4) is a key therapeutic target for Bromodomain and extra-terminal domain (BET) inhibitors, a group of pharmaceutical drugs that have recently gone under the clinical trials [1,2]. BRD4 plays a vital role in the expression of "tumor driving" oncogenes, as shown in myeloid leukemia, multiple myeloma, and basal-like breast cancer [3,4]. It has been observed that the BRD4 protein regulates the expression of the MYC transcription factor (a master regulator) in cellular proliferation of numerous cancerous pathways [5]. The decreased amount of BRD4 expression results in reduced activity of MYC oncogene, which is a potential therapeutic target in different cancer studies [5][6][7]. The inhibition of this protein is of significant interest for the usage of BET inhibitors as therapeutic interventions for the treatment of various cancer types, inflammatory reactions, and cardiovascular diseases [8]. The BRD4 protein interacts with different classes of compounds based on their chemical structures. These classes of compounds are known as thienotriazolodiazepine (JQ1, the very first BRD4 inhibitors reported in 2010), tetra hydro-quinoline, 3,5-dimethylisoxzole, and 2-thiazolidinone derivatives [9]. Several other known inhibitory molecules, such as MS417, AZD5153, ZL0420, and ZL0454, interact with the BRD4 protein to interrupt its cellular activities. The interaction with BRD4-inhibitor MS417 causes downregulation of NF-κB transcriptional activity, as observed in HIV-associated renal disease [10]. In another study, MS417 has been used in the treatment of colorectal cancer due to its inhibitory effects [11]. 
The compound AZD5153 is involved in the treatment of thyroid carcinoma, which activates apoptosis and caspase activities in the cell [12]. The latter two compounds, ZL0420 and ZL0454, have been recently identified for the treatment of airway inflammation in mouse models using molecular docking studies [13]. In the current study, we investigated structural requirements to design better active inhibitors of BRD4 protein from quinolinone and quinazolinone classes. We employed comparative molecular field analysis (CoMFA) [14] and comparative molecular similarity indices analysis (CoMSIA) [15] methods to drive three-dimensional quantitative structure activity relationship (3D-QSAR) models along with molecular docking simulations. In this case, structural properties were correlated with the biological activities of small molecules, which were further evaluated using different statistical methods. In CoMFA modeling, steric and electrostatic fields of molecules were correlated with their biological activities [16], while in CoMSIA modeling, hydrophobic, hydrogen bond donor and acceptor fields, along with steric and electrostatic fields were correlated with activities [17]. Afterwards, key structural features were identified based on the best generated model, and then, new molecules were designed to explore better active compounds. Statistical Analyses of CoMFA and CoMSIA Models Different CoMFA-and CoMSIA-based 3D-QSAR models were generated using partial least square method (PLS) by correlating biological activities of BRD4 inhibitors in a training dataset with their field descriptors. There are several factors that affect the quality of the developed CoMFA and CoMSIA models [18]. However, the alignment of the dataset molecule and the charges assigned to them are the two major factors that affect the predictability of the generated models [19]. In this study, alignment methods, such as ligand-and receptor-based, as shown in Figure 1, along with partial charges methods like Merck molecular force field (MMFF94), Gasteiger Huckle (GH), and Gasteiger Marsilli (GM) were evaluated to obtain the best predictive CoMFA and CoMSIA models [20]. The structure-based conformation alignment method with MMFF94 charges yielded the best models. The leave-one-out (LOO) cross validated correlation coefficient (q 2 ) value with both steric and electrostatic fields in CoMFA is 0.569, along with optimum number of components (ONC) = 5, standard error of estimate (SEE) = 0.102, non-cross validated coefficient (r 2 ncv ) = 0.979, F-value = 336.72, and r 2 pred = 0.816, as given in Table 1. The model shares 47% contribution of steric and 53% electrostatic fields. In CoMFA modeling, different charges did not influence the predictive quality of the models (data shown in Supplementary File, Tables S1 and S2). Similarly, models generated using ligand-based alignment and MMFF94 charge method yielded q 2 = 0.399 value with both steric and electrostatic fields in CoMFA with ONC = 2 (see Table 1). The effects of different charges on the statistical models in ligand-based alignment method are given in the Supplementary File , Tables S3 and S4. For CoMSIA models using the MMFF94 charge method with structure-based alignment yielded q 2 = 0.500 with ONC = 6, SEE = 0.094, F-value = 396.442, r 2 ncv = 0.982, and r 2 pred =0.834 (Table 1). Different field contributions, such as steric, electrostatic, hydrophobic, hydrogen bond donor, and hydrogen bond acceptor were 0.130, 0.345, 0.254, 0.127, and 0.144, respectively, for this model. 
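The leave-one-out q2 and predictive r2 statistics reported above were obtained with SYBYL's PLS implementation; the sketch below is only an illustrative reconstruction of those metrics using scikit-learn, with a synthetic descriptor matrix standing in for the CoMFA/CoMSIA fields of the 50 training compounds.

```python
# Illustrative sketch (not the SYBYL implementation used in the study):
# leave-one-out cross-validated q2 for a PLS model relating field descriptors
# (X) to pIC50 activities (y), plus predictive r2 for an external test set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def loo_q2(X, y, n_components):
    """q2 = 1 - PRESS / sum((y - mean(y))^2) with leave-one-out prediction."""
    press = 0.0
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components).fit(X[train], y[train])
        press += float((model.predict(X[test]).ravel()[0] - y[test][0]) ** 2)
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

def predictive_r2(y_test, y_test_pred, y_train_mean):
    """r2_pred = (SD - PRESS) / SD, with SD taken about the training-set mean."""
    sd = np.sum((y_test - y_train_mean) ** 2)
    return (sd - np.sum((y_test - y_test_pred) ** 2)) / sd

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))                      # placeholder descriptor matrix
y = X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=50)
print([round(loo_q2(X, y, n), 3) for n in range(1, 7)])   # pick ONC at max q2
print(round(predictive_r2(np.array([6.2, 7.1, 5.8]),
                          np.array([6.0, 7.4, 5.9]), y.mean()), 3))
```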
The results showed that both electrostatic and hydrophobic fields played a substantial role in the CoMSIA model, while steric and electrostatic fields played an important role in the CoMFA models with 0.470 and 0.530 contribution, respectively. CoMSIA models generated based on GH and GM charges were not statistically significant. The reasons for the better performance of one charge method over the other in a CoMFA/CoMSIA model is unknown as of yet, as the prior literature provides conflicting results of different charge-based methods. Merck molecular force field (MMFF94); comparative molecular field analysis (CoMFA); comparative molecular similarity indices analysis (CoMSIA); ONC = optimal number of components; q 2 = cross-validated correlation coefficient; r 2 = determination coefficient; r 2 ncv = non-cross validated coefficient; SEE = standard error of estimate; F = Fischer's F-value; Pred-r 2 = predictive r 2 ; r 2 bs = r 2 obtained after bootstrapping; and SD bs = bootstrapping standard deviation. Further, receptor-based CoMFA and CoMSIA models were used to predict the activities of the training and test dataset compounds. The experimental and predicted activities are shown in Figure 2. The scatter plots depict that all points are situated around the diagonal lines, and there is no obvious deviated point present on them. The higher r 2 pred values for both CoMFA and CoMSIA models for external validation confirm that the models are of good quality. Similarly, internal validations like r 2 ncv , F-values, and r 2 bs (bootstrapping) values show that the models are quite stable and accurate which can be further used to design better active compounds. As the statistical significance of a model is affected by individual fields that are totally independent from each other, so the best models were selected to design new compounds with improved activities. The designing was performed with the help of indicated regions in 3D color contour maps [21]. CoMFA Contour Maps Contour maps are the graphical interpretation of the generated statistical models that provide information about the compounds in a 3D space where substitution with functional groups may alter the biological activities of the compounds. The CoMFA contour maps of steric and electrostatic fields from the best CoMFA model of a highly active compound are shown in Figure 3a,b. In Figure 3a, the green contour represents the favored area of bulky group substitution, while the yellow contour indicates that the lighter group substitution will be favorable to enhance the biological activities of the compounds. In Figure 3b, the blue contour indicates the electron-donating group, and the red contour represents that the electron-withdrawing group will be favorable to improve the activity. The obtained 3D-QSAR models were used to explain the structure activity relationship of the dataset compounds (see "material and methods" section). The bulky group substitution at the green contour near the R3 position of the highly active compound, as shown in Figure 3a, becomes the R1 position for the compounds (1-13); however, the lighter group at the yellow contour located towards N-methyl piperdine will enhance the biological activity of the compound. These contours may explain why compounds, such as 4-11, having bulkier groups with substituted phenyl rings, are more active than compounds 1-3, having lighter groups. 
Similarly, compounds 14, 17, and 28, having lighter groups at R2 position, are more active than compounds 32-38, which possess the bulkier groups at this position. In electrostatic contours, as shown in Figure 3b, the presence of the large blue contour near the R3 position of 3-methyl-1,7-naphthyridin-2(1H)-one suggests that electron-donating groups will likely enhance the biological activity of the designed compounds. Compounds 55, 57-59, which have pyridine rings with different electron-donating substituent groups, are more active than compounds 51, 54, 56, and 60 possessing only heterocyclic rings. Similarly, the large red contour near the R2 position suggests that an electron-withdrawing group is desirable for improving the activity. Therefore, compounds such as 14, 15, 28, and 30 are less active than 22 and 34 because they have an electron-donating group near the R2 position. CoMSIA Contour Maps CoMSIA contour maps of the steric and electrostatic fields were similar to CoMFA models. The contour maps of the hydrophobic, hydrogen bond donor, and acceptor fields are shown in Figure 3c-e. The hydrophobic field contour reveals that the big yellow contour at the R2 position is hydrophobic in nature, while the white contour at R3 is hydrophilic in nature to enhance the activity (see Figure 3c). In the dataset, we can observe that compounds 34, 37, and 38, having hydrophobic groups at the R2 position, are exhibiting higher activities compared to other compounds having hydrophilic groups at this position. Hydrogen bond acceptor and donor fields of the CoMSIA model are shown in Figure 3d,e, respectively. The magenta contour in Figure 3d indicates the region where hydrogen bond-accepting substituents enhance the inhibitory activity, while the red contour specifies the regions where hydrogen bond-accepting moiety may deteriorate the biological activity of the compounds. Further, the hydrogen bond acceptor contour in Figure 3d corresponds to the electron-donating group in the electrostatic field contour map of Figure 3b. Similarly, the cyan contour at the R2 position in Figure 3e favors the hydrogen bond-donating moieties for enhancing the activity, whereas the purple contour disfavors the areas for such moieties to increase the activity of the compounds. This hydrogen bond-donating contour is near to the electropositive contour in the electrostatic contour of Figure 3b. Hence, from the information in these contour maps, it is clear that the hydrogen bond acceptor/donor contours nearly coincide to the electron-donating and -accepting favored contours for enhancing the biological activities of the compounds. In these figures, the contribution of the hydrogen bond acceptor and donor regions is 70% for favorable and 30% for non-favorable areas in terms of enhancing the activity. Analysis of Structure-Based Generated Conformations Molecular docking studies were performed to assess the performance of the glide docking score. The co-crystal ligand (I-BET151) in human BRD4 was first removed from the protein structure and then sketched, minimized, and redocked at the binding site. The best binding mode, with a glide score of -6.76 kcal/mol, was superimposed on the co-crystal ligand as shown in Figure 4A. The docked pose exhibited a similar interaction pattern as was present in the crystal structure [22]. The ASN140 residue of the BRD4 protein and one water molecule were making the hydrogen-bonding interaction with nitrogen and oxygen atoms of the isoxazole moiety, respectively. 
The distances between these interacting atoms was 2 Å and 2.3 Å. The presence of water molecules at the bottom of the active site is essential, because co-crystal ligand interacts with TYR97 [23] indirectly by hydrogen bonding through a water molecule. The water molecules also prevent the direct contact of acetylated lysine of histone at the bottom of the active site [24]. Keeping this in mind, water molecules were kept in the binding site while docking all dataset compounds. The binding pose of the best active compound (42) with the co-crystal ligand is shown in Figure 4B. The best pose yielded a −6.70 kcal/mol glide score with side chain hydrogen bonding interactions. The hydrogen-bonding pattern is similar to the co-crystal ligand, in which ASN140 and the water molecules present at the bottom are interacting with carbonyl oxygen of 3-methyl-1,7-naphthyridin-2(1H)-one scaffold. Similarly, an extra hydrogen bond is also present in the form of a salt bridge between the ASP144 and N-H of the pipridine moiety of the compound. Designing of New Compounds and Their Physicochemical Properties' Calculations Using the best CoMFA and CoMSIA models generated by receptor-based modeling, ten new BRD4 inhibitors were designed using contour map information by attaching different substituents at various positions of 3-methyl-1,7-naphthyridin-2(1H)-one. The physicochemical properties, as shown in Table 2, were predicted using QikProp software. We have found that most of the newly designed compounds followed the Lipniski drug-like rules with almost one rule violation. The predicted octanol/water partition coefficient (QPlogPo/w) values between 1.232 and 4.433, HERG K+ channels (QPlogHERG) blocking IC 50 values between −6.894 and −5.899, caco-2 cell permeability (QPPCaco) values between 13.145 and 195.227, brain/blood partition coefficient (QPlogBB) values between −3.049 and −1.354, and human serum albumin binding (QPkhsa) values between −0.067 and 0.722 are within the acceptable ranges for 95% oral drugs as described in [25]. These physicochemical properties (e.g., QPlogPo/w and QPlogHERG) within the recommended ranges ensure the smooth distribution of drug functioning and prevention against sudden risks of cardiac arrest, respectively. Biological Activities Prediction of Newly Designed Compounds The biological activities of newly designed compounds listed in Table 3 were predicted using the best CoMFA and CoMSIA models. Before the activities' prediction, molecular alignment of the newly designed molecules was achieved using docking simulations. The docking studies showed that these compounds possessed similar binding modes, as shown in Figure 5. The interaction pattern is similar to that of the other compounds present in the dataset. All of the designed compounds make H-bonding with ASN140 and indirectly with TYR97 through water molecules that lie at the bottom of the active site. The docking scores and predicted activities are reported in Table 3. The predicted activities of these newly designed compounds are better than most of training dataset compounds. These results prove that generated 3D-QSAR models with significant predictive ability could be used for structural optimization of the newly designed compounds. Biological Data Collection The set of 60 chemically diverse inhibitors of BRD4 protein, having similar chemical scaffold, were collected from the literature. The compounds used as racemic mixtures for biological activities [26][27][28] were not selected for the final data set. 
The chemical structures of the compounds and biological data along with their pIC50 values are listed in Table 4. Further, for model building and validation, the selected compounds were divided into training and test datasets using a random selection method [29]. In total, 50 compounds were included in training, while 10 were added in test datasets to evaluate the predictability of the generated models. Dataset Compounds Modeling and Alignment The 3D structures of the inhibitors were sketched in the SYBYL-X 2.1.1 molecular modeling package. The omega tool from OpenEye Scientific Software [30] was used for conformational search of each molecule. Finally, the lowest energy conformer was selected from all the resulting structural conformations. The partial charges were calculated using different methods including GH, GM, and MMFF94 charges [31,32]. Because the alignment of molecules is believed to be the most crucial and important requirement for a 3D-QSAR model's robustness and predictability, the alignments were performed based on common substructures of the template molecule and all the other compounds present in the dataset. Due to the highest inhibitory activity, compound 42 was considered to be a template molecule. For ligand-based modeling, alignment was obtained after superposition of the lowest energy conformation of each molecule obtained from the omega tool on the compound 42. However, for structure-based modeling, the alignment was performed on the conformations obtained after the molecular docking simulations. CoMFA and CoMSIA Fields Calculations For CoMFA steric and electrostatic field calculations, a 3D cubic lattice with 2.0 Å grid spacing in the X, Y, and Z coordinates was generated using a default value in SYBYL. The energy cut-off was fixed to ±30 kcal/mol to get rid of high energy values. For the steric probe, an sp3-hybridized carbon was taken while +1.0 charge was selected as an electrostatic probe atom. The five fields of CoMSIA including steric, electrostatic, hydrophobic, and hydrogen bond donor and acceptor were also calculated using a probe atom with a radius of 1.0 Å. The default value of 0.3 for the attenuation factor (α) was used, which calculates the standard distance dependent similarity indices. Similarly, an indices calculation was performed by the following equation 1. All the numerical calculations were executed in the same way as for the CoMFA analysis [31,33,34]. In the above equation, A q means the similarity index of point q; k denotes the physiochemical properties of steric and electrostatic descriptors; ω probe,k represents the probe atom; i denotes the summation index of molecule j, while ω ik is the observed value k of a specific property of the atom I, and r is the atomic radius [35]. Partial Least Squares (PLS) Regression Analyses and Validations of the Models PLS regression analysis remained very useful for 3D-QSAR models in which CoMFA and CoMSIA descriptors were used as dependent variables and biological activities are taken as independent variables [36]. For selecting the best model, which probably has the maximum predictability, the cross-validation analysis using the LOO method was used, which can be defined by the following Equation (2): where each value represents the predicted (y pred ), experimental (y obs ), and mean (y mean ) values of the target property. By using ONC, non-cross validated analysis was performed without column filtering. Along with cross and non-cross validated r 2 , SEE values were calculated using SYBYL. 
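Equation (1) itself did not survive extraction into this text; the variables defined above (the similarity index A at grid point q, the property k, the probe term ω_probe,k, the atomic property ω_ik, the attenuation factor α = 0.3, and the probe-atom distance) match the Gaussian-type CoMSIA similarity index commonly written as A_k(q) = −Σ_i ω_probe,k ω_ik exp(−α r_iq²). The sketch below is an illustrative reconstruction under that assumption, not the SYBYL implementation, and the toy geometry is a placeholder.

```python
# Illustrative sketch of a Gaussian-type CoMSIA similarity-index field
# evaluated on a lattice of grid points around a molecule.
import numpy as np

def comsia_field(grid_points, atom_coords, atom_props, w_probe=1.0, alpha=0.3):
    """Similarity indices A_k(q) on a lattice.
    grid_points: (Q, 3) lattice coordinates; atom_coords: (N, 3);
    atom_props: (N,) values of property k (e.g., steric, electrostatic)."""
    d2 = ((grid_points[:, None, :] - atom_coords[None, :, :]) ** 2).sum(-1)
    return -np.sum(w_probe * atom_props[None, :] * np.exp(-alpha * d2), axis=1)

# Toy three-atom molecule with a unit "steric" property on a coarse 2 A lattice
grid = np.array([[x, y, 0.0] for x in range(-4, 5, 2) for y in range(-4, 5, 2)],
                dtype=float)
atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
print(comsia_field(grid, atoms, np.ones(3)).round(3))
```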
For further validation of generated models, bootstrapping analyses were performed for 100 runs to evaluate the effectiveness of the derived models. Predictive r 2 , based on the test set molecules, was used to express the predictive ability of the generated models. The predictive r 2 is defined by the following Equation (3): In the above equation, SD is the sum of the squared deviations between the biological activities of the test set and the mean activities of the training set molecules. Whereas PRESS is the sum of the squared deviations between the observed and the predicted activities of the test molecules. Preparation of Ligands and Protein Structure For the molecular docking studies, dataset compounds were prepared using the ligprep tool embedded in Schrodinger software (www.schrodinger.com). The possible ionization states and stereoisomers were generated by keeping a maximum of 32 conformations of each molecule using an OPLS2005 forcefield. The BRD4 protein crystal structure [37] (PDB ID: 3zyu), having the resolution of 1.5 Å, was retrieved from the protein data bank (www.rcsb.org). Missing hydrogen atoms were added and other unnecessary crystal ligands such as 1,2-ethanediol and buffer reagents were removed. The water molecules were also removed except the four present at the bottom of the active site. Finally, restrained minimization was performed using the OPLS2005 forcefield to remove steric clashes. Meanwhile, conformation of entire protein-ligand complex was allowed to deviate 0.30 Å root mean square deviation from its native structure. Molecular Docking Protocol To predict the plausible binding modes of the compounds and get the structure-based alignment for the CoMFA and CoMSIA modeling, the prepared crystal structure of the BRD4 protein was employed for receptor grid generation using Schrodinger software. During the grid box generation, the active site was considered where the co-crystalized ligand (GSK1210151A) is present in the protein. The X, Y, and Z coordinates at 0.76, −8.19, and 22.37 were defined with 10 Å length in each dimension. The hydrogen atoms of the hydroxyl and thiol groups of amino acids present in the active site were permitted to rotate during the molecular docking simulations. No other torsional or positional restraint was applied except the hydrogen bond formation with the ASN140 residue. During the docking simulation, softening potential for the non-polar parts of the receptor was applied by adjusting the scaling factor of van der Waal's radii to 0.80 with a cut-off value of 0.15 along with other parameters' default settings. Among the all docking poses of the docked compounds, only the top five poses of each compound were subjected to minimization. Eventually, the best pose was selected based on the highest glide score and best binding mode using standard precision (sp) mode in glide [38]. Conclusions The BRD4 protein plays various roles in the progression of different types of cancers, which makes it an attractive drug target. In this study, we performed a 3D-QSAR modeling using CoMFA and CoMSIA approaches on series of 60 BRD4 protein inhibitor molecules containing quinolinone and quinazolinone as central scaffolds. Several statistical models were generated using the lowest energy-and structure-based bioactive conformations using different charge methods. The best predictive models were generated using different molecular alignment and charges-based methods. 
The docking-based alignment method with MMFF94 charges yielded the best CoMFA and CoMSIA models. The contour maps suggest that bulky electron-donating groups near the R3 position and lighter electron-withdrawing groups near the R2 position will help to enhance the biological activities of this series of compounds. Based on the contour map information of the best models, ten new compounds were designed, and their biological activities were predicted. Their binding interactions with the BRD4 protein were also assessed using docking simulations. Finally, the predicted pharmacokinetic properties showed that most of the designed molecules fall within the ranges acceptable for the majority of oral drugs. Hence, we believe that the developed 3D-QSAR models could be useful for the development of more potent BRD4 inhibitors.
2018-07-04T00:07:14.915Z
2018-06-25T00:00:00.000
{ "year": 2018, "sha1": "fe7f3305bbfddc754ef70b2a7e3e751731cf0d0e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/23/7/1527/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fe7f3305bbfddc754ef70b2a7e3e751731cf0d0e", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
243593291
pes2o/s2orc
v3-fos-license
Case Report on Rheumatoid Arthritis
Rheumatoid arthritis (RA) is an autoimmune disorder in which the body's immune system attacks the body's own tissues and induces inflammation in the affected areas. It commonly affects the joints of the hands, wrists, and knees. In rheumatoid arthritis, the inflamed joint lining damages joint tissue. This condition may cause chronic or long-term pain, instability, and deformity. Symptoms include exhaustion, pain, and depression. If the patient does not undergo early diagnosis and care for symptoms, a series of complications can arise, including osteoporosis, rheumatoid nodules, dry eyes and mouth, and carpal tunnel syndrome.
Case Report: The patient, Yogita Shinde, is a 65-year-old Hindu female living in Kandhali. She was admitted to AVBR Hospital with the chief complaints of pain in her shoulders and hands, joint pain, and swelling of both hands. She started taking ibuprofen 800 mg three times per day to relieve the discomfort and stiffness. Three months earlier, while doing her work, she had developed pain in her right and left shoulders. She also started to feel very fatigued and short-tempered. Tablet ibuprofen was not effective against the pain for very long. One morning, Yogita could not lift her arms without intense pain in her back. She realized that it was time to seek help. She spoke to her parents, and they advised her to see a physician. The primary healthcare practitioner (PHP) examined her and carried out a variety of blood tests. Positive rheumatoid factor, anti-CCP antibodies, elevated ESR, and elevated C-reactive protein were seen in the blood samples. These findings were communicated to Yogita, and her PCP referred her to a rheumatologist to be seen as soon as possible. The primary healthcare practitioner inquired about the medical histories of Yogita's parents and grandparents, family conditions, Yogita's medical and operative records, and details of her family and working life. After that, the physician started the treatment, after which Yogita felt better for some days. After a few weeks she had recurrent pain in her hands and feet, which was intolerable to her. She was then admitted to AVBR Hospital on 20 September 2020.
INTRODUCTION
Rheumatoid arthritis is a chronic, inflammatory, pathological autoimmune disorder of diarthrodial joint connective tissue. Typically, rheumatoid arthritis is characterized by cycles of remission and recurrence. RA often has extra-articular manifestations [1]. RA causes joint inflammation and pain. In this disease, the body's immune system does not function correctly and attacks the joint lining [2]. The exact cause of RA is unknown; it is presumably attributable to a combination of genetic and environmental influences [1]. It is an important condition to diagnose early, as a delay in treatment can worsen the prognosis, leading to more damage to tissues and organs, including the lungs and heart, and even death [2]. This disorder has several symptoms, such as fatigue, anorexia, weight loss, and generalized stiffness. In the following weeks and months, the stiffness becomes localized. RA typically involves joints that are affected by discomfort, redness, reduced mobility, and inflammatory signs, including warmth, swelling, or tenderness [1]. Rheumatoid arthritis occurs in about three cases per 10,000 people and affects roughly 1% of the global population, with its prevalence rising with age and onset most common between 35 and 50 years of age.
The main treatment objective of Rheumatoid arthritis is to manage infection, alleviate the pain, and eliminate rheumatoid arthritis disease. Treatments may include occupational or physical therapy, and exercise is generally used in recovery. NON-PHARMACOLOGIC THERAPIES The Non-pharmacologic therapies include the following: Rest therapy: When joint are inflamed, the risk of injury to the joint and nearby soft tissue structures (such as ligaments and tendons) is high. This is why inflamed joint should be rested. Maintaining good range of motion in the joint and good fitness overall are important in coping with the overall features of the diseases. [4]. Exercise: Ache and stiffness sometimes prompt people to become inactive with rheumatoid arthritis. Inactivity, however, can result in a lack of joint mobility, constriction, and a loss of muscle strength. These, in effect, reduce the flexibility of joints and raise fatigue. [5] Physical therapists and occupational therapists advised to do regular routinely exercises. This help to avoid and change these results. Valuable activities include: range -of -motion exercises to maintain and recover joint motion; strengthenhancing movements; and workout and some exercises like(swimming, walking, and cycling) to improve stamina [5]. Physical and Occupational Therapy Physical and occupational therapy may alleviate the pain, minimize inflammation, and help to maintain joint function and structure for patients with rheumatoid arthritis.  The application of heat and cold can relieve pain and joint stiffness.  Ache or hardness may be reduced by applying heat or ice.  Inflammation of the sheaths underlying tendons may be reduced by ultrasound.  Routinely exercises can enhance and improve the joints' range of motion. Occupational therapists also concentrate on supporting individuals with rheumatoid arthritis learn to interact regularly in working and recreational activity, with particular focus on managing and maintaining the hands and arms' good function [6]. Nutrition and Dietary Therapy To decrease stress on affected joints, weight loss may be advised for overweight and obese people. There seems to be a greater chance of developing cardiovascular disease in people with rheumatoid arthritis. High cholesterol can lead to diet modifications (a risk factor for coronary artery disease). In order to reach a desirable cholesterol level, a nutritionist may prescribe particular foods to consume or avoid changes in diet have been investigated as treatments for rheumatoid arthritis, but no diet has been proven to cure it. No herbal or nutritional supplements, such as cartilage or collagen, can cure rheumatoid arthritis. These treatments can be dangerous and are not usually recommended [7]. CASE HISTORY The female patient name is yogita shinde 65year-old religion by Hindu lived in the kandhali. She is a housewife and does various home activities, she lived in a joined family with her husband and son, who is the breadwinner of her family, Mr. Samir Shinde has completed his education at class 12th and he doing work as an ST-driver and he has bred winner of his family and monthly income is around 20,000 per month. The source of health care is a government hospital in Wardha. She was admitted in the AVBR Hospital with the chief complaint of pain in hands and shoulder, joints pain, swelling on both hands for the last 10 days before coming to the hospital. 
Before She came to the hospital she is admitted in ortho ward no 32, she was having intolerable pain in hands and swelling also present on her hands. She take tablet brufen 400 MG previously and no medical history in past and she has done her family planning (tubal ligation) other than she not having any type of surgical history. NURSING ASSESSMENT A detailed patient physical evaluation, involving inspection of all the joints, many of which were sore and swollen. Her Rapid 3 Score was 21.8, consistent with severe impairment and significant disease activity. The physician explained the results of the test with her and the laboratory samples from her PCP were checked. The physician prescribed x-rays of his arms, wrists, and shoulders. The physician also referred her to the rheumatologist. Together, they discussed with Yogita that Rheumatoid Arthritis (RA) is her most likely diagnosis. General details were discussed briefly regarding RA and common treatment options. Low-dose prednisone was recommended for Yogita and corticosteroid injections were given by the physician to both of her shoulders. The drugs' general side effects and potential outcomes were clarified with her and her symptoms would eventually be managed but not necessarily solved. Nurse advised there she is having any difficulty calls the physician and clear her doubts. Two weeks later, at her follow-up appointment with that of the rheumatologist, tests verified which her records and examination revealed, that she had developed rheumatoid arthritis. The shoulders of Yogita began to look a little better, but she still had discomfort and trouble in lifting her arms. For more examination and care of her shoulders and guidance on exercise change relevant to her practice, the Rheumatologist instructed the Physical Therapist (PT). Oral methotrexate and folic acid were administered and a low dosage of prednisone was recommended. The physician prescribed written information about the drugs and referred her to the pharmacist's office for a prescription analysis and to ask any questions. Specific information was given on rheumatoid arthritis. Her Rapid 3 Score, consistent with low disease activity and disease severity, had decreased to 4. There was slight joint stiffness even though she had 3 swollen and sore knees. Nursing Management Nursing care of the patient with RA should follow a basic plan of care. Nursing Assessment: The assessment of a patient with RA can contribute to its diagnosis.  History and physical exam: The history and physical examination address manifestations such as bilateral and symmetric stiffness, tenderness, swelling, and temperature changes in the joints.  Extra-articular changes: The patient is also assessed for extra-articular changes and these include weight loss, sensory changes, lymph node enlargement, and fatigue. The nurse must educate Yogita in:  The nurse has to provide a variety of comfort measures (eg. application of heat or cold, massage, position changes, rest, foam mattress, supportive pillow, splints, relaxation techniques, diversional activities).  Administer anti-inflammatory, analgesic, and slow-acting anti-rheumatic medications as prescribed by doctor order.  Administer analgesics medication to meet patient's need for pain management.  Encourage verbalization of feelings about pain and chronicity of disease. [8]  Assist in identification of pain that leads to use of unproven methods of treatment. 
 Provide instruction about fatigue, describe comfort measures and sleep routine (warm bath and relaxation techniques that promote sleep) and also explain importance of rest for relieving systematic, articular, and emotional stress.  Assisted in doing a daily activity of the patient and advised to perform self-care as much as possible.  Help patient identify elements of control over disease symptoms and treatment.  Encourage patient's verbalization of feelings, perceptions, and fears.  Develop plan for managing symptoms and enlisting support of family and friends to promote daily function [9]. Nursing Diagnosis Nursing diagnosis according to patient complaints are as follow: 1. Impaired physical mobility related to joint pain, stiffness. 2. Chronic pain is related to inflammation and intolerable pain. 3. Disturbed body image related to chronic disease activity, long-term treatment, and inability to perform the usual activity. Follow up and outcomes: At the time of discharge, the patient's condition was satisfactory. The relatives were informed about the prognosis of the disease, drug therapy, and the importance of taking medication in time. It is also told that they should come after 7 days for routine follow to see the disease outcome CONCLUSION Rheumatoid arthritis is a debilitating, chronic, inflammatory disease, capable of causing joint damage as well as long-term disability. Early diagnosis and intervention are essential for the prevention of serious damage and loss of essential bodily functions. This case presented the prolonged history of rheumatoid arthritis causing mainly involves the joints, generally multiple hands and leg joints, and more frequently involves the joints in the neck, wrists, and knees. According to the patient condition the treatment given by the physician and Nursing management goals should include the relief of symptoms, preservation of joint function, prevention of joint damage and deformity, maintenance of an acceptable lifestyle, and patient education. To achieve these aims the nurse should play a pivotal role within the multidisciplinary team, ensuring the highest quality of care. CONSENT Before taking this case, information was given to the patient and their relatives and informed consent was obtained from the patient as well as relatives. ETHICAL APPROVAL As per international standard or university standard written ethical approval has been collected and preserved by the author(s).
Minimally Invasive Tensiometry: A New Modality for Per-Operative Measurement of Medialization and Tension During Laparoscopic Hernia Surgery Background: Newly developed techniques for minimally invasive abdominal wall reconstruction (AWR) for complex ventral hernia are continuously evolving. In order to achieve hernia defect closure, the aponeurotic edges of the hernia defect need to be approximated. Currently, surgeons have no way to objectively measure and quantify the traction required to approximate these edges. This study presents minimally invasive tensiometry (MINT), a novel technology for measuring fascial tension, as well as initial experiences and results using it. Methods: The MINT device was designed using rapid prototyping principles. It was designed as an add-on tool for any existing laparoscopic instrument, enabling objective assessment of abdominal wall tension by the use of a manually operated linear spring. Pre-clinical measurements of medialization at 10 and 20 N of tension during AWR were performed on fresh-frozen Post-Mortem Human Specimens (PMHS). Results: Three specimens were included, and a total of 36 measurements of medialization at three different levels of the abdominal wall were performed under structured and similar circumstances. Median total medialization with 20 Newton (N) of applied tension was 25 millimetres (mm) cranially, 37.5 mm at the umbilicus and 27.5 mm at the caudal level. The highest rate of medialization was seen at the umbilical level (2.25 mm/N). Conclusion: MINT is a novel non-invasive technique, which allows surgeons to intraoperatively measure fascial tension when performing AWR. The MINT device is easy to use and reproduce. The next step is to start performing clinical measurements applying MINT during AWR. INTRODUCTION Minimally invasive techniques such as laparoscopic, robotic and hybrid approaches are increasingly used for both complex and non-complex AWR. Outcomes are promising, with equal or lower recurrence rates compared to open surgical approaches, and lower rates of surgical site infection, seroma and hematoma (1,2,3,4,5). Additionally, due to less traumatic skin incisions, patients have less postoperative pain, a shorter length of stay and a more rapid return to physical activity such as sports and work (6,7,8). The ideal end situation during AWR is fascial closure, which is often augmented with mesh implantation, ideally in the retro-rectus position (9). The abdominal wall defect (hernia) may be too large to close without the aid of techniques to gain additional medialization, such as retro-rectus dissection combined with anterior or posterior component separation techniques (CST). The main difficulty is the tension in the lateral abdominal wall, which needs to be overcome to achieve permanent closure of the defect. The force, or traction, needed to overcome the myofascial tension can vary widely between hernias, mainly depending on patient characteristics, recurrence status and previous mesh placement. High tension on the approximated aponeurotic edges is associated with ischaemia and can lead to incisional hernia (IH) recurrence (10). Intra-abdominal hypertension and abdominal compartment syndrome, which can have devastating consequences, have also been described after larger hernia repairs (11). Additionally, in a prior experimental study, no correlation was found between hernia width and abdominal wall tension, meaning that the size of the defect alone is not an indicator of the expected tension on the abdominal wall (12).
Several methods have been described in the literature to objectively assess abdominal wall tension during open repair, but surgeons have no way to objectively measure either of these two factors, medialization and tension, during minimally invasive surgery (12,13). Minimally invasive tensiometry is being developed in order to fulfil this role. MINT is an add-on tool for existing laparoscopic instruments, which enables an objective assessment of abdominal wall tension by the use of a linear spring. Due to the many difficulties that come with the lengthy and costly development and regulatory approval of novel laparoscopic instruments, the choice was made to focus on the design of a simple, easy-to-attach add-on instrument that can be used with several different manufacturers' laparoscopic devices all over the world. This decreases time to value and speeds up the process of collecting relevant data for further research. This study presents a brief description of this new measurement device, as well as initial experiences and results using MINT on fresh-frozen PMHS. MINT Device Development Primarily, the device needed to enable quick and easy measurement of tension without significantly changing or prolonging the procedure. To enable universal applicability, the device design needed to accommodate several different shapes and sizes of laparoscopic instruments, without interfering with their functionality and safety. Additionally, the device needed to reliably, accurately and safely measure the tension exerted on the tissue during repair of the abdominal wall defect. Lastly, the 3D-printed device had to be easy to fabricate on a large scale and had to be sterilizable without any loss of function. Rigid clamping of the laparoscopic instrument needed to be enabled whilst resisting the pulling force (traction) necessary to close the defect. Thus, a clamping mechanism relying on mechanical fastening was chosen. Due to safety and functionality constraints of laparoscopic instruments, the chosen point of attachment was the handle. Keeping requirements regarding safety, simplicity and sterility in mind, it was decided to use a purely mechanical force sensor based on a linear spring scale, inspired by methods used in open repair (12,13). Force is measured by assessing the elongation of the spring as a result of the pulling force (Hooke's law). The most important part of this design aspect is the stiffness of the spring. On the one hand, it needs to be stiff enough to give an accurate representation of the force exerted on it; on the other hand, its elongation needs to be large enough that the surgeon can quickly and clearly read the elongation on a scale during the procedure. The spring requires a working range between 10 N and 60 N; the upper limit was based on a literature review (14). The handlebar used to operate the device displays the force exerted on the instrument during use. Rapid prototyping was used to manufacture the device, as it brings many benefits related to manufacturing time, design possibilities, assembly, and costs. The material of choice was carbon-fiber-reinforced polyamide-12 (PA-12, "Onyx™", created by Markforged), as PA-12 is not only known for its tensile strength, impact strength and toughness, but is also easier to sterilize than the frequently used polylactic acid (PLA). PA-12 is recommended for making parts of surgical instruments that need to be sterilizable by autoclaving (15). Results indicated that the prototype withstands at least five cycles of steam sterilization (14).
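Since the device reads force purely mechanically from spring elongation, the underlying relationship is simply Hooke's law, F = k·x. The sketch below illustrates this conversion for a hypothetical spring constant; the actual stiffness of the MINT spring is not reported in the text, so the value used here is an assumption for illustration only.

```python
# Minimal sketch of the Hooke's-law readout principle used by the MINT device:
# applied force F = k * x, where k is the spring constant and x the elongation.
# SPRING_CONSTANT_N_PER_MM is a hypothetical value, not the device's real constant.

SPRING_CONSTANT_N_PER_MM = 2.0  # assumed stiffness [N/mm]

def force_from_elongation(elongation_mm: float) -> float:
    """Convert measured spring elongation (mm) into applied traction (N)."""
    return SPRING_CONSTANT_N_PER_MM * elongation_mm

def elongation_for_force(force_n: float) -> float:
    """Elongation (mm) the surgeon would read on the scale for a given force (N)."""
    return force_n / SPRING_CONSTANT_N_PER_MM

# The analog scale spans the stated working range of 10-60 N:
for f in (10, 20, 60):
    print(f"{f} N -> {elongation_for_force(f):.1f} mm of spring travel")
```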
The clamp design was inspired by the well-known C-clamp (Figure 1). It consists of a frame, shaped like a C, which forms into a slender, hollow rod-shaped segment that houses the spring and handle of the device. A block is used to enable clamping of devices with variable thickness (Figure 2). This block was designed so that it could only move up and down without moving forward, backward or sideways. The bulges on the lower beam of the frame created slots for the walls of the block to slide through whilst the device was being used (Figure 2). The two contact surfaces contain cutouts that house silicone pads. These not only account for different handle shapes by cushioning the handle, but also protect the handle from breaking during clamping and increase friction between the handle and the clamp (Figure 2). At the bottom of the block a cutout was created in which a stainless steel plate was placed (Figure 2). The plate made sure that the bolt did not damage the block while adjusting the device and fastening the clamp. The slender rod-shaped part of the housing contains a hole throughout its entire length. The handle, connected to the spring, is inserted here and connected to the housing using a pin (Figure 3). Holes on the side of the housing enabled easy assembly of the device. The device was designed in such a way that the pre-tension of the spring is taken into account whilst using the device. During operation, the handle of the device is pulled out of the housing, displaying the force necessary to do so on an analog scale (10-60 N). The scaling was created after the spring constant was validated using weights. It was assumed that most force measurements would be within the working range of 10-60 N, as lower forces point to direct closure and higher forces will most likely indicate component separation. Specimen Selection Experiments were performed on fresh-frozen PMHS in the anatomical lab of the Erasmus University Medical Center, department of Anatomy and Neuroscience. Specimens were obtained from the university's body donations program, wherein donors consented to donation for scientific purposes. As these experiments were performed on PMHS and not on live human or animal subjects, per local regulations, no formal ethical committee approval was required. Due to Dutch privacy law, specimens' medical histories were unavailable, but specimens with visible abdominal scarring, which might allude to previous abdominal pathology, were excluded. Experimental Procedure Experiments were performed during a surgical skills training session, where experienced hernia surgeons educated other surgeons on the posterior CST-TAR. After completion of the posterior CST, a single 5-mm trocar was inserted at the umbilical level, as laterally as possible. Through this trocar, atraumatic fenestrated grasping forceps (Symmetry Surgical™ Access 32-5117R) were inserted and the contralateral anterior rectus sheath was grasped (Figures 4, 5). Subsequently, the MINT device was attached to the handle of the instrument by turning the rotation knob until sturdy clamping was achieved. The surgeon applied tension to the grip in order to pull the edge of the defect to the mid-line of the abdomen. Simultaneously, the displacement of the device relative to the trocar was read out using a ruler.
Measurements of the abdominal wall were taken at three levels: cranial, umbilical and caudal, meaning that the anterior rectus sheath was grasped at these three levels. The tissue was first pulled taut, without visually straining it. Then, tension was applied, and medialization was measured at 10 N and 20 N. Afterwards, the trocar was moved to the contralateral side and measurements were repeated for the other side of the abdominal wall. This allowed for a total of 12 measurements per specimen. After enough results had been acquired, the MINT device was detached. Statistical Analysis Data were collected in a spreadsheet. Medializations in millimetres and tensions in Newtons were summarized and medians were calculated. For each of the three measurement levels (cranial, umbilical and caudal), and for the 0-10 N and 10-20 N tension ranges, the medialization gained for every Newton of applied tension was calculated (mm/N), as well as the inverse (N/mm). No statistical tests were performed; the presented results are descriptive in nature. RESULTS Three fresh-frozen post-mortem human specimens were included. All were males over the age of 70. In total, 36 separate measurements were performed: 12 in each specimen. Medialization The median amount of medialization across the three included specimens was determined at the cranial, umbilical and caudal levels. Figure 6 displays these results. In the first 10 N of applied tension, a median increase in medialization of 15 mm was found cranially. Umbilically, this was also 15 mm. At the caudal level, 12.5 mm was observed. When a total of 20 N was applied, an additional median medialization of 10 mm was observed at the cranial level. At the umbilicus, this was 22.5 mm. Caudally, the additional median medialization was 15 mm. Total medialization with 20 N of applied tension was 25 mm cranially, 37.5 mm at the umbilicus and 27.5 mm at the caudal level. Additionally, the relationship between tension and medialization was calculated and is displayed in Table 1. The lowest amount of medialization per N was seen at the cranial level, followed by the caudal level, and the highest amount was seen umbilically. DISCUSSION MINT was developed in order to enable hernia surgeons performing minimally invasive surgery to intra-operatively measure whether they can achieve adequate medialization of the rectus sheath, and to record what amount of tension is required to do so. In these first preclinical experiments using MINT after posterior CST, median medialization with 10 N of tension was between 12.5 and 15 mm. At 20 N of tension, a medialization between 25 and 37.5 mm was measured. Ultimately, MINT could aid in decision making on whether additional CST or intra-operative fascial traction will be necessary, or whether retro-rectus dissection will suffice. One of the key aspects of MINT is the non-invasive way of measuring. No additional devices are introduced into the patient's abdomen. Surgeons can grasp the rectus sheath tissue using the instruments that they are already familiar with and would typically use. The MINT device is merely clamped to the handle of the laparoscopic instrument of choice in order to take measurements. Measurements can be made rapidly. While the time to measure was not included as a parameter in the present study, anecdotally, measurements took approximately 30 seconds to 2 minutes, mainly depending on finding the right position to grasp the tissue.
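To make the mm/N (and inverse N/mm) calculation explicit, the short sketch below reproduces it from the median values reported above. The per-specimen raw data are not available here, so only the published medians are used, and the output is purely illustrative of the descriptive calculation described in the Statistical Analysis subsection.

```python
# Sketch of the descriptive rate calculation described above, using the median
# medialization values reported in the Results (mm gained per tension segment).

medial_mm = {
    # level: (median gain over 0-10 N, median additional gain over 10-20 N)
    "cranial":   (15.0, 10.0),
    "umbilical": (15.0, 22.5),
    "caudal":    (12.5, 15.0),
}

for level, (gain_0_10, gain_10_20) in medial_mm.items():
    for label, gain in (("0-10 N", gain_0_10), ("10-20 N", gain_10_20)):
        mm_per_n = gain / 10.0  # each segment spans 10 N of applied tension
        n_per_mm = 10.0 / gain  # inverse: tension needed per mm of medialization
        print(f"{level:9s} {label}: {mm_per_n:.2f} mm/N ({n_per_mm:.2f} N/mm)")

# e.g. umbilical 10-20 N -> 2.25 mm/N, matching the highest rate reported above.
```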
In anatomical experiments, retro-rectus dissection and posterior CST have been shown to yield 5.8-9.9 cm of absolute anterior rectus sheath medialization (16,17). The medialization measured during the present experiments is much lower, at approximately 2 cm; however, it cannot be directly compared, because it does not represent absolute medialization. There is a lack of a reference point in these measurements: before any resistance from pulling the tissue is felt, that is, before the tissue is pulled taut, significant medialization has already been achieved, but this medialization was not measured. A key difference between fresh-frozen and in-vivo experiments is muscle tension, which is of course completely absent in a cadaver. Findings in our study can therefore not simply be extrapolated to patients. However, during AWR, muscle relaxants are administered perioperatively in order to facilitate closure. In that respect, a fresh-frozen model represents the 'ideal' situation, which is one without any muscle tension. In living humans, muscle tension will be higher than the tensions found in the present experiments, due to the remaining muscle activity. Not every aspect of the MINT device was tested in this study. For instance, device handling has not been tested yet; however, this is an aspect of MINT that is under active development. Eventually, this will be structurally evaluated by surgeons. Further improvements to the MINT device are ongoing. One example of the improvements being worked on is reducing the number of components that could potentially loosen or even drop, whilst retaining the ability to sterilize the device. Clamping of the device to the variety of laparoscopic instruments that exist needs to be completely reliable and secure. The first goal of AWR is primary closure of the anterior fascia. Presently, there is no way to preoperatively determine whether, for example, posterior CST or intraoperative fascial traction will be necessary in order to achieve adequate fascial medialization and closure. MINT could be used to intraoperatively measure fascial tension, and these data should then be saved, along with patient characteristics, medical history, medication use and diagnostic results (e.g., CT scans), in a database (Figure 7). Such a database can subsequently be used to develop a prediction model, which might be of additive value in deciding on the most appropriate surgical technique in the preoperative setting. Outcomes such as postoperative complications (surgical site infection, seroma, hematoma), pain, length of stay and recurrence could be evaluated in future comparative studies with and without the additional preoperative information and the intraoperative use of the MINT device. Because MINT tensiometry is non-invasive and takes little effort, getting surgeons to participate in clinical evaluation studies is expected to be relatively easy. Already, several Dutch hernia surgeons have expressed interest in participating.
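As a sketch of what one entry in the proposed tension database could look like, the snippet below pairs intraoperative MINT measurements with the kind of patient-level fields mentioned above. All field names and example values are hypothetical assumptions; the authors do not define a schema, so this is only an illustration of the idea, not their design.

```python
# Hypothetical sketch of a single record in the proposed MINT database.
# All field names and values are illustrative assumptions, not an authors' schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MintMeasurement:
    level: str               # "cranial", "umbilical" or "caudal"
    side: str                # "left" or "right"
    tension_n: float         # applied traction read from the MINT scale [N]
    medialization_mm: float  # displacement read against the trocar [mm]

@dataclass
class MintCaseRecord:
    patient_id: str
    hernia_width_cm: float
    recurrent_hernia: bool
    previous_mesh: bool
    surgical_technique: str  # e.g. "retro-rectus dissection", "posterior CST-TAR"
    measurements: List[MintMeasurement] = field(default_factory=list)

record = MintCaseRecord(
    patient_id="anon-001",
    hernia_width_cm=8.5,
    recurrent_hernia=False,
    previous_mesh=False,
    surgical_technique="posterior CST",
    measurements=[MintMeasurement("umbilical", "left", 20.0, 37.5)],
)
print(record.measurements[0].medialization_mm)
```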
CONCLUSION MINT is a novel non-invasive measurement technique, which allows surgeons to intraoperatively measure fascial tension when performing abdominal wall reconstruction. The MINT device is undergoing active development. The next clinical step is to start applying MINT in studies across centres in the Netherlands, creating the foundation for a database that could allow preoperative predictions with regard to which surgical technique to use and what outcomes to expect for a specific patient. Ultimately, this leads to patient-tailored operation planning and a reduced hernia recurrence rate. FIGURE 2 | The clamping/adjustment bolt-and-nut mechanism ((A): assembly of the device; (B): detailed view of the block that creates the contact surface between the clamped instrument and the device). FIGURE 4 | Fully assembled MINT device with laparoscopic instrument clamped in. FIGURE 6 | Median medialization at three levels of the abdominal wall (n = 12 measurements per level). Source data can be found in Supplementary Table S1.
SLITRK1-mediated noradrenergic projection suppression in the neonatal prefrontal cortex SLITRK1 is an obsessive-compulsive disorder spectrum-disorders-associated gene that encodes a neuronal transmembrane protein. Here we show that SLITRK1 suppresses noradrenergic projections in the neonatal prefrontal cortex, and SLITRK1 functions are impaired by SLITRK1 mutations in patients with schizophrenia (S330A, a revertant of Homo sapiens-specific residue) and bipolar disorder (A444S). Slitrk1-KO newborns exhibit abnormal vocalizations, and their prefrontal cortices show excessive noradrenergic neurites and reduced Semaphorin3A expression, which suppresses noradrenergic neurite outgrowth in vitro. Slitrk1 can bind Dynamin1 and L1 family proteins (Neurofascin and L1CAM), as well as suppress Semaphorin3A-induced endocytosis. Neurofascin-binding kinetics is altered in S330A and A444S mutations. Consistent with the increased obsessive-compulsive disorder prevalence in males in childhood, the prefrontal cortex of male Slitrk1-KO newborns show increased noradrenaline levels, and serotonergic varicosity size. This study further elucidates the role of noradrenaline in controlling the development of the obsessive-compulsive disorder-related neural circuit. in a Sema3a-induced endocytosis NRP/L1CAM complex. Based on this model, S330A would be predicted to be less efficient than WT Slitrk1 in suppressing endocytosis, but this is not observed. This apparent discrepancy should be discussed on page 19. Reviewer #3 (Remarks to the Author): This manuscript is an extension of authors' previous study on SLITRK1 knockout (KO) mice, which described the altered anxiety-like behavior and abnormalities in noradrenergic functions. They employed both gain-of-function and loss-of-function studies, revealing that Slitrk1 suppresses the noradrenergic projection connectivities that might be involved in a subset of behavioral deficits. Particularly, they asked whether two SLITRK1 missense mutations linked to schizophrenia/bipolar disorder (S330A and A444S) could disrupt any known SLITRK1 functions (e.g. neurite outgrowth). Moreover, they found that L1-CAM binds to SLITRK1 in a nanomolar affinity. I am impressed by large amount of data from various approaches; but the following points should be completely addressed for consideration in Communication Biology. Major points: 1. Authors need to present summarized/representative results throughout the manuscript by at least three biological replicates. I noted that there are some experiments where the number of samples is below 3. These should be completely addressed. 2. Some of image qualities are not excellent. For example, in Figure 6D, I don't see that authors could conclude any clear conclusions. More importantly, the control experiments appeared not to work -SLITRK1 was reported to induce VGLUT1 clustering in previous studies. 3. Many representative images do not match with the respective quantification results (e.g. Figure 2A and 2B; Figure 3A and Figure 3B; Figure 6A and 6B). 4. I don't understand why the authors treated the SLITRK1 ECD recombinant proteins into cultured LC neurons in Figure 4. I suppose that authors should employ transfection experiments because SLITRK1 is a transmembrane protein. In addition, the representative images do not clearly reflect the message in the Figure 4B. 5. Figure 8: the authors should present the data that show the direct binding of SLITRK1 with Neurofascin and L1CAM, not just by showing Kd values (panel B). 
Authors did not present any compelling results to demonstrate the interactions of SLITRK1 with these proteins. More rigorous biochemical and biophysical analyses are critical. 6. I am not persuaded by authors' representative images/data that S330A mutations has phenotypes in many cases. 7. Sexual dimorphism is an important/exciting issue in this field. However, authors did not integrate the major findings related to this issue in the current manuscript. Could authors show for example that differences in the norepinephrine levels between sexes are causally involved in behavioral deficits and neurite outgrowth/synaptic impairments? 8. Data presentations: this is serious -bar graphs should be presented in a consistent manner throughout the paper. Minor points: 1. Discussion section is too lengthy. Authors need to remove/trim considerable parts and move them to the Introduction section. 2. Authors need to discuss in detail why varicosity size in the NET fibers are opposite in SLITRK1 KO mice during development (i.e., P7 vs. 6M). 3. Page 8, line 4: "but females" should be changed to "but not females" 4. Figure S4: These data should be quantified. 5. Where are the representative images for Figure 3I and 3J in Figure 3H? 6. Where are the representative images for BSA groups in Figure 8F? 7. Statistics: # should be added in the legend of Figure 2B. In addition, † † need to removed. In Figure 2E, † need to corrected. Importantly, authors should be very careful to indicate all the statistics in data and legend (e.g. statistics missed in Figure 2E for MHPG and MHPG/NA groups). 8. Figure 2B: further experiments are necessary for female mice. Reviewer #4 (Remarks to the Author): Summary: This manuscript reports an array of neurodevelopmental differences in mice lacking SLITRK1, which is a transmembrane protein associated with OCD-related disorders and known to function in neurite outgrowth and synapse formation. The paper builds on the authors' previously published work, which implicated noradrenergic mechanisms in the anxiety-like behaviors of slitrk1-deficient mice. Here, the authors report structural, behavioral, and neurochemical differences in slitrk1-deficient mice, with an emphasis on the neonatal prefrontal cortex. The authors also report two novel missense SLITRK1 mutations associated with schizophrenia and bipolar disorder and link these mutations to structural and functional deficits in the noradrenergic system. The paper is ambitious in scope, technically rigorous, and provides further evidence that cellular and molecular differences in noradrenergic signaling may contribute to numerous neurodevelopmental disorders, including OCD, schizophrenia, and bipolar disorder. Major points: This work is of interest to at least three specialized audiences -those who examine SLITRK protein functions in neurodevelopment, those interested in the development of the prefrontal cortex, and those interested in the cellular and molecular changes that contribute to OCD and other pervasive but poorly understood neurodevelopmental disorders. With 9 Figures and 13 Supplementary Figures, the paper is ambitious in a way that ultimately compromises its cohesion and readability. The authors characterize a knock-out mouse at the behavioral, anatomical, and molecular levels, provide evidence for changes to multiple neurotransmitter pathways, perform a genome wide association study, report novel SLITRK1interacting proteins, and examine the complex molecular mechanisms mediated by these protein interactions. 
The reason why some figures are relegated to Supplementary files (while others are not) is somewhat difficult to discern. It is an impressive and technically rigorous body of data, but it may benefit from being divided into two smaller, more cohesive manuscripts. Two general issues regarding the clarity of the manuscript should be addressed. First, the final paragraph of the introduction should be revised and expanded to articulate the overall rationale and major findings of the paper with greater specificity. Second, the Results section could benefit from clear statements of rationale as each experiment is introduced and described. This was done nicely in the Discussion, but bears repeating in the Results as well. In Figures 2 and 3 (and others), mean values of WT controls are set to 1, but no mention of the absolute values can be found in the figure legends or results text. A supplementary data file containing absolute measurements would be a positive addition to the supplementary materials. Figure 5, which describes the genome wide association study that identified new SLITRK1 missense mutations, may be better suited to the Supplementary data. The claim that Slitrk1-knockout mice exhibit reduced GABAergic synapse density in the PFC is not adequately supported by the evidence presented in Figure S8. One or more additional inhibitory synapse markers would strengthen this claim considerably. Minor Points: In Figures 2-4, the WK labels below the X axes are redundant with the shading of the bars and the associated key. Removing the WK labels would improve the clarity of these figures. Statistical analyses are appropriate throughout, and the level of experimental detail provided in the Methods section is adequate for reproducibility. Point-by-point responses to reviewers' comments: Reviewer #2: Hatayama et al investigate the role of the disease-associated gene Slitrk1 in brain development. The authors perform an in-depth analysis of behavioral and anatomical parameters in Slitrk1 KO mice. Differences in ultrasonic vocalizations and noradrenergic fiber density in prefrontal cortex (PFC) are observed. Analysis of the molecular mechanisms involved points to interactions of Slitrk1 with L1 family proteins and endocytosis. Overall this is an interesting study that reveals the consequences of impaired Slitrk1 function on the development of noradrenergic circuitry. Specific comments 1. The manuscript contains a very large amount of data and the results are not always easy to follow. The paper would benefit from reorganization to facilitate reading, for example by reducing supplemental data and grouping data in main figures in line with the text (as an example, now the text moves from Fig. 2 to Fig. 3, then back again to Fig. 2 to discuss metabolite levels and back again to Fig. 3 to discuss VGAT/VGLUT1 staining -this is confusing). The same applies to the order in which figure panels are discussed (for example, Fig. 4D is discussed before 4C in the text). Some supplemental data is not commented on (for example, Fig. S2B, vertical screen test significant difference only at P15). Response: After reading the three reviewers' comments, we realized that the previous manuscript was not well organized and forced the readers to seek the data many times. According to this comment, we have reorganized the results so that the readers can follow the study without unnecessarily going back-and-forth. 
The Supplementary Figures were limited to only those figures which the readers could follow the study without seeing, and the number was reduced from 12 to 9. References to supplemental data sometimes appear incomplete (for example, page 7, reference to Supp. FigS4 for increased density of NET-positive fibers likely should include Fig. S5 as well). Finally, the text should be proofread to check for omissions, typos, and to make sure sentences flow well (as an example, 'but females' page 8 probably means 'but less in females' or similar). Response: We have corrected and checked the reference to the supplemental data. Several instances, including the one pointed out, were corrected to improve the flow. The revised text was finally corrected by a professional scientific Editor once again. Fig. 2A and S5A are the same for P7 WT and KO panels. From the text it appears that there should be data on P3 but this is missing. Images in Response: Figure Response: Statistical tests were performed by two-tailed unpaired t-test and two-way ANOVA (genotype and sex as main factors). A data point indicates a mean value for each mouse. The mean values were derived from three qPCR experiments that were performed in duplicate. In this revision, we carried out multiple test correction as per the Benjamini−Hochberg procedure and confirmed that the changes in the three genes were significant after the multiple test correction. 4. Page 10, 'Compared with control treatment, Sema3A treatment led to decreased and increased complexity of proximal and distal neurites, respectively (Fig. 4C); moreover, it decreased the neurite length (Fig. 4D). However, there was no significant between genotype difference in the complexity ( Supplementary Fig. S10-Sema3a).' It is unclear to me how the data in 4C, D are related to S10 and what is meant by this sentence. Response: We apologize for the confusion caused by our unclear description. "Control treatment" indicated to the BSA-treated one, referring to the BSA-Sema3A difference in the KO samples. To avoid confusion, we have moved the graphs for between-genotype difference (previous Figure S10) to the bottom of new Figure 6c and referred to the individual graph in the main text. 5. Fig. 4C, D: Slitrk1-ECD treatment has no effect on intersections and branch length in Slitrk1 KO neurons -please discuss. Response: We agree that this is an important finding. In the last paragraph of Molecular mechanism underlying Slitrk1-mediated suppression of NA projections chapter, we have discussed this as follows: "The fact that the neurite suppressive effect of SLITRK1 ECD requires Slitrk1 in LC neuron suggests that Slitrk1 and Slitrk1 ECD may compete for the L1 family proteins. However, we cannot exclude other possibilities, such as homophilic interaction via Slitrk1 ECDs or it's binding to unidentified targets." 6. Fig. 6E: the effect of A444S is not significant, yet it is concluded that both S330A and A444S impair inhibitory synapse-inducing ability of Slitrk1, this is also repeated in Discussion. This should be corrected. Response: This comment may have been raised due to an unclear indication of statistical significance in Figure 6E graph. In the bar graphs of the revised manuscript, we have consistently used rotated-square-brackets to indicate the two groups for the comparison. The corresponding The differences between WT, S330A and A444S are very small, how was significance determined? 
From the legend it appears that a t-test was used, which is incorrect for multiple conditions here, further it is not clear if data points represent individual cells or independent experiments, in case of the former, corrections should be made to account for independent experiments. In general, statistical analysis should be carefully checked throughout the paper. Response: In the revised manuscript, we carried out Dunnett's or Steel's test for the mutant analysis (one-to-many comparison). This has also been written in Experimental design and statistical analysis subsection of the Methods section and in the Figure legend. 7. Fig. 7: this experiment lacks a proper control such as GFP or a non-relevant membrane protein. Currently data is compared to a frame-shifted Slitrk1 mutant that is predicted to generate a cterminally truncated ECD fragment. A proper control to account for the experimental procedure Fig. 8F, H: the authors show Slitrk1 binding affinity for L1 family member Neurofascin is reduced for Slitrk1 S330A (Fig. 8B). According to the model the authors discuss, Slitrk1 might displace L1CAM in a Sema3a-induced endocytosis NRP/L1CAM complex. Based on this model, S330A would be predicted to be less efficient than WT Slitrk1 in suppressing endocytosis, but this is not observed. This apparent discrepancy should be discussed on page 19. Response: Because the binding to L1 was comparable between SLITRK1 WT and S330A as shown above, the endocytosis-suppressing activities of the two proteins may be equivalent. Reviewer #3: This manuscript is an extension of authors' previous study on SLITRK1 knockout (KO) mice, which described the altered anxiety-like behavior and abnormalities in noradrenergic functions. They employed both gain-of-function and loss-of-function studies, revealing that Slitrk1 suppresses the noradrenergic projection connectivities that might be involved in a subset of behavioral deficits. Particularly, they asked whether two SLITRK1 missense mutations linked to schizophrenia/bipolar disorder (S330A and A444S) could disrupt any known SLITRK1 functions (e.g. neurite outgrowth). Moreover, they found that L1-CAM binds to SLITRK1 in a nanomolar affinity. I am impressed by large amount of data from various approaches; but the following points should be completely addressed for consideration in Communication Biology. Major points: 1. Authors need to present summarized/representative results throughout the manuscript by at least three biological replicates. I noted that there are some experiments where the number of samples is below 3. These should be completely addressed. Response: We noticed that the previous Supplementary Figure 8 contained the results of n = 2 and n =3. The figure and related description were removed from this study. In relation to this revision, we removed the quantitative analyses of synapses in Slitrk1 KO mice (previous Figure 3H-J, S9D, E). This was done because we had to simplify the entire study in response to the comments from the other reviewers. The synaptic phenotype will be described elsewhere. 2. Some of image qualities are not excellent. For example, in Figure 6D, I don't see that authors could conclude any clear conclusions. More importantly, the control experiments appeared not to work -SLITRK1 was reported to induce VGLUT1 clustering in previous studies. Response: We replaced Figure 6D (new Supplementary Figure 7) images. In our experiments, SLITRK2 induced both inhibitory and excitatory synapses (new Supplementary Figure 7). 
In this regard, a control may be working. Concerning the discrepancy with previous reports, several experimental parameters are different among the studies (see the Table below). Most critically, previous studies used artificial signal peptide sequence and N-terminal epitope tag, and the timing of the co-culture was earlier than ours. The difference between SLITRK1 and SLITRK2 in our condition may reflect differences in some inherent properties of these proteins. We feel that it would be worth reporting the result in this condition. 3. Many representative images do not match with the respective quantification results (e.g. Figure 2A and 2B; Figure 3A and Figure 3B; Figure 6A and 6B). Response: We agree with this comment. We have replaced the images in Figure 2A "We also investigated the effect of SLITRK1 ECD because SLITRK1 ECD is known to be cleaved by α/γ secretase at the transmembrane region 17 ". To consider the role of Slitrk1 in LC neurons (cell-autonomous function), we compared the LC neurons from Slitrk1 WT and KO mice. In the revised figure (new Figure 6b, c), we replaced the representative images in Figure 4B and merged graphs in Figure 4C and Figure S10. We hope that these corrections can improve the readability and visibility. Response: Accordingly, we performed pull-down assay using purified SLITRK1 ECD, Neurofascin ECD, and L1CAM ECD proteins. The results are indicated in new Supplementary Figure 8c. The results for the pulldown assay using brain lysate and SLITRK1 ECD are also indicated for L1CAM, Neurofascin, and NCAM (new Supplementary Figure 8b). In addition, we measured the Kd values the SLITRK1 mutants (new Figure 10b), and the original result for mass spectrometry was indicated in new Supplementary Table 2 upon suggestion from another reviewer. 6. I am not persuaded by authors' representative images/data that S330A mutations has phenotypes in many cases. Response: Representative images for Figure 6D (new Supplementary Figure 7) were replaced with better images. The figures indicating S330A's effect on "later" neurite development (previous Figure S10I-K) were incorporated into new Figure 8 (e-g). These figures most clearly indicate the S330A phenotypes. We agree with the reviewer in that S330A phenotypes are not so drastic. However, this may be a very important mutation with not only pathological, but also evolutionary significance (please see the last paragraph of the Discussion section). We therefore conducted the experiments carefully. In this regard, we added the following description in the Experimental design and statistical analysis subsection of the Methods section: "The experiments for assessing the new SLITRK1 mutations were carried out in a plasmid identity-blinded manner. The expression constructs for SLITRK1 WT and its mutants were verified by sequencing after each plasmid preparation." 7. Sexual dimorphism is an important/exciting issue in this field. However, authors did not integrate the major findings related to this issue in the current manuscript. Could authors show for example that differences in the norepinephrine levels between sexes are causally involved in behavioral deficits and neurite outgrowth/synaptic impairments? Response: We agree with the reviewer. We observed several sexual dimorphisms in Slitrk1 KO phenotypes (body weight, USV, NA contents in PFC, and SERT varicosity size in PFC) and discussed the possible basis of this observation (e.g. VMAT2, COMT) in the first subsection of the Discussion. 
The possible causal relationships were described for the synaptic, behavioral, and developmental phenotypes in the same subsection. In addition, we referred to the male-predominant occurrence of early-onset OCD in the "Excessive NA projections as a disease core mechanism underlying OCRD" subsection. 8. Data presentations: this is serious -bar graphs should be presented in a consistent manner throughout the paper. Response: Accordingly, we changed the graph design for consistentcy. All data for WT mice are indicated by open circles or bars and those for KO mice are indicated by red circles and bars. Data for SLITRK1 proteins and its mutants are indicated by different colors (WT, blue; S330A, green; A444S, yellow). Statistical significance was indicated in a consistent manner. Minor points: 1. Discussion section is too lengthy. Authors need to remove/trim considerable parts and move them to the Introduction section. Response: We moved the background information on the neural circuit basis of OCD and molecular determinants of the monoaminergic fibers from the Discussion to Introduction. This revision may be beneficial for the readers to read the Results more easily. 2. Authors need to discuss in detail why varicosity size in the NET fibers are opposite in SLITRK1 KO mice during development (i.e., P7 vs. 6M). Response: We added the following discussion in the first chapter of the Discussion section. "Contrarily, the NET + varicosity size in the PFC of male Slitrk1 KO was larger at P7 but smaller than those of WT at 5W or 6M (Figure 2). While the increase at P7 can be interpreted as a feedback from excessive NA via α2 autoreceptor, the decrease at later stages suggests the presence of some adaptive mechanisms for the excessive NA. As a candidate mediator of such adaptive response, VMAT2 should be noted because VMAT2 is a critical regulator of presynaptic NA storage in brain, and VMAT2 expression is dynamically regulated both during development and upon acute and chronic drug exposure (Eiden and Weihe, 2011)." 3. Page 8, line 4: "but females" should be changed to "but not females" Response: We have corrected it. Response: We have quantified the NET + fiber area. The results are indicated at the bottom of the images. 5. Where are the representative images for Figure 3I and 3J in Figure 3H? Response: We have removed the corresponding figures in this revision. Please see the response to Major point 1. 6. Where are the representative images for BSA groups in Figure 8F? Response: The representative images for the BSA control experiments were added to Figure 8F (new Figure 10f). 7. Statistics: # should be added in the legend of Figure 2B. In addition, † † need to removed. In Figure 2E, † need to corrected. Importantly, authors should be very careful to indicate all the statistics in data and legend (e.g. statistics missed in Figure 2E for MHPG and MHPG/NA groups). Response: Symbols for the statistical test results were changed to be consistent throughout the graphs. Complete information for the statistical tests was included in the figure legends. 8. Figure 2B: further experiments are necessary for female mice. Response: An additional experiment was done for female mice. Total n became eight for each genotype. Reviewer #4 (Remarks to the Author): Summary: This manuscript reports an array of neurodevelopmental differences in mice lacking SLITRK1, which is a transmembrane protein associated with OCD-related disorders and known to function in neurite outgrowth and synapse formation. 
The paper builds on the authors' previously published work, which implicated noradrenergic mechanisms in the anxiety-like behaviors of slitrk1-deficient mice. Here, the authors report structural, behavioral, and neurochemical differences in slitrk1-deficient mice, with an emphasis on the neonatal prefrontal cortex. The authors also report two novel missense SLITRK1 mutations associated with schizophrenia and bipolar disorder and link these mutations to structural and functional deficits in the noradrenergic system. The paper is ambitious in scope, technically rigorous, and provides further evidence that cellular and molecular differences in noradrenergic signaling may contribute to numerous neurodevelopmental disorders, including OCD, schizophrenia, and bipolar disorder. Major points: This work is of interest to at least three specialized audiences -those who examine SLITRK protein functions in neurodevelopment, those interested in the development of the prefrontal cortex, and those interested in the cellular and molecular changes that contribute to OCD and other pervasive but poorly understood neurodevelopmental disorders. Two general issues regarding the clarity of the manuscript should be addressed. First, the final paragraph of the introduction should be revised and expanded to articulate the overall rationale and major findings of the paper with greater specificity. Second, the Results section could benefit from clear statements of rationale as each experiment is introduced and described. This was done nicely in the Discussion, but bears repeating in the Results as well. Response: We revised the final paragraph as follows: "In this study, we firstly examined the neonatal phenotypes of Slitrk1-deficient mice. Because abnormalities in NA fiber development were observed, we investigated the molecular mechanism underlying Slitrk1-mediated control of the neurite development. Further, we conducted a re-sequencing analysis of patients with schizophrenia (SCZ) and bipolar disorder (BPD) to identify functionally defective and significantly enriched missense mutations. The analysis identified a SLITRK1 mutation that affects the NA fiber development-controlling and L1 family protein-binding abilities of SLITRK1. Finally, we sought to discuss the pathogenesis of OCRD, focusing on the role of neonatal NA-mediated neural circuit modification." We have added text that clarifies the logical flow in several subsections of the Result section. In Figures 2 and 3 The claim that Slitrk1-knockout mice exhibit reduced GABAergic synapse density in the PFC is not adequately supported by the evidence presented in Figure S8. One or more additional inhibitory synapse markers would strengthen this claim considerably. Response: We removed all results concerning the quantitative analysis of the synapse markers in Slitrk1 KO mice. Please see our first response. Minor Points: In Figures 2-4, the WK labels below the X axes are redundant with the shading of the bars and the associated key. Removing the WK labels would improve the clarity of these figures. Response: We removed the WK label, and the results in all the graphs are indicated in a consistent color code (WT = white, KO = red).
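Because the responses above lean on the Benjamini–Hochberg procedure for multiple-test correction, a short sketch of how BH-adjusted p-values are computed is given here. The p-values in the example are invented for illustration and are not taken from the study; only the procedure itself is shown.

```python
# Sketch of the Benjamini-Hochberg (BH) adjustment mentioned in the responses.
# Input p-values below are made up for illustration only.

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (q-values) in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices sorted by ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of q-values.
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        q = pvals[idx] * m / rank
        running_min = min(running_min, q)
        adjusted[idx] = min(running_min, 1.0)
    return adjusted

pvals = [0.001, 0.02, 0.03, 0.20, 0.50]                  # illustrative values
print([round(q, 3) for q in benjamini_hochberg(pvals)])  # -> [0.005, 0.05, 0.05, 0.25, 0.5]
```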
Free Radicals and Obesity-Related Chronic Inflammation Contrasted by Antioxidants: A New Perspective in Coronary Artery Disease We are surrounded by factors called free radicals (FR), which attach to the molecules our body is made of, first among them the endothelium. Even though FR are to a certain extent a normal factor, nowadays we face an escalating increase in these biologically aggressive molecules. The escalating formation of FR is linked to the increased usage of man-made chemicals for personal care (toothpaste, shampoo, bubble bath, etc.), domestic laundry and dish-washer detergents, and also an ever wider usage of drugs (both prescription and over the counter), especially if they are to be used long-term (years). In addition, tobacco smoking, processed foods, pesticides, various chronic infectious microbes, nutritional deficiencies, lack of sun exposure, and, finally, with a markedly increasing impact, electromagnetic pollution (a terribly destructive factor), can increase the risk of cancer, as well as endothelial dysfunction, owing to the increased production of FR that they cause. All these factors create endothelial damage, but the organism may be able to repair such damage thanks to the intervention of the immune system supported by antioxidants. However, one other factor can perpetuate the state of inflammation, namely obesity and metabolic syndrome with associated hyperinsulinemia. In this review, the role of FR, with a special emphasis on their origin, and of antioxidants, is explored from the perspective of their role in causing atherosclerosis, in particular at the coronary level. Introduction Biogerontologist Denham Harman was the first to discover the concept of free radicals in 1954, while researching an explanation for aging. In his opinion, aging and the degenerative diseases that accompany it were primarily attributable to free radical attacks on cell constituents and connective tissues [1]. Natural antioxidants, on the other hand, are molecules that protect cells against free radicals' damage; they are critical for maintaining optimum health in both animals and humans [2]. The right balance between the two assures a good equilibrium for the body, shielding from damage to the tissues and organs. The aim of this review is to shed light on the complex interplay between free radicals and inflammation in the pathogenesis of coronary artery disease, and to discuss the potential of antioxidants used as a therapeutic strategy to counteract their harmful effects. The Free Radicals Free radicals are byproducts of normal cellular metabolism, and can be described as atoms or molecules with one or more unpaired electrons in their valency shell or outer orbit that can exist independently [3]. These unpaired electrons usually give the free radical a high level of reactivity. Since electrons have a strong tendency to exist in a paired rather than an unpaired state, free radicals steal electrons from other atoms indiscriminately. Figure 1. Mechanisms of free-radical-induced pathology in cardiovascular diseases. Free radicals, induced by triggers such as pollution and other toxic substances, can trigger oxidative stress, leading to oxidized low-density lipoprotein (LDL), endothelial dysfunction, inflammation, and DNA and protein damage. These factors can contribute to the development and progression of atherosclerosis and coronary artery disease.
The practical approach to investigating free radicals is to analyze the by-products of free radical pathology, as it is extremely difficult to measure free radicals directly. Free radicals, in fact, exist for only a minuscule amount of time (fractions of seconds) [27]. Some of these by-products of oxidative stress (biomarkers) can be measured and have been proposed and investigated in relation to cardiovascular diseases, such as isoprostanes, malondialdehyde, oxidized LDL, glutathione, and myeloperoxidase [28,29]. Recent studies reported that some oxidative stress biomarkers, such as oxidized LDL, asymmetric dimethylarginine, total thiol, and malondialdehyde-modified LDL were associated with cardiovascular events in patients with stable coronary artery disease [30]. However, none of them have been established as a reliable and independent predictor of cardiovascular events in clinical practice. This may be due to several limitations, such as lack of standardization, variability, confounding factors, and low specificity. Moreover, oxidative stress biomarkers may reflect different aspects of the oxidative process, such as lipid peroxidation, protein oxidation, and antioxidant capacity, and may not capture the overall oxidative balance in the body [29]. Therefore, to establish the clinical utility of oxidative stress biomarkers, more large-scale clinical trials are needed to determine their value in addition to established models of cardiovascular risk prediction. Furthermore, future research should focus on the development of improved biomarkers that can be incorporated into standardized clinical chemistry tests, facilitating their widespread use in clinical practice. Nevertheless, we believe that combining biomarkers of oxidative stress with non-invasive tests of endothelial coronary function, such as the cold pressure test [31,32], which can now be performed with high feasibility using enhanced transthoracic Doppler echocardiography [33][34][35][36], may provide a powerful insight into coronary atherosclerosis. While biomarkers express the level of oxidative stress occurring, endothelial tests investigate the extent to which this oxidative stress penetrates the coronary endothelium, which can vary from individual to individual [31,37]. This combination can be particularly useful in detecting coronary atherosclerosis in its early stages [38] and assessing the vulnerability of plaques in more advanced cases [39]. Stress The pressures common in industrial societies can trigger stress responses. We all know that stress conditions are widely considered by clinicians as one of the most important causes of the development of various diseases. In fact, due to the increased respiratory oxygen use and metabolic turnover during periods of stress, free oxygen radicals are created in abundance. A high oxygen intake is necessary to fulfill the increased energy demand under stressful situations brought on by unfavorable environmental factors, difficult and heavy work, and psychological trauma [40]. Several studies in animals demonstrated that exposure to external stimuli such as hypoxia, hypothermia, hyperthermia, and immobilization led to an increased formation of reactive oxygen and nitrogen species, causing severe damage to DNA, proteins, and lipids. This could be reduced by the use of antioxidants [41][42][43][44]. Moreover, the hormones that mediate the stress reaction in the body-cortisol and catecholamines-will themselves degenerate into particularly destructive free radicals [45,46]. 
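As a small illustration of the endothelial coronary function testing mentioned earlier in this section, the sketch below computes the relative change in coronary flow velocity between rest and cold pressor stimulation, the kind of ratio such Doppler-based tests report. The velocity values and the notion of a "blunted" response are illustrative assumptions, not the cited protocol or its diagnostic thresholds.

```python
# Illustrative sketch only: percent change in coronary flow velocity during a
# cold pressor test, a Doppler-based index of endothelium-dependent vasomotion.
# Velocity values are assumptions, not data from the cited studies.

def cold_pressor_response(baseline_velocity_cm_s: float,
                          cpt_velocity_cm_s: float) -> float:
    """Return the percent change in coronary flow velocity from rest to cold pressor."""
    if baseline_velocity_cm_s <= 0:
        raise ValueError("baseline velocity must be positive")
    return 100.0 * (cpt_velocity_cm_s - baseline_velocity_cm_s) / baseline_velocity_cm_s

change = cold_pressor_response(baseline_velocity_cm_s=25.0, cpt_velocity_cm_s=32.0)
print(f"{change:.0f}% increase")  # a blunted or negative response suggests dysfunction
```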
It has also been demonstrated that myocardial necrosis produced by catecholamines could be attributed to a free-radicals-mediated mechanism, which may lead to a particular form of catecholamine-induced cardiomyopathy [47]. Researchers now know one way stress may cause disease: a stressful life mass-produces free radicals. Pollutants The pollutants produced by modern technologies often generate free radicals in the body. The food most of us buy contains farming chemicals, including fertilizers and pesticides, that produce free radicals when we ingest them. Nowadays, the use of this kind of substances is visibly growing because they are still the most effective and economical way to enhance the yield, regardless of its quality. It is widely known that exposure to pesticides is associated with the pathogenesis of several diseases including cancer and cardiovascular diseases. The reason for this is also their ability to increase ROS and oxidative stress production, which cannot be properly counteracted by the intracellular antioxidant system [48]. For instance, studies have shown that glyphosate, an herbicide widely used to control the weeds that compete with crops, can determine an overaccumulation of ROS and a reduction of the NADH and NADPH pool of the cells, compounding its cytotoxic and genotoxic effects [49,50]. Different studies, in fact, demonstrated increased serum biomarkers of oxidative stress in the blood of agricultural workers exposed to pesticides for a long time, together with a remarkable decrease in antioxidant enzyme levels [51,52]. Drugs Prescription drugs can often have the same effect as pesticides; their harmful side-effects may be caused by the free radicals they generate. Several studies have suggested the possible role of oxidative stress in clinically relevant drug-related side-effects. For example, doxorubicin, one of the most commonly used anthracyclines, has been implicated in the generation of ROS in cardiomyocytes and lipid peroxidation, clearly justifying the well-known cardiac toxicity typical of this class of antineoplastic drugs [53,54]. However, also, other widely prescribed drugs, such as paracetamol and common nonsteroidal anti-inflammatory drugs, have been associated with an increased production of reactive metabolites, a depletion of antioxidants, and the activation of proapoptotic proteins. These mechanisms may underlie their well-known hepatotoxic and nephrotoxic side effects [55][56][57]. Clearly, therefore, prescribing drugs indiscriminately without considering the role of drug-induced oxidative stress in the overall benefit-risk assessment, can be very dangerous. Processed Foods Processed foods, especially meat, frequently contain high levels of lipid peroxides, which do not only affect the organoleptic and functional characteristics of the food, but also contribute to producing free radicals and toxic substances that can harm consumers' health and contribute to the development of diseases [58]. Tobacco Smoking Cigarette smoking exposes smokers to a complex mixture of carcinogenic and poisonous compounds, as well as stable free radicals and ROS, which can contribute to the significant biological oxidative damage of the cells. There is also a well-documented synergistic effect with environmental respirable particles, such as asbestos fibers, coal dust, and diesel exhaust particles [59]. Tobacco-related free radicals mainly attack the arteries (coronaries in particular) and lungs. 
The immunologic battle at the level of the plaque exacerbates the situation, since more damage is provoked and the plaque core can enlarge, causing a rapid progression of the narrowing and eventually rupture of the plaque, giving rise to acute events (myocardial infarction and sudden death). Thus, smoking maximally provokes endothelial dysfunction [60], reduces coronary flow reserve [61], increases the progression of coronary artery disease [62], causes coronary spasm [63], and increases mortality [64]. Regarding the lungs, it is now known that much of the lung damage associated with smoking is caused by free radicals, which can overpower the lungs' antioxidant defenses and activate a number of proapoptotic and proinflammatory signaling pathways, leading to different lung diseases, such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and cancer [65]. Air Pollution Air pollution has similar effects. In recent years, various researchers have focused on the so-called Environmentally persistent free radicals (EPFRs). EPFRs are a recently discovered class of combustion products that persist in fine particles for a long time. They can generate toxic ROS such as hydroxyl radicals that promote oxidative stress, mediating adverse health effects in respiratory and cardiovascular diseases, including COVID-19 disease [66,67]. Alcohol Alcohol is a potent generator of free radicals (although red wine contains antioxidants that counteract this effect). Through a variety of processes, largely occurring in the liver, alcohol stimulates the production of ROS and interferes with the body's natural defensive systems against these chemicals. For example, alcohol breakdown in the liver results in the synthesis of molecules that are then metabolized in the cell, resulting in the generation of ROS. Alcohol also increases the activity of cytochrome P450 enzymes, which contribute to ROS generation. Furthermore, alcohol can affect the amounts of specific metals in the body, promoting the creation of ROS. Finally, alcohol lowers the amounts of antioxidants that can remove ROS [68,69]. All these effects can have a fundamental role in the development of alcoholic liver disease and its progression to liver fibrosis. Cosmetics and Cleaning Products Cosmetics and household cleaning products have been linked to increased levels of oxidative stress in the body. This is largely due to the fact that many of these products contain man-made chemicals, such as phthalates, parabens, and triclosan, which are known to generate free radicals and induce oxidative stress [70][71][72]. Furthermore, these chemicals are absorbed through the skin and mucous membranes and bypass the liver, which would normally metabolize and detoxify them before their entry into the bloodstream. Additionally, inhaling these substances can cause oxidative stress and inflammation of the airways, effects that are similar to those observed in individuals suffering from COPD and asthma [73,74]. As a result, cumulative exposure to these chemicals over time can lead to an increased risk of chronic diseases, such as cardiovascular and respiratory disorders and cancer [75,76]. Therefore, it is important to limit exposure to these chemicals by choosing natural and organic personal care products and cleaning agents whenever possible, as well as avoiding any unnecessary and excessive use of these products altogether. 
Heavy Metals Heavy metals, including those used in prosthetic manufacturing, have been recognized as another important source of oxidative stress and inflammation in the body. Among the most common sources of heavy metals are amalgam fillings, which contain about 50% mercury and other metals such as silver, copper, and tin. Mercury is a highly toxic metal that can produce free radicals and reduce antioxidants such as glutathione [77]. Several studies have shown that people with amalgam fillings have higher levels of mercury in their hair, blood, urine, and tissues than those without [78][79][80][81][82]. Moreover, mercury can pass from the mother to the fetus or the infant, causing negative effects to cognitive-and neurodevelopment [83,84]. Mercury can indirectly lead to the development of atherosclerosis by raising the levels of total cholesterol, triglycerides, and LDL-C, while lowering the level of HDL-C. Thus, mercury can be regarded as a risk factor in the progression of atherosclerosis [85][86][87][88][89]. Other dental procedures that involve metal implants such as pins and capsules can also release ions and particles that interact with the surrounding tissues, inducing the expression of pro-inflammatory cytokines and chemokines. These molecules can recruit and activate inflammatory cells such as monocytes, macrophages, T cells, and mast cells to the arterial wall, where they can uptake oxidized LDL, transformed into foam cells. Foam cells are the main component of atherosclerotic plaques, which can grow and rupture, causing thrombosis and ischemia [90,91]. Some studies have described titanium, one of the metals most commonly used in dental implants, as a potential atherosclerosis risk factor [92,93]. Chronic Infections Moreover, in root canal infections, bacteria can locally produce ROS or induce oxidative stress in host cells, leading to inflammation and necrosis of the pulp tissue, pain and tooth loss, and also causing systemic inflammation and endothelial dysfunction [94]. Local oxidative stress can also facilitate the translocation of bacteria from the oral cavity to the systemic circulation, which can contribute to the development of atherosclerotic plaques and coronary artery disease [95][96][97][98][99]. Studies suggest that eliminating and reducing the presence of periodontal bacteria in subgingival plaque may be a crucial prophylactic measure for preventing both periodontitis and atherosclerosis [100]. Coronary Stents Coronary stents can also induce oxidative stress and inflammation in the vascular wall, which can lead to adverse outcomes such as restenosis, endothelial dysfunction, and stent thrombosis. A correlation between levels of pro/antioxidant and pro/anti-inflammatory markers and the development of in-stent re-occlusion lesions has been demonstrated. Imbalances in these biomarkers can lead to cross-activation of pro-inflammatory and prooxidative stress pathways, further exacerbating the formation of these complex lesions [101]. Oxidative stress and inflammation can modulate the expression and activity of various molecules involved in the vascular remodeling process, such as nicotinamide adenine dinucleotide phosphate oxidase (NOX), nitric oxide synthase (NOS) and proteins regulating mitochondrial function [102][103][104]. Therefore, targeting oxidative stress and inflammation may be a promising strategy to improve the outcome of coronary stent placement. Electromagnetic Radiation Lastly, free radicals can result from all types of electromagnetic radiation. 
Nowadays, technological devices have become indispensable parts of daily life. However, their harmful effects on the body, particularly the neurological system, are widely documented [105,106]. Recent studies did not only show that electromagnetic field exposure produces oxidative stress in diverse tissues, but also that it causes substantial changes in blood antioxidant marker levels, causing symptoms such as fatigue, headache, and cognitive impairment [107]. Some studies have suggested that exposure to radiofrequency electromagnetic waves (RF-EMF) from cell phones may induce oxidative stress, inflammatory response, and hypothalamic-pituitary-adrenal (HPA) axis deregulation, all of which are risk factors for atherosclerosis [108,109]. However, even if the cellular target of RF-EMF is still controversial, some studies have identified the plasma membrane as a possible site of interaction, where it could increase ROS formation by boosting the activity of plasma membrane NADH oxidase [110]. Moreover, the effects of RF-EMF on the genesis of heart tumors, cardiac arrythmias, and myocardial damage have been widely described [111][112][113]. Exposure to sunlight also generates free radicals that age the skin, causing roughness and wrinkles. If the exposure is prolonged, skin cancer may ensue [114]. All the above-cited damaging factors can, through the formation of free radicals, generate endothelial damage and trigger inflammation in the coronary wall. However, the immune system's normal repair effect (rejuvenation phase) should take place after eliminating the offender and stopping the process. Unfortunately, another factor comes into play at this point, perpetuating the inflammatory state at the coronary level and preventing the inflammation from stopping: metabolic syndrome, another inheritance of our modern society. The Major Role of Metabolic Syndrome in Endothelial Dysfunction Endothelial dysfunction plays a pivotal role in the pathogenesis of cardiovascular diseases and is an early marker of atherosclerosis [115]. Metabolic syndrome is a cluster of metabolic abnormalities including insulin resistance, hyperglycemia, hypertension, central obesity, and dyslipidemia, and is associated with an increased risk of cardiovascular diseases [116]. Insulin resistance and hyperinsulinemia, the hallmark features of metabolic syndrome, are key contributors to endothelial dysfunction by promoting a state of chronic low-grade inflammation [117]. Insulin resistance is caused by obesity, which results from an unbalanced diet rich in industrial refined products with a high glycemic index, combined with a sedentary lifestyle [118]. This promotes the storage of excess calories as fat, along with toxic waste (heavy metals, pesticide, etc.) and inflammatory fats such as arachidonic acid in visceral fat deposits [119]. This storage is not inert, because thanks to lipase activity, the fat moves continuously from the storage back into the blood and to all other organs, first and foremost the endothelium. This process has been referred to as the metastatic spread of toxic fat [120]. When this occurs, full insulin resistance takes place. Insulin resistance and hyperinsulinemia promote the production of pro-inflammatory cytokines such as tumor necrosis factor-alpha (TNF-α) and interleukin-6 (IL-6), leading to increased oxidative stress and impaired nitric oxide (NO) bioavailability, which in turn promotes endothelial dysfunction [121,122]. 
Furthermore, insulin resistance is associated with a decreased expression of endothelial nitric oxide synthase (eNOS) and increased expression of inducible nitric oxide synthase (iNOS), leading to a decreased NO production and increased production of ROS [123]. In addition, insulin resistance and hyperinsulinemia have been shown to promote the activation of the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) pathway, which is a key regulator of inflammation [124][125][126]. This pathway boosts the expression of genes involved in the production of pro-inflammatory cytokines, chemokines, adhesion molecules, and enzymes involved in the synthesis of eicosanoids, such as cyclooxygenase-2 and lipoxygenase, which play a role in the perpetuation of the inflammatory response [127]. Moreover, the insulin-induced strong activation of delta-6 and delta-5 desaturase, which are enzymes involved in the metabolic pathway of linoleic acid conversion to arachidonic acid, leads to an increase in the production of several pro-inflammatory eicosanoids including prostaglandins, thromboxanes, and leukotrienes, which contribute to the development of atherosclerosis [128][129][130][131]. Recommendations: To reduce the levels of linoleic acid in the diet, we recommend opting for fat sources that are low in omega-6 fatty acids, such as olive oil or nuts, while reducing the intake of red meat. Lowering insulin levels requires a reduction in the glycemic load of the diet, which can be achieved by increasing the consumption of fruits and vegetables as the primary sources of carbohydrates and reducing the intake of high-glycemic carbohydrates, such as grains and starches. To successfully tackle metabolic syndrome, it is important to consume grains and starches that come from unprocessed and unrefined wheat free of chemicals such as glyphosate. Additionally, the timing of the grinding process cannot be underestimated: wheat ground more than 4 weeks before is totally oxidized and does not contribute to improving health, but rather worsens metabolic syndrome [132]. The more oxidized the wheat, the higher the glycemic index and the greater the oxidative stress [118]. Therefore, wheat should be used soon after grinding and discarded after 4 weeks [132]. Additionally, it is important to limit the intake of saturated fatty acids, as they can activate the inflammatory pathway by indirectly activating NF-κB [133]. Finally, a constant intake of fish oils, which are rich in omega-3 fatty acids, can directly inhibit the formation of arachidonic acid or dilute its concentration in target cells' membranes, especially in adipose tissue, reducing the overall inflammation in the body [134]. Unfortunately, fish, especially larger species such as tuna, are often contaminated with chemicals, particularly mercury, due to the terrible man-made contamination of the sea and ocean [135]. Therefore, even the consumption of this otherwise healthy food should be limited to only once or twice a week at most. Alternatively, modulation of the inflammatory arachidonic acid can be achieved by consuming nuts and seeds, such as walnuts, which have a high content of linoleic acid, the precursor of all eicosanoids, but also by maintaining low delta-5 desaturase enzyme activity by keeping the insulin levels low in the blood [136]. Free Radicals Defenses Given the many sources of free radicals, it is not surprising that all aerobic forms of life maintain elaborate anti-free-radical defense systems, also known as antioxidant systems. 
Antioxidants are electron donors. They can break the free radical chain reaction by sacrificing their own electrons to feed free radicals, but without turning into free radicals themselves [137]. Some antioxidants are produced by the body, but some are not. In addition, the body's natural antioxidant production can decline with age [138]. The system is highly complex and not well understood. However, the most likely attack points for the excess free radicals are the essential fatty acids in the cell membrane. This implies the need to neutralize these oxidized lipids and so remove the source of oxidation from the body. This requires three distinct types of antioxidants: fat-soluble, surface-active, and water-soluble. Fat-soluble antioxidants such as vitamin E, coenzyme Q10, and beta-carotene neutralize free radicals in the membrane and become partially stabilized free radicals themselves [139]. They then have to be shuttled to the bloodstream in order to be eliminated by the liver or the kidney. To do that, a water-soluble antioxidant such as vitamin C is needed [140]. However, there is another fundamental step in this work of neutralization, which is to carry the radical from inside the membrane to the blood: this is the job carried out by the surface-active antioxidants [141]. These substances are neither proteins nor vitamins but come from the plant kingdom: they are the polyphenols. Without them, the body has no way to get rid of the spent antioxidants and complete the detoxification process. However, such molecules have other very important added properties:
• Repairing damaged molecules-Some unique types of antioxidants can repair damaged molecules by donating a hydrogen atom. This is very important when the molecule is a critical one, such as DNA [142];
• Blocking metal radical production-Some antioxidants have a chelating effect: they can grab toxic metals such as mercury and arsenic, which can cause free radical formation, and "hug" them strongly so as to prevent any chemical reaction from taking place. Water-soluble chelating agents can also escort toxic metals out of the body through the urine [143];
• Stimulating gene expression and endogenous antioxidant production-Some antioxidants can stimulate the body's genes and increase the natural defenses [144];
• Providing a "shield effect"-Antioxidants such as flavonoids can act as a virtual shield by attaching to DNA to protect it from free radical attacks [145];
• Provoking cancer cells to "commit suicide"-Some antioxidants can provide anticancer chemicals that halt cancer growth and force some cancer cells to self-destruct (apoptosis) [146].
Chemical Structure and Biological Functions of Dietary Polyphenols
Several thousand molecules with a polyphenol structure (i.e., several hydroxyl groups on aromatic rings) have been identified in higher plants, and several hundred are found in edible plants. These molecules are secondary metabolites of plants and are generally involved in defending against ultraviolet radiation or aggression by pathogens. These compounds may be classified into different groups as a function of the number of phenol rings that they contain and of the structural elements that bind these rings to one another. Distinctions are thus made between phenolic acids, flavonoids, stilbenes, and lignans (Figure 2) [147].
The flavonoids, which share a common structure consisting of two aromatic rings (A and B) that are bound together by three carbon atoms that form an oxygenated heterocycle (ring C), may themselves be divided into six subclasses as a function of the type of heterocycle involved: flavonols, flavones, isoflavones, flavanones, anthocyanidins, and flavanols (catechins and proanthocyanidins) (Figure 3) [148]. In addition to this diversity, polyphenols may be associated with various carbohydrates and organic acids, and with one another. More than 8000 phenolic structures are currently known, and among them over 4000 flavonoids have been identified [149]. The richest sources are fruit and vegetables. These substances are found in high concentrations in red wine, berries, and dark-colored vegetables; in fact, it is the polyphenols that give the intense color to some fruit and vegetables [150]. Polyphenol content is affected by several variables. Environmental factors have a major effect on polyphenol content: exposure to sunlight, for example. In addition, although very few studies have directly addressed this issue, the polyphenol content of vegetables produced by organic or sustainable agriculture is certainly higher than that of vegetables grown without stress, such as those grown in conventional or hydroponic conditions. This was shown recently in strawberries, blackberries, and corn [151]. Storage may also affect the content of polyphenols, which are easily oxidized [150]. Methods of culinary preparation also have a marked effect on the polyphenol content of foods [152]. On the basis of these considerations, we believe that when it comes to obtaining nutrients, the diet-not supplements-should be the primary source. The consumption of a balanced, unprocessed diet full of high-quality, raw organic foods, especially fruits and vegetables, assures the acquisition of the essential nutrients and antioxidants the body requires to achieve and maintain optimal health. Most edible vegetables, especially the green leafy ones, are loaded with potent phytochemicals, which are plant compounds that act as antioxidants. These phytochemicals can help reduce inflammation and eliminate carcinogens, protecting the body from a variety of health threats. However, to maximize the antioxidants in vegetables, they must be consumed raw, in a state closest to when they were harvested. Indeed, different types of heat treatment, such as boiling, baking, frying, or microwaving, can reduce the total antioxidant capacity of foods, affecting their ability to prevent lipid peroxidation [153,154].
Juicing is highly recommended so as to absorb all the nutrients in the vegetables-it is one of the healthiest antioxidant drinks that can be added to the diet. The pulp can also be eaten instead of throwing it away. Sprouts are also powerful sources of antioxidants, minerals, vitamins, and enzymes that promote optimal health. In particular, those of broccoli and red cabbage have been found to contain much more vitamin C and other radical scavenging activities than mature vegetables, and they also appear to be more palatable to young people [155]. Recommendations: Overall, incorporating fresh, organic vegetables and sprouts into the diet can help boost antioxidant intake, reduce inflammation, and promote optimal health. Choosing the right preparation methods can also help maximize the overall antioxidant content of food, ensuring that the full range of benefits provided by these healthy foods is obtained [150]. Fruits Fruits are a great source of nutrients, including vitamins, minerals, and fiber. In addition, many fruits contain phytochemicals, which are plant-based compounds that can provide health benefits. Fresh berries such as blueberries, blackberries, cranberries, and raspberries are the best antioxidant fruits, as they contain powerful phytochemicals that directly inhibit the DNA binding of certain carcinogens [156]. For example, anthocyanins, which are a type of flavonoid found in many berries, have been shown to inhibit the growth of cancer cells in laboratory studies [157]. Other phytochemicals found in berries, such as ellagic acid and quercetin, have also been shown to have anticancer effects [158,159]. Berries are also great sources of antioxidants such as vitamin C, carotenes, and carotenoids, as well as nutrients such as zinc, potassium, iron, calcium, and magnesium. Moreover, research has shown that daily consumption of antioxidant-rich fruits, such as berries, may help to improve various markers of cardiovascular health, including blood pressure, cholesterol levels, and endothelial function, leading to a significantly reduced risk of coronary heart disease [160][161][162]. Other antioxidant-rich fruits include citrus fruits such as oranges and lemons, which are high in vitamin C and flavonoids [163]. Apples, especially unpeeled, are also rich in antioxidants such as quercetin, catechin, and chlorogenic acid [164]. Grapes, especially red and purple varieties, are also high in antioxidants, including resveratrol [165]. In addition, fruits rich in potassium, such as bananas, cantaloupe, and avocados, have been associated with a reduced risk of cardiovascular disease [166]. Potassium helps to regulate blood pressure by counteracting the effects of sodium on the body. Recommendations: To increase your intake of antioxidants and phytochemicals, we recommend that you consume fresh berries regularly, as they have been shown to have anticancer and cardiovascular benefits. We also recommend that you include other antioxidantrich fruits in your diet, such as citrus fruits, apples, grapes, and bananas, as they can also provide you with health benefits. However, we advise that you should consume fruits in moderation, as they contain fructose, high amounts of which can be detrimental to health; this is true especially if too much fruit is consumed at dinner, since it strongly stimulates insulin production, especially in overweight people with insulin peripheral resistance or who are diabetic [167]. 
Nuts Pecans, walnuts, and hazelnuts are excellent antioxidant foods that can boost your heart health and overall health [168,169]. Nuts are known to contain high levels of monounsaturated and polyunsaturated fats, fiber, minerals, vitamins, and various bioactive compounds that offer numerous health benefits. Research has shown that incorporating nuts into your diet may help lower the risk of coronary artery disease and hypertension. The possible ways in which nuts can help prevent these conditions include improving the lipid profile of the blood, reducing insulin resistance, and modulating inflammation, oxidative stress, and endothelial function [170,171]. Recommendations: Look for nuts that are organic and raw, not irradiated or pasteurized. Peanuts are usually less than ideal, as they are usually pesticide-laden and can be contaminated with a carcinogenic mold called aflatoxin [172]. Herbs and Spices Aside from being an abundant source of antioxidants, these can have potential anticancer benefits. Herbs and spices differ mainly by source, as herbs typically come from the plant's leaves while spices come from the bark, stem, and seeds. Both have been used for thousands of years to flavor foods and treat illnesses. Some of your best choices are ground cloves, ground cinnamon, oregano, turmeric, ginger, and garlic. For example, curcumin, the active ingredient in turmeric, has been shown to improve endothelial function and decrease inflammation, both of which are important in reducing the risk of cardiovascular disease and coronary artery disease. Studies have shown that curcumin can increase the activity of antioxidant enzymes, while also decreasing the levels of various oxidative stress markers and ROS [173]. In addition to its antioxidant properties, curcumin has been found to have potent anti-inflammatory effects, which are also important for protecting the cardiovascular system. Chronic inflammation is a major risk factor for cardiovascular disease, and curcumin has been shown to inhibit the production of pro-inflammatory cytokines and other mediators of inflammation, such as NF-κB [174]. Studies have also shown that curcumin can improve lipid profiles by reducing levels of total cholesterol, LDL cholesterol, and triglycerides, while increasing levels of HDL cholesterol [175]. In addition, curcumin has been found to have antithrombotic effects, which may help prevent the formation of blood clots and reduce the risk of heart attacks and strokes [176]. Similarly, research has suggested that ginger, another commonly used herb, may have cardioprotective properties, because it may help lower blood pressure, reduce inflammation, and improve lipid metabolism, all of which are important factors in preventing cardiovascular disease [177]. Recommendations: Ideally, you should only opt for fresh herbs and spices, as they are healthier and have higher antioxidant levels than processed, powdered versions. For example, the antioxidant activity of fresh garlic is 1.5 times higher than that of dry garlic powder [178]. Moreover, adding fresh herbs and spices to your meals not only boosts their flavor and nutrition but can also help you reduce your intake of unhealthy additives. Processed and pre-packaged foods often contain high amounts of salt, sugar, and unhealthy fats to enhance their flavor, which can be detrimental to your health in the long run. By using fresh herbs and spices, you can avoid these additives and enjoy the natural flavors of your food. 
Finally, using herbs and spices to flavor your food can help you reduce your sodium intake, which is crucial for individuals with high blood pressure. Organic Green Tea This antioxidant-rich drink contains epigallocatechin-3-gallate (EGCG), a catechin polyphenol and one of the most powerful antioxidants known today. EGCG benefits you by lowering your risk of heart attack and stroke, glaucoma, high cholesterol, and more. Studies have also found that it can improve your exercise performance, increase fat oxidation, and even help prevent obesity due to its regulatory effect on fat metabolism [179][180][181]. However, remember that not all green teas are created equal. Some processed green tea brands can contain very little or no EGCG at all. Some tea bags are also contaminated with fluoride or contain hazardous plastics that can leach into your tea when brewing. Recommendations: To ensure you are drinking high-quality green tea, we advise buying only organic, loose-leaf tea from a reputable source. In addition, tea is not recommended for people that suffer from some forms of cardiac arrhythmias, as its alkaloid content can worsen such a problem, even if low-dose green tea intake has been related to a reduced incidence of both paroxysmal and persistent atrial fibrillation [182]. The importance of oxidative stress on endothelial function as a trigger for vessel damage and cardiovascular events has been established. In experimental animal models of atherosclerosis, hypercholesterolemia, hypertension, and diabetes, associations between oxidative stress and impaired endothelial function have been demonstrated. Among many biological changes that occur in the vessel wall under these conditions, reduced bioavailability of nitric oxide (NO) in a setting of increased superoxide anion levels seems to be a uniform underlying abnormality. Recent studies extended this potential mechanism to patients with coronary artery disease by demonstrating increased superoxide production of human blood vessels in association with endothelial vasomotor dysfunction and with clinical risk factors [183,184]. Furthermore, endothelial dysfunction in patients with coronary artery disease or coronary risk factors could be reversed by the administration of agents capable of scavenging superoxide, such as vitamin C [185,186]. These findings suggest that increased oxidative stress may be an important mechanism for impaired endothelial function in patients with atherosclerosis or cardiovascular risk factors. Nowadays, there is growing interest in the role of dietary polyphenols in the prevention and treatment of heart diseases. Polyphenols have been shown to have a range of beneficial effects on cardiovascular health, including improving endothelial function, reducing inflammation, and lowering blood pressure. Several epidemiological studies have reported that high intake of dietary polyphenols is associated with a reduced risk of heart diseases, leading to a lower risk of coronary heart disease and a lower incidence of heart failure [187,188]. Some studies have reported significant improvements in cardiovascular risk factors, such as blood pressure, cholesterol levels, and endothelial function, with supplementation of polyphenols [189][190][191]. 
One of the most well-known examples of the potential health benefits of polyphenols is the French paradox, whereby moderate red wine consumption in a diet otherwise high in saturated fats is associated with a lower risk of cardiovascular mortality in French people from the Bordeaux region [192]. Polyphenols can provide anti-fibrotic and myocardial protection by inhibiting oxidative stress and molecular pathways involved in heart fibrosis, and they can also promote vasodilation by increasing NO release, which improves vascular function by relaxing smooth muscle, inhibiting platelet aggregation, and increasing prostacyclin production. Moreover, polyphenols have been shown to have anti-diabetic effects by reducing blood glucose and glycated hemoglobin A1c levels, to be able to modulate liver function and lipid metabolism and to be effective in combating obesity. In fact, supplementation of polyphenols from red grapes leads down an anti-inflammatory pathway, causing weight reduction in obese individuals [193]. Overall, the evidence suggests that polyphenols have a beneficial impact on heart diseases ( Figure 4). Antioxidants act through multiple pathways to improve endothelial function, reduce blood pressure, prevent the formation of oxidized low-density lipoprotein (LDL), and provide anti-diabetic and weight-reducing effects. These mechanisms counteract the effects of free radicals, which have been implicated in the pathology of coronary diseases, and reduce the risk of cardiovascular events. Are Nutritional Supplements as Effective and Safe as Natural Food Sources? Nutritional supplements are increasingly popular in the healthcare industry as people seek to augment their diets with vitamins, minerals, and other compounds thought to promote wellness and combat disease. Although these products can be effective in certain situations, they are often misused, overhyped, and even harmful to human health. While it is true that some dietary supplements can help meet nutrient needs, research suggests that they are often less effective than natural foods. Several studies suggest that dietary vitamin C is more protective than supplements and is associated with a reduced incidence of chronic diseases, including stroke, coronary heart disease, and various types of cancer [194]. Similarly, recent studies found that supplementing with vitamin E did not lower the risk of heart disease, whereas consuming vitamin E through foods such as nuts and seeds did [195,196]. Moreover, excessive intake of vitamin supplements can have harmful effects on the body, particularly when taken in high doses over long periods of time. For example, high doses of vitamin A can cause liver damage, while excessive intake of vitamin D can lead to hypercalcemia, a condition characterized by high levels of calcium in the blood. Therefore, even commonly used supplements, such as multivitamins, vitamin E, and folic acid, appear to have limited or no benefit, and some may be harmful [197]. The bioavailability of nutrients from whole foods is generally higher than that of supplements, and they also contain other compounds such as fiber, antioxidants, and phytochemicals that may have synergistic effects on health. There is also a concern that taking supplements may lead to a false sense of security and encourage unhealthy dietary practices. For example, some individuals may take supplements as a means of compensating for a poor diet, rather than making healthy dietary choices. 
Additionally, it is important to note that the dietary supplement industry is largely unregulated, and many products may not contain the ingredients listed on the label, or may be contaminated with harmful substances. For these reasons, it is generally best to obtain nutrients from whole foods rather than supplements. Future Perspectives The complex interplay between oxidative stress, inflammation, and cardiovascular diseases poses several challenges and opportunities for future research. On the one hand, there is a need to better understand the molecular mechanisms underlying the pro-oxidant and anti-oxidant effects of different dietary polyphenols and their metabolites in the context of obesity and coronary artery disease. On the other hand, there is a potential to develop novel therapeutic strategies based on the modulation of oxidative stress and inflammation by natural antioxidants. Firstly, future studies should investigate the impact of lifestyle interventions on oxidative stress and inflammation in coronary artery disease. There is growing evidence that lifestyle interventions, such as dietary modifications, exercise, and stress reduction techniques, can have a significant impact on reducing oxidative stress and inflammation in patients with coronary artery disease [198,199]. Thus, it would be important to examine the effectiveness of such interventions and determine the optimal strategies for their implementation. Secondly, further research is needed to identify novel biomarkers of oxidative stress and inflammation that can independently predict cardiovascular events. Although several biomarkers have been proposed in the literature, their predictive value remains uncertain [200][201][202]. Therefore, it is crucial to identify reliable biomarkers that can help clinicians to assess the risk of coronary artery disease and monitor disease progression. However, we believe that assessing the by-products of free radicals can have a significant clinical impact when combined with endothelial functional tests, as previously discussed in this paper. Thirdly, investigations should focus on the development of new therapeutic strategies that can target oxidative stress and inflammation in coronary artery disease. Although several antioxidants and anti-inflammatory agents have been proposed as potential therapies for coronary artery disease, their efficacy and safety are still uncertain [203,204]. Therefore, it is important to conduct well-designed clinical trials to determine the optimal dose, duration, and safety of such therapies. Finally, a number of questions that deserve further investigation remain open: • What are the optimal doses and combinations of dietary polyphenols to achieve maximal protection against oxidative stress and inflammation in patients with coronary artery disease? • How do genetic and environmental factors influence the bioavailability, metabolism, and activity of dietary polyphenols and their metabolites in different tissues and organs? • How can oxidative stress biomarkers be improved to reliably reflect oxidative status and the risk of cardiovascular events in these patients? • What are the long-term effects and safety of antioxidant supplementation on cardiovascular outcomes and mortality in obese patients with coronary artery disease? 
Answering these questions may provide new insights into the role of oxidative stress in obesity-related cardiovascular diseases and pave the way for the development of personalized and effective interventions based on dietary polyphenols or their derivatives. Conclusions The rampant diffusion of factors generating free radicals is a real threat for the health of the endothelium and the coronaries, and it is evident that the engine that has generated such acceleration of free radicals' formation is man-made. A definite change in direction is rapidly needed. In the meantime, coronary patients should avoid all those factors that can generate free radicals, putting the endothelium under siege; reduce weight to tackle obesity and chronic inflammation that, along with free radicals, create the perfect storm for atherosclerosis generation; and finally optimize the intake of natural and organic food with a high content of balanced and protecting antioxidants.
2023-06-02T15:10:40.356Z
2023-05-31T00:00:00.000
{ "year": 2023, "sha1": "1cd3b806fc0ca14a4f39ab7ea098715cd68f0b7f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/metabo13060712", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "59ceb0eaf20dd031026f6c6903a19bb4177e43d2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
98119878
pes2o/s2orc
v3-fos-license
The effect of solvent on the morphology of an inkjet printed active layer of bulk heterojunction solar cells Bulk heterojunction organic solar cells were fabricated by sandwiching the active layer between indium tin oxide (ITO) and Al electrodes. The active layer used was a blend of poly(3-octylthiophene-2,5-diyl) (P3OT) as the electron donor and (6,6)-phenyl C 71 butyric acid methyl ester (PC 71 BM) as the electron acceptor. The active layer thin films were deposited by an inkjet printing technique. Prior to deposition of the thin films, the active materials were blended in three different solvents. The printed films were annealed at three different temperatures. It was found that the selection of the appropriate solvent and annealing treatment significantly influences the printing process, the morphology of the printed film and subsequently the performance of the solar cell devices. Introduction The organic solar cell is a promising candidate for a low-cost renewable energy source [1][2][3]. In order to obtain mass production, plus large scale and flexible organic photovoltaic cells, there are several deposition methods that are being intensively developed and the inkjet printing technique shows significant potential to achieve this goal [4]. However, film formation by inkjet printing is fundamentally different from the printing process in terms of the jetting of droplets, and the interaction among the droplets plays a key role in obtaining high-quality printed film. In terms of device architecture, the bulk heterojunction type has become the most popular device [5]. In the bulk heterojunction structure, the donor and acceptor materials are simply blended as one solution, deposited as the active layer and then sandwiched between two electrodes. In this work, the active layer used was composed of blended poly(3-octylthiophene-2,5-diyl) (P3OT) and (6,6)-phenyl C 71 butyric acid methyl ester (PC 71 BM). The P3OT : PC 71 BM pairing was selected for the following reasons. P3OT is a long alkyl side chain polymer that is expected to produce high-quality thin film. Hence, it may produce a high photovoltaic performance in solar cell structures. Meanwhile, PC 71 BM is a high optical absorption acceptor material in the visible range. The combination of both materials is expected to give a reasonably good photovoltaic performance. Prior to the deposition process of an organic thin film by the inkjet printing technique, the chemical should be prepared in the form of ink by dissolving it in a particular solvent. The choice of solvent may affect the quality of the film. This paper reports a study on the effect of using three different solvents on the quality of the printed organic solar cells' active layer films. The quality of the films was examined based on the morphology and the performance of fabricated bulk heterojunction organic solar cells. At the end of the deposition process, the solvent must be removed from the printed film. This was done through an annealing treatment that was also performed in a vacuum oven at various temperatures and for different periods. The three different solvents used were a mixture of chloroform/dichlorobenzene, dichlorobenzene and a mixture of dichlorobenzene/mesitylene. It was found that the selection of solvent is a critical part in the formation of printed film and is closely related to the physical properties of the solvents, such as vapor pressure, boiling point and surface tension. 
The surface morphology of the printed films was significantly affected by the formation process and the drying process of the films.
Experimental
The organic materials used in this experiment were poly(3-octylthiophene-2,5-diyl) (P3OT) and (6,6)-phenyl C 71 butyric acid methyl ester (PC 71 BM) (Sigma Aldrich). These chemicals act as electron donor (D) and acceptor (A) materials, respectively. Both materials were dissolved in three different solvents and then mixed in a 1 : 1 weight ratio while being maintained at a concentration of 2 wt% (26 mg ml−1). The solvents were a mixture of chloroform/dichlorobenzene (1 : 1), dichlorobenzene and a mixture of dichlorobenzene/mesitylene (2 : 1), namely solutions A, B and C, respectively. The blended solutions were stirred for 16 h at ambient temperature in air and were then ready to use as ink. Prior to printing, the optical properties of the blended solutions were observed by using a UV-Vis spectrophotometer (Perkin Elmer Lambda 900 UV-VIS/NIR Spectrometer). We also observed the wettability of the solutions on ITO-coated glass substrates by measuring the contact angle with a contact angle goniometer. The blended solutions were deposited on indium tin oxide (ITO)-coated glass substrates (20 Ω sq−1) by using a commercial piezoelectric inkjet printer (Dimatix Material Printer (DMP 2800 system)). The printer was placed in a glove box with an argon atmosphere. The substrates were placed 1 mm below the inkjet printhead and the typical waveform settings used a voltage between 14 and 16 volts with a pulse width of 20.2 µs. The printed films were then left to dry in a Petri dish until the color of the films changed from orange to purple. Post-deposition heat treatment was carried out by annealing in a vacuum oven at various temperatures and for different periods, i.e. 120 °C, 140 °C and 160 °C for 30 and 60 min. The surface morphology of the films was characterized by atomic force microscopy (CP-II Veeco) using tapping mode. The films were then used as the active layer of bulk heterojunction organic solar cells. The active layer was sandwiched between ITO and the aluminum cathode, as shown in figure 1. The aluminum thin film was deposited through a shadow mask by the electron gun evaporation system. The photovoltaic performance of the solar cell devices was studied by current-voltage measurements (Keithley SMU 237) both in the dark and under AM 1.5 illumination (Newpoint Solar Illumination) with an intensity of 100 mW cm−2 in air.
Results and discussions
The UV-Vis optical absorption spectra of blended P3OT : PC 71 BM solutions in three different solvents, i.e. a mixture of dichlorobenzene/chloroform (DCB/CHL), dichlorobenzene and a mixture of dichlorobenzene/mesitylene (DCB/MST), are shown in figure 2. All of the solutions showed very similar absorption spectra in the range of 300-700 nm. The contact angles of all of the solutions on ITO-coated glass substrates were less than 10°, which indicated good wettability. The blended solutions were inkjet printed onto the ITO substrates. Figure 3 shows images of the films examined by digital microscope (National DC3-163) with a width of 4.5 mm. At a glance, these films exhibit different structures for different solvents. For the chloroform/dichlorobenzene solvent (solution A), the solution was ejected with partial clogging at the nozzle orifice. This reduced the volume and velocity of some droplets and also changed the direction of the flying droplets.
As a result, the droplets formed from this non-homogeneous solvent caused many pinholes in the film (figure 3(a)). This problem may be due to the rapid evaporation of chloroform at the orifice of the nozzles, which left an accumulation of dry solute that blocked the droplet jetting process. In the case of the solution using pure dichlorobenzene solvent (solution B), jetting of the solution was controllable but produced an uneven film thickness (figure 3(b)). The drying process of solution B after printing is slow, so that the printed ink may flow to other parts and produce an uneven surface. As shown in figure 3(c), the quality of the printed film was improved by adding mesitylene to dichlorobenzene (solution C). The dependence of the quality of the printed films on the solvents may be related to the physical properties of the solvents, such as vapor pressure, boiling point and surface tension, as summarized in table 1. Chloroform has a high vapor pressure (159 mm Hg), hence the evaporation rate of solution A is high. In contrast, pure dichlorobenzene (DCB), which has a low vapor pressure, made the solution dry slowly, and its high surface tension may make it difficult to form a uniform printed film. By adding mesitylene, which has a higher vapor pressure (1.20 mm Hg) and a lower surface tension than DCB, the evaporation rate and the surface tension of solution C take moderate values, hence improving the uniformity of the printed film thickness. In the following, we only considered the films prepared using the mixture of dichlorobenzene/mesitylene as the solvent. The freshly printed film is wet because it contains an amount of solvent that has to be removed through an annealing treatment. In this work, the annealing process was carried out in a vacuum oven at various temperatures and times. In our first trial, we annealed the film at 120 °C for 120 min and the film remained wet. In the next trial, we increased the temperature up to 140 °C for 60 min. The film was partially wet, especially at the upper part. In our last trial, the temperature was increased to 160 °C for 30 min. It was then found that the film was completely dry. All annealed printed films were then used as the active layer of solar cell devices by sandwiching them between the ITO and aluminum electrodes. The photovoltaic performance was studied through current-voltage (I-V) measurements in the dark and under 100 mW cm−2 AM 1.5 solar illumination. It was found that the device with the annealing treatment at 120 °C did not show the photovoltaic effect. This may be due to the presence of solvent in the active layer, which prevented the generated charge carriers from drifting to the electrodes. The device that was annealed at 160 °C also showed no photovoltaic effect. Since this annealing temperature is relatively close to the melting point of P3OT, i.e. 190 °C, some of the P3OT molecules may have evaporated, leaving the active layer with holes and voids. Only the device that was annealed at 140 °C showed a photovoltaic effect (figure 4). The I-V curves of the device in the dark showed diode behavior. The rectification ratio was calculated by comparing the currents at ±2 V, while the series and shunt resistances were derived from the slope of the dark I-V curve at 2 V and 0 V, respectively. The rectification ratio, series resistance and parallel (shunt) resistance of the device in the dark were 57, 167 MΩ and 2635 MΩ, respectively.
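The diode parameters reported here, and the fill factor and power conversion efficiency discussed in the next paragraph, are standard figures of merit extracted from the measured I-V sweeps. The minimal sketch below is not part of the original work; the function names, sign conventions and the illustrative consistency check are our own assumptions about how such quantities are commonly computed from a voltage/current sweep.

```python
import numpy as np

def dark_diode_metrics(v, i, v_ref=2.0):
    """Rectification ratio at +/-v_ref and resistances from dI/dV slopes.

    Assumes v is monotonically increasing and spans at least [-v_ref, +v_ref].
    """
    i_fwd = np.interp(+v_ref, v, i)              # current at +v_ref
    i_rev = np.interp(-v_ref, v, i)              # current at -v_ref
    rectification = abs(i_fwd / i_rev)
    didv = np.gradient(i, v)                     # numerical slope of the I-V curve
    r_series = 1.0 / np.interp(+v_ref, v, didv)  # slope at +v_ref -> series resistance
    r_shunt = 1.0 / np.interp(0.0, v, didv)      # slope at 0 V -> shunt resistance
    return rectification, r_series, r_shunt

def light_pv_metrics(v, j, p_in=100.0):
    """Jsc, Voc, fill factor and efficiency from an illuminated J-V sweep.

    v in volts; j in mA cm^-2, taken positive in the power-generating quadrant
    and monotonically decreasing with v; p_in in mW cm^-2 (100 for AM 1.5).
    """
    jsc = np.interp(0.0, v, j)                   # current density at V = 0
    voc = np.interp(0.0, -j, v)                  # voltage at which J crosses zero
    p_max = np.max(v * j)                        # maximum power point, mW cm^-2
    ff = p_max / (jsc * voc)
    pce = 100.0 * p_max / p_in                   # efficiency in percent
    return jsc, voc, ff, pce

# Consistency check with the values reported below for the 140 °C device:
# Jsc = 2.3e-3 mA cm^-2, Voc = 0.9 V, FF = 0.145 under 100 mW cm^-2 illumination.
p_max = 2.3e-3 * 0.9 * 0.145                     # ~3.0e-4 mW cm^-2
print(f"PCE ~ {100.0 * p_max / 100.0:.1e} %")    # ~3.0e-4 %, in line with the reported 3.3e-4 %
```

Only numpy is required; the check at the end simply confirms that the reported short-circuit current density, open circuit voltage and fill factor together imply an efficiency of roughly 3 × 10−4% under 100 mW cm−2 illumination.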
The I-V curves of the devices under AM 1.5 solar illumination showed the photovoltaic effect with poor diode behavior. The short-circuit current density was 2.3 × 10−3 mA cm−2, the open circuit voltage was 0.9 volts, the calculated fill factor was 0.145 and the power conversion efficiency was 3.3 × 10−4%. The generated photocurrent of the solar cell device obtained in this work is low for several reasons. Firstly, residual solvent in the active layer lowers the electrical conductivity of the film. Secondly, the microstructure of the inkjet printed films was poor, as indicated by the high surface roughness. The annealing process increased the average roughness of the film from 8.4 to 13.2 nm. The surface morphology of the printed films before and after the annealing treatment can be seen from the AFM images in figure 5. During annealing, the evaporated solvent left behind many voids, and the organic molecules were not able to fill these voids. Work to overcome this problem is in progress, including improvement of the composition of the solvent components and multiple-temperature annealing treatments. Thirdly, the active layer thickness of 80 nm was small compared with the penetration depth of light in an organic semiconductor, which is typically 80-200 nm. This means that only a small fraction of the incident photons could be effectively absorbed by the active material, so exciton generation was reduced. The open circuit voltage obtained in this work is relatively high compared with that obtained from spin-coated devices. In our previous work, in which the film was prepared via spin coating, the open circuit voltage was 0.43 volt [6]. This difference may be related to the formation of the films. In the inkjet printed film, the layer was deposited drop by drop, so the donor and acceptor molecules remained close together. In contrast, in spin-coated films, the donor and acceptor molecules were separated by a larger distance due to the centrifugal force during the spinning process [7]. Such a small donor-acceptor separation may increase the electron-hole dissociation probability and hence increase the V oc [8].
Conclusion
A selection of solvents and annealing treatments for blended P3OT : PCBM inkjet printed films as the active layer of bulk heterojunction solar cells has been investigated. A mixture of dichlorobenzene/mesitylene (2 : 1) as the solvent showed the best reliability in the printing process and the most homogeneous printed film. The annealing treatment in a vacuum oven at 140 °C for 60 min succeeded in removing the solvent but increased the surface roughness of the printed film.
2019-04-06T00:44:03.380Z
2011-03-01T00:00:00.000
{ "year": 2011, "sha1": "19091548ace706f04296ec14b7ab6a99b360d863", "oa_license": "CCBYNCSA", "oa_url": "http://iopscience.iop.org/article/10.1088/2043-6262/2/1/015014/pdf", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "32b3df07d19578188dc726eafe1f602cc0020a50", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
29115964
pes2o/s2orc
v3-fos-license
Role of the C-terminal di-leucine motif of 5-HT1A and 5-HT1B serotonin receptors in plasma membrane targeting. The 5-HT1A and 5-HT1B serotonin receptors exhibit different subcellular localizations in neurons. Evidence has been reported that the C-terminal domain is involved in the somato-dendritic and axonal targeting of 5-HT1AR and 5-HT1BR, respectively. Here we analyzed the consequences of the mutation of a di-leucine motif and palmitoylated cysteines within this domain. Replacement of I414-I415 by a di-alanine in 5-HT1AR led to endoplasmic reticulum (ER) sequestration of the corresponding mutant expressed in cell lines as well as in hippocampal neurons in culture. Furthermore, di-leucine-mutated receptors were unable to bind 5-HT1A agonists and presented a major deficit in their glycosylation state, suggesting that they are misfolded. By contrast, mutation of the di-leucine motif in the C-terminal domain of 5-HT1BR had no major consequence on its subcellular targeting. However, in the case of the 1ActB chimera (substitution of the C-terminal domain of the 5-HT1BR into 5-HT1AR), this mutation was also found to cause sequestration within the ER. Replacement of palmitoylated cysteines by serines had no consequence on either receptor type. These data indicate that the di-leucine motif of the 5-HT1AR and 5-HT1BR tails is implicated in proper folding of these receptors, which is necessary for their ER export. Introduction The 5-HT 1A and 5-HT 1B serotonin receptors are two G-proteincoupled receptors (GPCRs) that exhibit a relatively high degree of homology in their amino acid sequences and share common features. Both are negatively coupled with adenylyl cyclase and act as auto-and heteroreceptors that modulate the activity of numerous neuronal systems (Barnes and Sharp, 1999). However, investigations on their distribution throughout the rat central nervous system showed major differences between these receptors: the 5-HT 1A R is localized on the somas and dendrites of neurons (Kia et al., 1996;Riad et al., 2000), whereas the 5-HT 1B R is found in preterminal unmyelinated axons (Riad et al., 2000;Sari et al., 1999). Interestingly, their neuronal functions depend on these respective localizations. The 5-HT 1A R modulates the firing of neurons (Haj-Dahmane et al., 1991), whereas the 5-HT 1B R participates in a local control of serotonin or other neurotransmitters release from axon terminals in projection areas (for a review, see Sari, 2004). Recently, we investigated the respective targeting of 5-HT 1A R and 5-HT 1B R by constructing 5-HT 1A R-5-HT 1B R chimeras for the transfection of polarized epithelial Lilly Pork Kidney (LLC-PK1) cells and hippocampal neurons in primary culture (Darmon et al., 1998;Jolimay et al., 2000). These studies showed that the short cytosolic C-terminal tail of both receptors plays a crucial role in their targeting. As also reported for other GPCRs (Bermak et al., 2001;Duvernay et al., 2004;Oksche et al., 1998;Pankevych et al., 2003;Rodriguez et al., 1992;Tai et al., 1999), this region appeared to be necessary for the transport of 5-HT 1A R and 5-HT 1B R to the cell surface, because truncated receptors without the C-terminal domain were sequestrated within the endoplasmic reticulum (ER) in LLC-PK1 cells. Furthermore, these studies also demonstrated that the cytosolic C-terminal tail of 5-HT 1B R contains an axonal-apical targeting signal (Jolimay et al., 2000). 
Within the C-terminal domain, the residues I414 and I415 in 5-HT 1A R and L379 and I380 in 5-HT 1B R constitute potential di-leucine motifs (Fig. 1). For numerous integral membrane proteins, such motifs have been shown to play a role in internalization and lysosomal or plasma membrane targeting (Hunziker and Fumey, 1994; Letourneur and Klausner, 1992; Schülein et al., 1998). On the other hand, the C-terminal domains of 5-HT 1A R and 5-HT 1B R contain palmitoylated cysteines (Ng et al., 1993; Papoucheva et al., 2004). In other GPCRs, these residues were found to be involved in several functional aspects, such as membrane targeting and receptor signaling (for a review, see Qanbar and Bouvier, 2003). In the present study, we investigated the potential role of di-leucine motifs and palmitoylated cysteines in the trafficking of 5-HT 1A R and 5-HT 1B R, using site-directed mutagenesis. For this purpose, the di-leucine motifs of 5-HT 1A R (I414-I415) and 5-HT 1B R (L379-I380) were replaced by di-alanine, and the palmitoylated cysteines (two in 5-HT 1A R and one in 5-HT 1B R) of the C-terminal domains were mutated to serine. These mutations were also introduced in the chimeras 1ActB and 1BctA, which we used previously to unveil the role of the C-terminal domains (Jolimay et al., 2000). All constructs were used for the transfection of two cell lines (COS-7 and LLC-PK1) and neurons, and the localization of the expressed proteins was analyzed by immunofluorescence and confocal microscopy. The intracellular localization of the di-leucine mutants was further characterized by double-labeling experiments with ER and Golgi markers. Finally, the characteristics of wild-type and mutated receptors expressed in transfected cells were further investigated by analyzing their glycosylation state, agonist binding properties and coupling to G proteins. Taken together, the data presented here indicate that the di-leucine motif of the 5-HT 1A R and 5-HT 1B R tails is implicated in proper folding of the receptors, which is necessary for their ER export.
Results
Di-leucine motif and palmitoylated cysteines contained in the C-terminus of both receptors (Fig.
1) were replaced by alanine and serine residues, respectively, using site-directed mutagenesis. All constructs were tagged by addition of a Flag epitope at their extracellular N-terminus to analyze their subcellular localization. Di-leucine-mutated 5-HT 1A R is localized in the ER Flag-tagged 5-HT 1A R as well as I414/415A and C417/420S mutants were used to transfect COS-7 ( Fig. 2A,C) or LLC-PK1 (Fig. 2C) cell lines or primary cultures of hippocampal neurons (Fig. 2B,C). Surface labeling was performed by incubating living cells with monoclonal mouse anti-Flag M2 antibody. Transfected cells were then fixed and permeabilized for the subsequent detection of the intracellular receptors using polyclonal rabbit anti-Flag antibody. As expected, 5-HT 1A R was mostly found at the plasma membrane of each cell type (~55-65% of surface labeling depending on cell type, Fig. 2C). By contrast, the I414/415A mutant was detected only at low levels at the plasma membrane (~10-24% of surface labeling), whereas the amounts of C417/420S mutant at the plasma membrane were very close to those of the wild-type 5-HT 1A R. Intracellular staining of di-leucine-mutated-5-HT 1A R was distributed in perinuclear ER-like structures. Moreover, in transfected COS-7 cells, we observed a strong co-localization of this I414/415A mutant with the ER luminal marker calregulin but not with the cis and median Golgi marker giantin (Fig. 3A). For these co-localization experiments, cells were treated with the protein synthesis inhibitor cycloheximide before fixation, to lower as much as possible the presence of newly synthesized receptors in ER or Golgi apparatus. Under these conditions, wildtype and C417/420S 5-HT 1A R did not co-localize with calregulin and giantin (Fig. 3A). This result indicates that the I414-I415 motif, but not the palmitoylated cysteines, is necessary for 5-HT 1A R exit from the ER. We also analyzed the glycosylation state of 5-HT 1A R and related mutants. Western blotting of membrane proteins from transfected LLC-PK1 cells showed that both wild-type and C417/420S 5-HT 1A R migrated mainly as a broad band of ~65 kDa and, to a lesser extent, a thinner band of ~50 kDa (Fig. 3B). By contrast, the I414/415A mutant migrated only as a band of ~50 kDa, which suggests that this construction was not correctly glycosylated. We thus treated membranes with endoglycosidase H (Endo H) to remove high-mannose N-glycosylation or peptide Nglycosidase F (PNGase F) to remove both core and complex N-glycosylation. The band observed after treatment with PNGase F should correspond to fully deglycosylated receptors. By contrast Endo H is active only on partially glycosylated proteins. After treatment of wild-type and C417/420S 5-HT 1A R with PNGase F, both 50 kDa and 65 kDa bands shifted to a band of ~44 kDa, corresponding to the molecular mass calculated for non-glycosylated Flag-tagged 5-HT 1A R. In the case of digestion with Endo H, the broad band of 65 kDa was still visible, confirming that this band corresponded to a fully glycosylated receptor. Only the thin 50 kDa band (partially glycosylated) was eliminated and converted to the ~44 kDa band. These results showed that the majority of wild-type and C417/420S 5-HT 1A R was completely glycosylated. Concerning the I414/415A mutant, both treatments with Endo H and PNGase F converted the ~50 kDa band into thẽ 44 kDa band, suggesting that this mutant was only partially glycosylated (core glycosylated). 
These data are consistent with ER retention of the latter mutant, as complex glycosylation occurs only in the Golgi apparatus (Kornfeld and Kornfeld, 1985). The di-leucine motif is necessary for ligand-binding capacity of 5-HT 1A R We compared the ligand binding capacity of 5-HT 1A R and mutants using the mixed 5-HT 1A -5-HT 1B agonist radioligand, [ 3 H]LSD (Darmon et al., 1998). Membranes of LLC-PK1 cells Journal of Cell Science 119 (20) transfected with wild-type or C417/420S-5-HT 1A R specifically bound equivalent amounts of tritium after incubation with 1.6 nM [ 3 H]LSD (~1 pmol/mg of protein, depending on transfection efficiency). By contrast, membranes of cells transfected with I414/415A mutant specifically bound only a very low amount of the radioligand (Fig. 4A). Interestingly, deglycosylation of wild-type 5-HT 1A R with PNGase F did not affect its binding capacity under the same assay conditions (not shown). Thus, the reduced [ 3 H]LSD binding capacity of the I414/415A mutant was very probably not caused by its incomplete glycosylation state. Because ligand binding requires correct folding, it can be inferred that the I414-I415 motif is necessary for correct folding of 5-HT 1A R. Role of the dileucine motif and palmitoylated cysteines in 5-HT 1A R coupling to G proteins We first analyzed the interaction of I414/415A and C417/420S 5-HT 1A R with ␣ subunits of G proteins. As illustrated in Fig. 4B, 5-CT-stimulated [ 35 S]GTP␥S binding onto membranes from LLC-PK1 cells did not statistically differ whether cells were transfected with wild-type or C417/420S 5-HT 1A R. However, this binding was significantly impaired in the case of membranes transfected with I414/415A mutant. Furthermore, it was shown that 5-HT 1A Rs also interact with G protein ␤␥ subunits to modulate the activity of ERK1/2 (Garnovskaya et al., 1996). We thus tested the ability of 5-HT 1A R and related mutants to activate ERK in transfected LLC-PK1 cells. After treatment with the agonist 8-OH-DPAT for 5 minutes, a ~sixfold increase in ERK2 phosphorylation was observed in cells expressing wild-type or C417/420S 5-HT 1A R (Fig. 4C,D). These results demonstrate that wild-type and C417/420S 5-HT 1A R activate ERK in our experimental conditions. By contrast, 8-OH-DPAT treatment of cells transfected with I414/415A mutant only induced a ~2.7-fold increase in ERK phosphorylation. Mutations in the C-terminus of 5-HT 1B R have only minor effects on its subcellular localization The percentages of 5-HT 1B R and related mutants at the plasma membrane were also examined. In COS-7 ( Fig. 5A,C) and LLC-PK1 cells (Fig. 5C), wild-type 5-HT 1B R displayed a lower level of surface staining than 5-HT 1A R (COS-7, 22.7±2.7%; LLC-PK1, 30.7±6.0%; P<0.001 compared with data in Fig. 2C for 5-HT 1A R). However, this difference between 5-HT 1A R and 5-HT 1B R did not reach statistical significance in neurons. Replacement of the di-leucine motif in the C-terminus of 5-HT 1B R by alanines (LI379/380A) significantly reduced its amount at the plasma membrane in LLC-PK1 cells but not in COS-7 cells and neurons ( Fig. 5A-C). As observed in the case of 5-HT 1A R (Fig. 2C), mutation of the palmitoylated cysteine into a serine (C384S) had no significant effect on the subcellular localization of 5-HT 1B R in all cell types analyzed ( Fig. 5A-C). 1ActB chimera reveals the role of the 5-HT 1B R di-leucine motif In a previous study, we substituted the cytosolic tail of 5-HT 1B R into the 5-HT 1A R and vice versa (Jolimay et al., 2000). 
Analysis of these chimeras expressed in LLC-PK1 cells and in Cells were permeabilized and anti-Flag labeling is shown in green and anti-calregulin or anti-giantin labeling is shown in red. Bar, 10 m. (B) Glycosylation state of receptors was analyzed by western blotting of crude membranes from transfected COS-7 cells with anti-5-HT 1A polyclonal antibody. Membranes were treated with Endo H, PNGaseF or untreated. Similar results were obtained in three independent experiments. 5-HT 1A R and 5-HT 1B R plasma membrane targeting neurons showed that an apical/axonal targeting signal is located in the C-terminus of 5-HT 1B R. The resulting chimeric receptors, 1ActB (5-HT 1A R with C terminus of 5-HT 1B R) and 1BctA (5-HT 1B R with the C-terminus of 5-HT 1A R), were tagged with the Flag epitope and mutants were constructed for both. On the other hand, 1ActB chimera was mostly localized at the plasma membrane ( Fig. 6D-F), like the 5-HT 1A R (Fig. 2). Di-leucine mutation of the chimera (LI414/415A) resulted in a very low level of plasma membrane localization. By contrast, the amounts of C419S-1ActB mutant at the plasma membrane were very close to those of non-mutated 1ActB (Fig. 6D-F). Discussion As with numerous other GPCRs, the cytosolic C-terminal region of 5-HT 1A R and 5-HT 1B R plays an important role in their subcellular localization. Indeed, previous studies showed that this region is necessary for receptor exit from the ER and also that the cytosolic C-terminal tail of 5-HT 1B R contains a dominant axonal-targeting signal (Jolimay et al., 2000). Here, we investigated the potential role of a di-leucine motif and palmitoylated cysteines contained in this receptor domain using site-directed mutagenesis. The data reported here clearly showed that the di-leucine motif contained in the C-terminal domain of 5-HT 1A R is implicated in its targeting to the plasma membrane. More precisely, this motif plays a crucial role in the correct folding of the receptor, which is necessary for its exit from the ER towards the plasma membrane. The role of the C-terminal dileucine motif of 5-HT 1B R is less clear. Its mutation into a dialanine motif did not modify the localization of the receptor in transfected neurons. However, in the 1ActB chimera, in which the C-terminal tail of 5-HT 1A R has been replaced by the Cterminal tail of 5-HT 1B R, the same di-leucine motif appeared to be implicated in receptor targeting to the plasma membrane. However, substitution of the palmitoylated cysteine residues with serines did not affect the subcellular localization of receptors as well as chimeras, and in the case of 5-HT 1A R, did not affect receptor binding or coupling to G proteins. Role of the di-leucine motif of 5-HT 1A R and 5-HT 1B R In GPCRs, di-leucine motifs localized mainly in the Cterminal cytosolic tail were found to be important for targeting. Some of these motifs were shown to act as clathrindependent endocytosis signals (Fan et al., 2001;Gabilondo et al., 1997;Gaudreau et al., 2004;Orsini et al., 1999). In addition, a role in receptor transport from the ER to the plasma membrane was found for a di-leucine motif with an upstream acidic residue in the case of vasopressin V 2 receptor and, more recently, for a di-leucine motif associated with an upstream phenylalanine residue in the case Basal phosphorylation was determined in the absence of ligand. Equal amounts of cell lysate were separated by SDS-PAGE, blotted and revealed with anti-ERK or anti-PERK antibody (C). 
PERK2 and ERK2 signals were quantified and ERK phosphorylation is expressed as fold increase over basal levels after normalization with total ERK2 signal (D). ERK phosphorylation in stimulated untransfected (LLC-PK1) cells did not differ from the basal signal. Each bar is the mean ± s.e.m. of three independent experiments. *P<0.05 and ***P<0.001, when compared with the wild type; ns, not significant. of ␣ 2B -adrenergic and angiotensin II type 1A receptors (Duvernay et al., 2004) or surrounded by three hydrophobic residues in the case of vasopressin V 3 receptor (Robert et al., 2005). Our results concerning the I414-I415 motif in the C-terminal tail of 5-HT 1A R are consistent with these previous findings. Indeed, replacement of this motif by two alanines resulted in a 5-HT 1A R mutant sequestrated within the ER. The loss of [ 3 H]LSD binding capacity of this mutant (29% of wild-type binding) suggests that such sequestration might be caused by an incorrect folding. However, this binding assay was performed with purified membranes of cells expressing the receptors and does not allow distinction of functional characteristics of plasma membrane versus intracellular receptors because ligand could access both pools of receptors. By contrast, for ERK phosphorylation assays, the ligand 8-OH-DPAT was added to living cells, and only plasma-membrane-localized receptors could thus be activated in this protocol. Interestingly, 8-OH-DPATinduced activation of ERK phosphorylation in cells transfected with I414/415A mutant was still ~2.7-fold over basal levels, corresponding to ~45% of the increase observed for the wild-type receptor. We therefore propose that the observed decrease in ERK activation is most probably due to intracellular sequestration of a large amount of this mutant and that the residual plasma-membrane-localized I414/415A 5-HT 1A Rs are functional. This would support the idea of an incorrect folding of sequestrated mutants, as receptors that can exit the ER and reach the plasma membrane seem to be functional, implying their correct folding. However, we cannot entirely exclude that the di-leucine motif of 5-HT 1A R may also participate in receptor ER exit by interaction with COP-II-associated proteins (for a review, see Barlowe, 2003), and that defective interactions caused by the mutation were actually responsible for ER sequestration of the I414/415A mutant. As noted by Schülein et al. and Duvernay et al. (Duvernay et al., 2004), this dileucine motif is highly conserved in the C-terminal tail of GPCRs, suggesting a general role in the exit from ER for most of these membrane proteins. However, in our study, replacement of the di-leucine motif by two alanines in the Cterminal tail of 5-HT 1B R only affected its subcellular localization in LLC-PK1 cells, as this mutant did not differ from the wild-type 5-HT 1B R regarding its targeting to the plasma membrane in both COS-7 cells and hippocampal neurons. This apparent discrepancy is not unique among GPCRs. Indeed, mutation of the two conserved leucines in the ␤ 2 -adrenergic receptor, which strongly diminished receptor endocytosis, did not affect its targeting to the plasma membrane and its capacity to bind specific ligands (Gabilondo et al., 1997). Therefore, the implication of the two leucines (or isoleucines) localized in the cytosolic C-terminal tail of GPCRs in the exit of the receptor from the ER may not be universal, but would depend on its environment or be associated with other signals. 
Concerning the 5-HT 1B R, its predominant intracellular localization found in most cell types tested could also explain why substitution of the di-leucine motif did Journal of Cell Science 119 (20) not generally produce further detectable intracellular sequestration. To further address this question, we constructed chimeras in which the C-terminal domains of 5-HT 1A R and 5-HT 1B R were switched (Jolimay et al., 2000). In transient transfection experiments, 1BctA chimera (5-HT 1B R core with 5-HT 1A R Ctail) exhibited nearly exclusive perinuclear localization, indicating that this construct is probably not functional. On the other hand, a relatively high proportion (~50%) of 1ActB chimera was localized at the plasma membrane (Fig. 5F), like that observed with the wild-type 5-HT 1A R (Fig. 2C). Accordingly, it can be inferred that the di-leucine motif of the C-terminal domain of 5-HT 1B R allows correct folding of this 1ActB chimera and its exit from ER. This would suggest the need for another signal localized in a different domain of the receptor in addition to the di-leucine motif of the C-terminal tail, which would be present in the 5-HT 1A R but not in the 5-HT 1B R, thereby leading to plasma membrane targeting of the 1ActB but not the 1BctA chimera. Alternatively, it is possible Role of palmitoylated cysteines 5-HT 1A R and 5-HT 1B R were shown to contain palmitoylated cysteines in their C-terminal domain (Ng et al., 1993;Papoucheva et al., 2004). Substitution of these residues with serines did not affect the subcellular localization of either receptors or chimeras. Furthermore, we found that the glycosylation state and the ligand-binding capacity of 5-HT 1A R were not dependent on the presence of the palmitoylated cysteines. These data are consistent with results obtained by Papoucheva et al. (Papoucheva et al., 2004) who recently demonstrated the constitutive palmitoylation of cysteines 417 and 420 of this receptor. These authors also showed that palmitoylated cysteines 417 and 420 are necessary for the receptor coupling to G ␣i3 subunit in transfected insect Sf.9 cells, as well as for its capacity to inhibit adenylyl cyclase activity in NIH3T3 cells and to activate ERK in CHO cells. By contrast, using [ 35 S]GTP␥S binding and ERK activation assays, we showed here that the mutation of both palmitoylated cysteines 417 and 420 did not significantly affect 5-HT 1A R coupling with G ␣ proteins and activation of ERK in LLC-PK1 cells. Such discrepancies might be explained by the use of different cell lines expressing different G ␣i subunits and other signaling molecules, in our studies compared with those of Papoucheva et al. (Papoucheva et al., 2004). First, in Sf.9 cells, the G ␣i3 subunit was cotransfected with receptors. In LLC-PK1 cells, both G ␣i2 and G ␣i3 subunits are endogenously expressed but exhibit different subcellular compartmentations: G ␣i2 is localized at the basolateral membrane, whereas G ␣i3 is restricted to the Golgi apparatus (Ercolani et al., 1990). As we found that 5-HT 1A R is mainly localized at the plasma membrane (Fig. 2), it should interact primarily with G ␣i2 in LLC-PK1 cells. Thus, [ 35 S]GTP␥S binding results agreed with the hypothesis that the palmitoylated cysteines 417 and 420 are necessary for 5-HT 1A R coupling with the G ␣i3 but not the G ␣i2 subunit. In the case of ERK phosphorylation, the differences between cell lines tested is less clear, because CHO cells express both G ␣i2 and G ␣i3 subunits. 
However, multiple signaling pathways can lead to the activation of ERK after stimulation of a particular GPCR, and some of these pathways may be activated independently of G proteins (for a review, see Werry et al., 2005). In the case of the 5-HT 1A R, the intracellular signaling pathway leading to ERK activation has been shown to implicate G ␤␥ subunits in CHO cells (Garnovskaya et al., 1996;Della Rocca et al., 1999). However, to date, it is not known whether the same pathway is involved in other cell lines, such as LLC-PK1 cells. It would thus be of interest to identify which intracellular signaling molecules contribute to ERK activation by 8-OH-DPAT in LLC-PK1 cells. In any case, these results suggest that palmitoylated cysteines play variable roles in 5-HT 1A R functional characteristics, depending on the cell type and the signaling molecules available. Such differences were already reported for other GPCRs. Thus, substitution of palmitoylated cysteines by alanines in the A subtype of the endothelin receptor (ET A R) has been shown to reduce its capacity to activate G i and G q proteins without affecting its capacity to activate G o protein (Doi et al., 1999). In conclusion, our data show that the di-leucine motif in the C-terminal domain of 5-HT 1A and 5-HT 1B receptors is necessary for their ER sorting through its implication in the proper folding of receptors. They also provide further support for the statement that 5-HT 1A and 5-HT 1B receptors are routed through distinct intracellular pathways towards their final targeting in neurons. Antibodies Anti-rat 5-HT 1A R antibody has been described previously (El Mestikawy et al., 1990;Kia et al., 1996;Riad et al., 2000). This polyclonal rabbit antibody is directed against a peptide sequence located within the third intracellular domain of the receptor. Mouse anti-Flag M2 monoclonal antibody, rabbit anti-Flag polyclonal antibody and mouse monoclonal anti-diphosphorylated-ERK1/2 (PERK) were purchased from Sigma (St Louis, MO). Rabbit polyclonal anti-ERK1/2 (ERK) was purchased from Upstate Biotechnology (Charlottesville, VA). Rabbit polyclonal anti-calregulin antibody was purchased from Santa Cruz Biotechnology (Santa Cruz, CA) and rabbit polyclonal anti-giantin antibody from CRP (Berkeley, CA). Neuronal cultures were made as described previously (Goslin et al., 1998) with some modifications. Hippocampi of rat embryos were dissected at day 17-18. After trypsinization, tissue dissociation was achieved with a Pasteur pipette. Cells were counted and plated on poly-L-lysine-coated 12-mm-diameter coverslips, at a density of 60,000-75,000 cells per 16-mm dish (300-375 cells per square millimeter), in complete Neurobasal medium supplemented with B27 (Invitrogen), containing 0.5 mM L-glutamine, 10 U/ml penicillin G, and 10 mg/ml streptomycin. Five hours after plating, the coverslips were transferred to a 90-mm dish containing conditioned medium obtained by incubating glial cultures (70-80% confluency) for 24 hours in the complete medium described above. Experiments were performed in agreement with the institutional guidelines for use of animals and their care, in compliance with national and international laws and policies (Council directives no. 87-848, Cell transfections For immunofluorescence experiments, LLC-PK1 and COS-7 cells were transferred to 12-mm-diameter coverslips 16 hours before transfection to obtain 30-50% confluency cultures. LLC-PK1 cells were transfected using Lipofectin reagent (Invitrogen). 
For each coverslip, 1 g plasmid DNA and 1-3 l Lipofectin were both diluted separately in 125 l serum-free DMEM. After a 30-45 minute incubation at room temperature, the two dilutions were combined and the resulting mix was left for another 10-15 minutes at room temperature. Cells were washed with 500 l serum-free DMEM and mix was added for an overnight incubation at 37°C. COS-7 cells were transfected using FuGENE reagent (Roche, Meylan, France). For each coverslip, 2 l FuGENE were diluted in 50 l D-PBS (Dulbecco's phosphate-buffered saline, Invitrogen) and incubated for 5 minutes at room temperature. The dilution was then mixed with 1 g plasmid DNA, and incubation proceeded for another 15 minutes. This mix was added to the growth medium (250 l) overlaying the cells and transfection lasted 24 hours at 37°C. Hippocampal neurons were transfected on the 7-8th day in vitro as follows: for each coverslip, plasmid DNA (0.8 g) was mixed with 50 l Neurobasal medium without B27 supplement. After 15 minutes at room temperature, 0.8 l Lipofectamine 2000 (Invitrogen) in 50 l Neurobasal medium were added and incubation continued for another 20 minutes. After the addition of 150 l of complete Neurobasal medium containing B27 supplement, the mix was applied onto the neuronal culture, and transfection lasted for 3 hours at 37°C. Typically, 5-10% of neurons expressed the receptors after transfection. For both cell lines and hippocampal neurons, receptor expression was allowed in growth medium for 24 hours after transfection. For preparation of membranes and ERK phosphorylation assays (see below), LLC-PK1 cells were transfected by electroporation using Gene Pulser Xcell electroporation system (Bio-Rad, Hercules, CA; 135 V, 1800 F in 200 l DMEM containing 5-10ϫ10 6 cells and 5-10 g plasmid DNA; relaxation time: ~40 milliseconds). Cells were then transferred to a 90-mm dish and grown for 3 days in LLC-PK1 growth medium. Indirect immunofluorescence Cells on coverslips were washed with D-PBS+ (D-PBS containing 0.1 mM CaCl 2 and 0.1 mM MgCl 2 ) at 37°C, then fixed with paraformaldehyde (3%) containing 4% sucrose at 37°C in D-PBS+, and permeabilized with 0.1% Triton X-100 in D-PBS+. After two 10-minute washes in D-PBS+, cells were incubated for 30 minutes in antibody buffer (3% bovine serum albumin, 2% normal goat serum, 2% normal donkey serum in D-PBS+). Incubation with primary antibodies was then performed in antibody buffer for 1 hour at room temperature. After two 10-minute washes in D-PBS+, incubation with secondary antibodies proceeded for 1 hour. The secondary antibodies used were Cy3-conjugated donkey anti-rabbit IgG (1:1600 dilution; Jackson ImmunoResearch, West Grove, PA), and Alexa Fluor 488-conjugated goat anti-mouse IgG (1:1600; Molecular Probes, Eugene, OR). The coverslips were finally mounted in Fluoromount-G solution (Clinisciences, Montrouge, France). For ER and Golgi co-localization experiments, cells were treated with the protein synthesis inhibitor cycloheximide (70 M) for 4 hours before fixation, to lower as much as possible the presence of newly synthesized receptors in ER or Golgi apparatus. ER and Golgi labeling was performed using anti-calregulin antibody (1:100 dilution) and anti-giantin antibody (1:2000 dilution), respectively, and receptors were labeled using anti-Flag M2 monoclonal antibody. For surface detection, anti-Flag M2 antibody (2.5 g/ml) was incubated with living cells for 20 minutes at room temperature. 
Cells were washed in D-PBS+, fixed with paraformaldehyde (3%) containing 4% sucrose, and incubated for 1 hour with Alexa Fluor 488-conjugated goat anti-mouse IgG in antibody buffer. After permeabilization with 0.1% Triton X-100 in D-PBS+, intracellular epitopes were detected using rabbit anti-Flag polyclonal antibody (0.85 g/ml) subsequently revealed by Cy3-conjugated donkey anti-rabbit IgG. Immunofluorescence images were generated using a Leica laser-scanning confocal microscope. For relative surface label analysis, unsaturated acquisitions were made with the same exposure settings and laser gain for each condition. For each cell type, at least ten cells were analyzed. Quantification of surface and intracellular staining were performed using ImageJ software (NIH) according to Jaskolski et al. (Jaskolski et al., 2004) with modifications, and statistical analysis was carried out using GraphPad Prism 4 (GraphPad Software, San Diego, CA). Background was lowered using Gaussian blur (radius 1 pixel) and an intensity threshold was fixed just above the background level to maximally reduce nonspecific staining. Single cells were selected and carefully traced manually. Surface (S) and intracellular (I) areas with labeling above threshold were measured and the percentage of surface receptor labeling calculated as Sϫ100/(S+I). Contrast and brightness of images displayed in figures were modified using Adobe Photoshop 7.0 (Adobe Systems, San Jose, CA) for clearer demonstration and do not correspond to the analysis conditions. 5-HT 1A R and 5-HT 1B R plasma membrane targeting Preparation of membranes Transfected LLC-PK1 cells were washed with D-PBS, scraped into Tris buffer (50 mM Tris-HCl, pH 7.4), and homogenized with a Polytron. After each of four successive washings in Tris buffer, the membranes were collected by centrifugation at 31,000 g for 20 minutes at 4°C. An incubation for 10 minutes at 37°C was performed after the first washing to eliminate 5-HT (from the serum in the culture medium), and the final pellet was suspended in the same Tris buffer to be stored at -80°C until use. Protein concentration was measured using BCA protein assay kit (Pierce, Rockford, IL). For membranes subsequently used for binding assays, deglycosylation was done without denaturation. Radioligand binding assays Binding assays were performed using 20-25 g membrane proteins in 500 l of 50 mM Tris-HCl buffer, pH 7.4, supplemented with 1.6 nM [ 3 H]lysergic acid diethylamide ([ 3 H]LSD; 79.2 Ci/mmol; Amersham Biosciences). Incubations were performed for 90 minutes at 25°C. Non-specific binding was determined in the presence of 10 M 5-HT. Assays were stopped by rapid filtration through Whatman GF/B filters coated with polyethylenimine (0.5% v/v). Subsequent washing and counting of entrapped radioactivity were as described by Fabre et al. (Fabre et al., 1997). Specific binding is expressed as a percentage of wild-type receptor specific binding. Data were corrected for individual variations in transfection efficiency by relative quantification of receptors as detailed in the deglycosylation procedures. Data analysis was done using GraphPad Prism 4. [ 35 S]GTP-␥-S binding assays Binding of [ 35 S]GTP␥S onto transiently transfected LLC-PK1 cell membranes stimulated by 5-carboxamido-tryptamine maleate (5-CT; Sigma) was measured according to a procedure adapted from Alper and Nelson (Alper and Nelson, 1998) and Fabre et al. (Fabre et al., 2000). 
Briefly, membranes (~40-50 g protein) were incubated for 30 minutes at 37°C in a final volume of 800 l assay buffer (50 mM Tris-HCl, 3 mM MgCl 2 , 120 mM NaCl, 0.2 mM EGTA) containing 0.1 nM [ 35 S]GTP␥S (1000 Ci/mmol, Amersham Biosciences), 300 M GDP and 1 M 5-CT. The reaction was terminated by addition of 3 ml ice-cold 50 mM Tris buffer and rapid vacuum filtration through Whatman GF/B filters. Each filter was then washed twice with 3 ml ice-cold Tris buffer, placed into 4.5 ml scintillation fluid and its entrapped radioactivity measured. Basal [ 35 S]GTP␥S binding was determined from samples without 5-CT, and non specific binding from those supplemented with both 5-CT and WAY 100,635 (1 M), a selective 5-HT 1A R antagonist (Fabre et al., 2000). ERK phosphorylation assays Electroporated LLC-PK1 cells were transferred to 35-mm dishes 24 hours after transfection and grown for another 10 hours in LLC-PK1 growth medium. Cells were then starved for at least 8 hours in serum-free medium and subsequently treated with 1 M 8-OH-DPAT (8-hydroxy-N,N-dipropyl-2-amino-tetralin, Sigma) for 5 minutes at 37°C under 5% CO 2 . Cells were then washed in ice-cold D-PBS and lysed in 50 l sodium dodecyl sulphate sample buffer (Laemmli, 1970). For each transfection, an equal portion of the cells was set aside for protein determination with BCA kit. Equal quantities of extracts from cells exposed to 8-OH-DPAT or none (controls) were separated by electrophoresis, transferred to nitrocellulose, and probed with rabbit anti-ERK (1:10,000 dilution) or mouse anti-PERK (1:2500 dilution) antibodies. After incubation with anti-rabbit (1:10,000 dilution, Sigma) or anti-mouse (1:2500 dilution, Sigma) antibodies coupled to horseradish peroxidase, revelation was performed with the ECL+ kit. ERK-2 and PERK2 bands were quantified as described in deglycosylation section. PERK2/ERK2 ratio was calculated for each sample for normalization and data were expressed as the fold increase over basal levels. Data analysis was done using GraphPad Prism 4. Statistical analyses For relative surface label analysis, statistical significance was assessed using a oneway ANOVA followed by a Bonferroni test. For [ 3 H]LSD and [ 35 S]GTP␥S binding assays as well as for ERK activation assays, differences were evaluated using unpaired and paired Student's t-tests, as appropriate. The significance level was set at P<0.05.
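Two of the quantification steps described in the methods above reduce to simple arithmetic: the percentage of surface receptor labeling, S×100/(S+I), computed from thresholded confocal images, and the PERK2/ERK2 normalization used to express ERK activation as a fold increase over basal. The sketch below is only an illustration of those calculations, not the authors' actual ImageJ/GraphPad workflow; the array names, threshold value, and band intensities are hypothetical placeholders.

```python
import numpy as np

def percent_surface(surface_img, intracellular_img, threshold):
    """Surface labeling as S*100/(S+I) for one traced cell.

    surface_img, intracellular_img : 2D arrays of pixel intensities for the
        live-cell (surface) and post-permeabilization (intracellular) channels
    threshold : intensity cut-off set just above background
    """
    s = np.count_nonzero(surface_img > threshold)        # S, labeled area above threshold
    i = np.count_nonzero(intracellular_img > threshold)  # I
    return 100.0 * s / (s + i) if (s + i) else float("nan")

def erk_fold_increase(perk2_stim, erk2_stim, perk2_basal, erk2_basal):
    """Fold increase in ERK2 phosphorylation over basal.

    Each PERK2 band intensity is normalized to the total ERK2 intensity
    from the same lysate before the stimulated/basal ratio is taken.
    """
    return (perk2_stim / erk2_stim) / (perk2_basal / erk2_basal)

# Toy inputs standing in for one confocal acquisition and one western blot
rng = np.random.default_rng(0)
surface = rng.random((256, 256))
intracellular = rng.random((256, 256)) * 0.6
print(f"{percent_surface(surface, intracellular, threshold=0.5):.1f}% surface labeling")
print(f"{erk_fold_increase(6.1, 1.0, 1.0, 1.0):.1f}-fold ERK activation")
```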
Massively Parallel Reporter Assays for High-Throughput In Vivo Analysis of Cis-Regulatory Elements The rapid improvement of descriptive genomic technologies has fueled a dramatic increase in hypothesized connections between cardiovascular gene expression and phenotypes. However, in vivo testing of these hypotheses has predominantly been relegated to slow, expensive, and linear generation of genetically modified mice. In the study of genomic cis-regulatory elements, generation of mice featuring transgenic reporters or cis-regulatory element knockout remains the standard approach. While the data obtained is of high quality, the approach is insufficient to keep pace with candidate identification and therefore results in biases introduced during the selection of candidates for validation. However, recent advances across a range of disciplines are converging to enable functional genomic assays that can be conducted in a high-throughput manner. Here, we review one such method, massively parallel reporter assays (MPRAs), in which the activities of thousands of candidate genomic regulatory elements are simultaneously assessed via the next-generation sequencing of a barcoded reporter transcript. We discuss best practices for MPRA design and use, with a focus on practical considerations, and review how this emerging technology has been successfully deployed in vivo. Finally, we discuss how MPRAs are likely to evolve and be used in future cardiovascular research. Introduction The mammalian heart is a complex organ composed of diverse cell types that must undergo specification, differentiation, and maturation [1][2][3]. Collectively, these processes drive morphogenesis and enable mature cardiac function. Abnormalities in heart development can cause congenital heart disease (CHD) [4][5][6] or can contribute to impairments in adult cardiac function, resulting in a diversity of conditions including dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM), and arrhythmogenic cardiomyopathy (ACM) [7][8][9][10]. The coordination of multiple cardiac lineages during development and adult homeostasis is orchestrated by the precise transcriptional control of gene expression; therefore, developing a firm understanding of how gene expression is controlled and how aberrant gene expression impacts cardiac phenotypes is a crucial first step in the development of targeted therapies. Over the past few decades, studies of cardiovascular gene expression have benefited from a spectrum of approaches for creating genetically modified model organisms, with most mammalian studies being conducted in mice. These murine models have included the generation of transgenics, homologous recombination-mediated systemic gene knockouts, targeted gene knockins, and conditional and inducible gene knockout via the Cre/LoxP system [11][12][13][14]. While these approaches have revealed significant mechanistic insights into the development and function of the heart, they exhibit limitations related to the time and expense required. While the widespread dissemination of CRISPR/Cas gene editing technology within the last decade has expedited the process of generating genetically modified mice [15,16], the one mouse line for one modification paradigm remains too slow and constraining for the efficient systematic characterization of cardiac transcriptional networks. Thus, alternative approaches that prioritize speed, flexibility, and throughput are needed. 
Genome-wide association studies (GWAS) indicate that more than 90% of diseaseassociated genetic variation is located within non-coding regions [17], which include enhancers and promoters. The detection, validation, and functional characterization of disease-associated regulatory elements can expand our understanding of gene regulation and improve our ability to treat human disease. Methodologies such as DNase I hypersensitive site sequencing (DNase-seq) and assay for transposase-accessible chromatin sequencing (ATAC-seq) to identify accessible chromatin regions [18,19], enhancer RNA sequencing [20], and chromatin immunoprecipitation sequencing (ChIP-seq) to reveal DNA occupancy by transcription factors or chromatin markers [21] have all been used to generate numerous cis-regulatory element predictions. For instance, large annotation efforts such as the US National Institutes of Health Roadmap Epigenomics Program and ENCODE have uncovered millions of putative regulatory elements within more than 100 human cell types [22,23]. Importantly, all of these methods involve measuring genomic characteristics that correlate with cis-regulatory element activity, but do not directly measure activity. As a result, the vast majority of candidate elements remain functionally uncharacterized. Validation and functional analyses via traditional methods such as transient transgenic reporters, gene-targeted reporters, and enhancer knockout have been used successfully for a subset of elements; however, the low-throughput high-cost nature of these approaches diminishes their utility. Recently, massively parallel reporter assays (MPRAs) have been deployed to bypass these limitations [24][25][26]. MPRAs are a powerful functional genomics technique that utilizes a reporter assay with a sequencing-based readout to measure the activities of thousands of DNA elements in a single experiment. In this manuscript, we provide an overview of how MPRAs work, and review key points for design, execution, and data analysis. We highlight variations in the MPRA approach, summarizing the strengths and weaknesses of each, and we highlight cardiacspecific considerations. Furthermore, we review the progress that has been made toward adapting MPRAs to in vivo experimentation via viral delivery. Finally, we discuss the limitations of the MPRA technique and how MPRAs are likely to evolve and be used in future cardiovascular research. Massively Parallel Reporter Assays As modern genomics identifies ever increasing numbers of candidate cis-regulatory elements, the validation of candidates has become a major bottleneck in the field. While the activities of small numbers of candidates can be measured in vivo in mouse via the germline integration of reporter constructs, and larger numbers of candidates can be tested in vitro via transfected reporter constructs, a true high-throughput solution (the MPRA) has only recently been developed. MPRAs utilize a next-generation sequencing-based readout, which allows for many thousands of reporter constructs to be measured simultaneously in the same sample. There are several variations of the MPRA, but the key feature in each is that the regulatory element, or a barcode corresponding to the element, is embedded within an untranslated region (UTR) of a reporter gene such that the element will drive the transcription of itself or its linked barcode (Figure 1a), which subsequently will be referred to as an enhancer-identifier. 
Thus, when a pool of regulatory elements is assayed in an MPRA vector, the reporter gene RNA transcripts will contain enhancer-identifiers in proportion to each enhancer's relative strength. The relative frequency of each element can be measured by the reverse transcription, amplification, and sequencing of the portion of the transcript containing the enhancer-identifiers. To account for the starting frequency of each element in the pool, the vector DNA is amplicon-sequenced, and the activity for each element is then expressed as the RNA:DNA ratio or a derivative thereof. While a number of tools and guides can be referenced during MPRA experimental design and analysis [27][28][29][30], the basic framework is fairly simple (Figure 1b) and should be well understood before undertaking an experiment. To that end, here, we provide a practical overview of the major steps and pitfalls associated with designing and executing an MPRA, with particular attention to cardiovascular and in vivo applications. We focus primarily on enhancers, as they have been the subject of the majority of the relevant literature; however, the MPRA vectors that we discuss can be easily modified for the analysis of promoters, with very few conceptual differences in the experiment. Experimental Design: Assay Configuration and Context Choosing an appropriate variant of the MPRA assay is a key step in experimental design that will impact the production and cloning of the cis-regulatory element library, as well as how the results are analyzed (Figure 2). Several configurations of the MPRA vector have been reported and characterized [26]. One common configuration features the insertion of an enhancer pool in the 3′ UTR of the reporter gene. In this approach, termed self-transcribing active regulatory region-sequencing (STARR-seq) [31], the enhancer sequence functions as its identifier so that enhancer identity and activity can both be determined by the sequencing of the reporter transcript. While elegant, the STARR-seq approach may be biased by enhancer sequences that affect reporter-transcript stability, and it has also been reported to suffer from elevated sample-to-sample variation [26]. Two additional common configurations feature an upstream position for candidate enhancers, which are linked with a barcode positioned in either the 5′ or 3′ reporter gene UTR [32][33][34]. Since a small barcode in the UTR is unlikely to affect transcript stability, these configurations are commonly regarded as more rigorous than STARR-seq; however, these assays may bias towards promoter-like elements [26].
Furthermore, library construction is more complex, often requiring multiple cloning steps that must maintain library diversity and the integrity of the enhancer-barcode links [35][36][37][38]. Likewise, the analysis of the data may also require significant additional expertise, depending on the strategy used for barcoding. When these three configurations were directly compared using an integrating vector, the reproducibility of the 5′ and 3′ barcoding approaches was excellent, with Pearson correlations between replicates exceeding 0.95, while the STARR-seq approach yielded moderately lower correlations exceeding 0.85 [26]. Interestingly, when this assay was repeated in a mutant non-integrating lentiviral vector, the correlations between the replicates dropped sharply for STARR-seq (to ~0.5) and moderately for the 3′ barcoding approach (to ~0.8). These results indicate that when executed with care, all three approaches are viable, but the STARR-seq approach may require additional replicates to establish comparable statistical power.
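Whatever configuration is chosen, the downstream readout introduced at the start of this section, a depth-normalized RNA:DNA count ratio per element, is straightforward to compute once barcodes have been tallied in the RNA and vector-DNA amplicon libraries. The minimal sketch below is an illustration rather than any published pipeline; the barcode-to-enhancer map, count dictionaries, and minimum-count cut-off are hypothetical.

```python
from collections import defaultdict

def enhancer_activity(rna_counts, dna_counts, barcode_to_enhancer, min_dna=10):
    """Per-enhancer activity as a depth-normalized RNA:DNA ratio.

    rna_counts, dna_counts : dict mapping barcode -> read count
    barcode_to_enhancer    : dict mapping barcode -> enhancer ID
    min_dna                : barcodes with fewer DNA reads are skipped
    """
    rna_total = sum(rna_counts.values())
    dna_total = sum(dna_counts.values())
    rna_per_enh = defaultdict(float)
    dna_per_enh = defaultdict(float)
    for bc, enh in barcode_to_enhancer.items():
        if dna_counts.get(bc, 0) < min_dna:
            continue  # barcode poorly represented in the vector pool
        # Normalize each library to its own sequencing depth (counts per million)
        rna_per_enh[enh] += 1e6 * rna_counts.get(bc, 0) / rna_total
        dna_per_enh[enh] += 1e6 * dna_counts[bc] / dna_total
    return {enh: rna_per_enh[enh] / dna_per_enh[enh] for enh in dna_per_enh}

# Toy example: two enhancers, two barcodes each
bc_map = {"AAAA": "enh1", "AAAT": "enh1", "CCCC": "enh2", "CCCG": "enh2"}
rna = {"AAAA": 900, "AAAT": 850, "CCCC": 60, "CCCG": 40}
dna = {"AAAA": 100, "AAAT": 120, "CCCC": 110, "CCCG": 95}
print(enhancer_activity(rna, dna, bc_map))  # enh1 well above 1, enh2 well below 1
```

In practice the same ratios are computed per replicate and per barcode, so that element-level activities can be averaged and compared statistically, but the underlying arithmetic does not change.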
Assays are commonly conducted within vectors that are either episomal, such as a transient plasmid transfection and an adeno-associated virus (AAV), or chromosomally integrated via lentivirus. With an integrating vector, the chromatin state likely is more representative of that of the native enhancers, although locusdependent integration effects may add noise to the dataset. Some significant differences between integrated and episomal assays have been noted [26,39]. However, in these studies, Pearson correlations between episomal and integrated contexts typically exceeded 0.8, suggesting that episomal assays are sufficient to capture most of the valuable signals. This is encouraging since, as the field moves toward in vivo MPRAs, many cells that are poorly transduced by lentivirus, such as cardiomyocytes, are robustly transduced by AAV. In conclusion, while configuration and integration state should be chosen for maximum alignment with features of the model system and available expertise, all common MPRA configurations have been used successfully in a variety of contexts. Experimental Design: Library Construction, Data Collection, and Analysis Library construction begins with the design of the enhancer pool. Candidate regulatory elements can be selected from any number of sources, including regions of interest from ChIP-sequencing, chromatin accessibility, DNAse hypersensitivity, non-coding genomic variants from clinical sequencing data, conserved non-coding sequences, or published enhancer atlases that integrate multiple sequence features [40][41][42][43]. In addition to candidate regions, positive and negative controls should also be included. Negative controls may originate from a variety of sources, including candidate enhancers from a different cell type, random genomic sequences, or regions individually validated as being inactive. Base shuffling of candidate regions is an attractive option as it preserves nucleotide frequencies. When choosing negative controls, it is important to include a robust number of regions (typically at least several hundred) in order to later set a meaningful cut-off when categorizing candidates as "active" or "inactive". In the past, we have identified active enhancers as those with activity greater than 95% of that of the negative controls (i.e., a 5% false discovery rate) [42,44]. Positive controls should consist of regions previously validated as having activity in sufficient numbers to instill confidence in the assay; this group typically consists of fewer regions than the negative control group. Finally, the number of replicates per sequence is an additional consideration. While many studies use only a single replicate, designing each candidate sequence to be produced in combination with multiple barcodes allows for multiple measurements per candidate within each sample, which improves the statistical power of subsequent analyses. After regions have been selected, the size of the regions to be assayed must be chosen. Fragment size may depend in large part on the method that will be used to produce the regions. While some methods, such as error-prone PCR or region capture via array-based As demonstrated in the above-mentioned study, vector context is an important consideration for any MPRA system. Assays are commonly conducted within vectors that are either episomal, such as a transient plasmid transfection and an adeno-associated virus (AAV), or chromosomally integrated via lentivirus. 
With an integrating vector, the chromatin state likely is more representative of that of the native enhancers, although locus-dependent integration effects may add noise to the dataset. Some significant differences between integrated and episomal assays have been noted [26,39]. However, in these studies, Pearson correlations between episomal and integrated contexts typically exceeded 0.8, suggesting that episomal assays are sufficient to capture most of the valuable signals. This is encouraging since, as the field moves toward in vivo MPRAs, many cells that are poorly transduced by lentivirus, such as cardiomyocytes, are robustly transduced by AAV. In conclusion, while configuration and integration state should be chosen for maximum alignment with features of the model system and available expertise, all common MPRA configurations have been used successfully in a variety of contexts. Experimental Design: Library Construction, Data Collection, and Analysis Library construction begins with the design of the enhancer pool. Candidate regulatory elements can be selected from any number of sources, including regions of interest from ChIP-sequencing, chromatin accessibility, DNAse hypersensitivity, non-coding genomic variants from clinical sequencing data, conserved non-coding sequences, or published enhancer atlases that integrate multiple sequence features [40][41][42][43]. In addition to candidate regions, positive and negative controls should also be included. Negative controls may originate from a variety of sources, including candidate enhancers from a different cell type, random genomic sequences, or regions individually validated as being inactive. Base shuffling of candidate regions is an attractive option as it preserves nucleotide frequencies. When choosing negative controls, it is important to include a robust number of regions (typically at least several hundred) in order to later set a meaningful cut-off when categorizing candidates as "active" or "inactive". In the past, we have identified active enhancers as those with activity greater than 95% of that of the negative controls (i.e., a 5% false discovery rate) [42,44]. Positive controls should consist of regions previously validated as having activity in sufficient numbers to instill confidence in the assay; this group typically consists of fewer regions than the negative control group. Finally, the number of replicates per sequence is an additional consideration. While many studies use only a single replicate, designing each candidate sequence to be produced in combination with multiple barcodes allows for multiple measurements per candidate within each sample, which improves the statistical power of subsequent analyses. After regions have been selected, the size of the regions to be assayed must be chosen. Fragment size may depend in large part on the method that will be used to produce the regions. While some methods, such as error-prone PCR or region capture via array-based probes, can generate enhancer libraries with fragment lengths exceeding 1 kb [45], the upper limit of region size is commonly dictated by the constraints of pooled oligo synthesis, which currently sit at 350 bp [46]. If the chosen strategy incorporates barcodes and/or priming sites, region size is limited to~300 bp. However, multiple groups have presented methods for the assembly of overlapping oligos to generate enhancers of increased length. 
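Returning to the negative-control cut-off described earlier in this section: calling elements "active" at a 5% empirical false discovery rate amounts to thresholding at the 95th percentile of the negative-control activities. The sketch below is one reasonable way to express that calculation, not the exact procedure used in the cited studies; the simulated activities and identifiers are placeholders.

```python
import numpy as np

def call_active(activities, negative_ids, fdr=0.05):
    """Call candidates active if they exceed the (1 - fdr) quantile of negatives.

    activities   : dict mapping element ID -> RNA:DNA activity
    negative_ids : element IDs designated as negative controls
    """
    negative_ids = list(negative_ids)
    neg = np.array([activities[i] for i in negative_ids])
    threshold = np.quantile(neg, 1.0 - fdr)  # 95th percentile for a 5% FDR
    neg_set = set(negative_ids)
    calls = {i: act > threshold for i, act in activities.items() if i not in neg_set}
    return calls, threshold

# Simulated data: several hundred negative controls plus two candidates
rng = np.random.default_rng(1)
acts = {f"neg_{k}": float(v) for k, v in enumerate(rng.lognormal(0.0, 0.3, 500))}
acts["candidate_A"] = 3.5
acts["candidate_B"] = 1.0
calls, thr = call_active(acts, [i for i in acts if i.startswith("neg_")])
print(f"threshold = {thr:.2f}")
print(calls)  # expect candidate_A called active, candidate_B not
```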
A recent Nature Methods study demonstrated that two oligos can be assembled to produce 354 bp enhancers, or three oligos for 678 bp enhancers [26], while our own work has demonstrated the assembly of two oligos to produce 400 bp enhancer pools [42]. Unfortunately, few studies have systematically investigated how enhancer length affects reporter assay results, with the aforementioned Nature Methods study being the best available data. This study, which was conducted in cultured Hep2G cells, compared the activities of 651 candidate enhancers at three different lengths: 192 bp, 354 bp, and 678 bp. The authors observed a Pearson correlation between 192 bp and 678 bp enhancers of 0.53, indicating substantial differences in activity between different length enhancers. However, the direction of the difference varied from enhancer to enhancer, and significant differences in mean group activity level for different length enhancers were not observed. Surprisingly, no significance difference was reported between positive and negative controls in the 678 bp group, while significant differences were observed for the two shorter groups. To provide some clarity on the effect of enhancer length on MPRA results, we conducted a similar experiment featuring 50 enhancers that had been validated for activity in transient transgenic mouse embryos, individually synthesized in 200 bp, 400 bp, and 1000 bp lengths. Enhancers were pooled and cloned into an AAV9 reporter vector containing a minimal generic Hsp68 promoter or a short promoter sequence from the cardiac sarcomere gene Mlc2v (Figure 3a). In the heart, AAV9 selectively transduces cardiomyocytes. Of the 50 enhancers tested, 25 were cardiomyocyte positive control enhancers, and 25 were negative controls that displayed endothelial/endocardial specific activity in transgenic reporter mice. For the Hsp68 promoter-containing vector, we observed that on average, positive controls had significantly higher activity than negative controls for all lengths, with activity increasing as enhancer length increased (Figure 3b). For the Mlc2v promoter-containing vector, we observed similar results; however, the magnitude of the difference between positive and negative controls (i.e., the dynamic range of the assay) was considerably larger (Figure 3c). Next, we analyzed the correlation in activity between 200 bp and 1000 bp myocardial enhancers (Figure 3d). We observed a positive, albeit weak, correlation (Pearson correlation = 0.19). Enhancers with elevated activity at one length tended to also have elevated activity at the other length. However, not all enhancers followed this trend, with several enhancers showing high activity only within the 1000 bp group. Importantly, no enhancers had high activity in the 200 bp group and low activity in the 1000 bp group. Since activating motifs are much more frequent than repressive motifs, longer enhancers typically have similar or greater activity than truncations. Thus, our observations are consistent with expectations based on known enhancer biology. Interestingly, for both assays, as enhancer size increased, we observed a modest but significant increase in the activity of negative controls, in addition to the increase observed in positive controls. 
Thus, in both the Hsp68 vector and the Mlc2v vector, the ratio between positive and negative control activities did not dramatically change as enhancer size changed, suggesting that a wide range of enhancer sizes can be effectively used in MPRA assays, with larger enhancers having higher absolute levels of activity but only a moderately improved dynamic range. Our results suggest that the choice of a promoter that has a high likelihood of robust compatibility with the candidate enhancers should be carefully considered, as should to enhancer size. Figure 3. Effect of Enhancer Length on Activity (a) MPRA configuration. A library of 50 enhancers, each tested in three different lengths and with two different promoters (300 combinations), was packaged into AAV9 and delivered to newborn mice. Enhancers were selected from the VISTA Enhancer Browser of transgenic reporter data, and included 25 candidates active in the embryonic myocardium and 25 negative control candidates active in embryonic endothelium but not in myocardium. In the heart, AAV9 selectively transduces cardiomyocytes. Imaging of hearts from mice injected with the MPRA pool identified cardiomyocytes (green; Myh7 YFP ) with robust reporter expression (red; mScarlet) scattered throughout the myocardium. After collecting ventricles at P28, the reporter transcripts were sequenced, and the frequency of each barcode was compared to its frequency in the viral pool DNA. (b) When combined with an Hsp68 promoter, average myocardial enhancer activity was higher than endothelial enhancer activity at all lengths. Within both enhancer groups, longer enhancers generally displayed higher activity than shorter enhancers. (c) When combined with an Mlc2v promoter, average myocardial enhancer activity was again greater than endothelial enhancer activity at all lengths; however, the difference between the two enhancer groups was much more pronounced. Within both groups, longer enhancers once again displayed increased activity. (d) Correlation between 200 bp and 1000 bp activities for myocardial enhancers. Activities (RNA:DNA ratios) were normalized to the 200bp endothelial group average. Steel-Dwass p < 0.05 *, p ≤ 0.001 **, p ≤ 0.0001 ***. After enhancer selection and enhancer size choice, regions are typically generated by pooled oligo synthesis, PCR amplified, and cloned into the MRPA vector. As with any library amplification, it is important to use the minimum necessary number of PCR cycles to avoid mutations, maintain pool diversity, and avoid a recombination that degrades enhancer-barcode links [35]. Emulsion PCR, in which template molecules are segregated into small aqueous droplets in oil for highly parallelized amplification, is another technical approach that can minimize these issues [47]. After insertion of enhancers into the MPRA vector, sufficient plasmid must be generated for transient transfection or virus production. This typically involves the electroporation of a ligation product and the collection of a large number of bacterial colonies. In order to maintain library diversity, it is critical to collect a sufficiently large number of colonies, with greater than 100x more colonies than library sequences being ideal [48]. At this point, library plasmids should be ampliconsequenced to verify the sufficient representation of most candidates. Candidates with poor representation in the plasmid or viral library will be excluded from subsequent analyses of RNA samples. 
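A quick representation check on the amplicon-sequenced plasmid or viral pool, of the kind recommended above, might look like the following sketch; the minimum-read cut-off and count table are hypothetical, and real QC criteria will vary between laboratories.

```python
def library_representation(dna_counts, designed_ids, min_reads=50):
    """Fraction of the designed library adequately represented in the cloned pool.

    dna_counts   : dict mapping element ID -> read count in the plasmid/viral pool
    designed_ids : all element IDs ordered in the oligo synthesis
    min_reads    : minimum count for an element to be retained downstream
    """
    usable = [i for i in designed_ids if dna_counts.get(i, 0) >= min_reads]
    dropped = [i for i in designed_ids if dna_counts.get(i, 0) < min_reads]
    return usable, dropped, len(usable) / len(designed_ids)

designed = [f"enh_{k}" for k in range(5)]
pool_counts = {"enh_0": 400, "enh_1": 9, "enh_2": 120, "enh_3": 0, "enh_4": 75}
usable, dropped, frac = library_representation(pool_counts, designed)
print(f"{frac:.0%} of designed elements usable; dropped: {dropped}")
```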
Following the creation of the vector pool, cells are transfected or transduced. Adequate coverage of the library requires that each unique sequence be sampled many times, with 500x being a commonly referenced benchmark [49].
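The arithmetic behind this benchmark is simple; the sketch below codifies it, using the same illustrative mouse-heart numbers as the worked example in the next paragraph. The library size, coverage target, cell number, and transduction rate are assumptions for illustration, not fixed parameters.

```python
# Minimal sketch of MPRA coverage arithmetic; all numbers are illustrative
# assumptions mirroring the worked mouse-heart example in the text.
import math

library_size = 14_000          # unique cis-regulatory elements
target_coverage = 500          # desired samplings per element
cells_per_animal = 2_000_000   # approximate cardiomyocytes per mouse heart
transduction_rate = 0.7        # fraction of cells transduced (one vector genome per cell assumed)

required_samplings = library_size * target_coverage
samplings_per_animal = cells_per_animal * transduction_rate

animals_needed = math.ceil(required_samplings / samplings_per_animal)
print(f"~{animals_needed} animals needed for {target_coverage}x coverage")  # -> 5
```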
As an example, given that the mouse heart contains ~2 million cardiomyocytes [50], if 70% of the cardiomyocytes are transduced, and we conservatively estimate one viral particle per transduced cell, then a library of 14,000 cis-regulatory elements will require at least five mice for adequate coverage (500 × 14,000 = 7,000,000 = 2,000,000 × 5 × 0.7). Since samples are often relatively easy to acquire compared to the effort necessary for creation of the pooled vector, we recommend erring on the side of caution and collecting replicates sufficient for very high coverage of the library. In our experience, too few sample replicates, both biological and technical, is a common source of noise in MPRA data. Upon collecting total RNA, the reporter transcript is reverse transcribed, often adding a unique molecular identifier (UMI) to each molecule in the process. Next, the barcode-containing region is PCR amplified and sequenced. After the removal of PCR duplicates using UMI information [51,52], barcodes are counted and associated with their linked enhancers. After normalizing for sequencing depth, the frequency of each enhancer in the RNA-derived samples can be compared to its frequency in the starting DNA pool. This RNA:DNA ratio serves as an activity measurement that can be compared between enhancers and experimental conditions. While this analysis strategy can be refined in various ways [29,30], the basic framework is simple and accessible to most scientists familiar with transcriptomics data.
In Vivo Applications
A key limitation of many reporter assays is that they are often conducted in cultured cell types that have dubious relevance to in vivo biology. However, progress is rapidly being made in adapting MPRAs to in vivo use. While the first in vivo MPRA was conducted in the mouse liver via the hydrodynamic tail vein injection of plasmid [34], the delivery of MPRA pools to other tissues has been more challenging and has lagged behind. One notable system, the delivery of plasmid pools via the electroporation and culture of explanted newborn mouse retinas, has been used successfully in multiple MPRAs [53][54][55][56], shedding light on photoreceptor gene regulation. However, this in situ approach has limited applicability to tissues beyond the retina, and thus, the development of viral vectors for MPRAs is an important front in the effort to adapt MPRAs to diverse tissues. To date, much of the relevant activity has been within the neuroscience field. In 2016, an MPRA library was successfully packaged into an AAV9 variant capable of high-efficiency neural transduction, and the activities of approximately 3500 cis-regulatory elements were assessed within the mouse cerebral cortex [45]. This study demonstrated that reporter RNA and DNA could be recovered from transduced tissue, and it allowed for insights into the sequence features that mediate cis-regulatory element activity in the mouse cortex. In 2019, this work was followed up by a study that combined an MPRA with single-cell RNA-sequencing, which allowed for the resolution of enhancer activities across the different cell types that were transduced by the injection of AAV into the mouse cortex [57]. More recently, a group screened a library of candidate brain enhancers and regions associated with GWAS studies of epilepsy and schizophrenia using an AAV MPRA vector injected into the postnatal mouse forebrain.
Many putative regulatory elements were validated as forebrain enhancers, including a Cacna1c intronic region previously associated with neuropsychiatric disorders [58]. Despite the existence of AAV vectors with strong cardiomyocyte tropism, the cardiac adoption of MPRAs has lagged. To date, only two cardiac studies have been published, both from the laboratory of Dr. William Pu. The first, our work with genomic regions bound by multiple core cardiac transcription factors [42], featured a library of 2700 regions, each 400 bp in length, generated by annealing overlapping oligos. Regions were cloned into the 3′ UTR of a scAAV Mlc2v promoter-containing reporter vector (Addgene #182649), which was packaged into the AAV9 capsid and delivered to newborn mice. A week after injection, the hearts were collected and enhancer activities were measured. On average, enhancers bound by multiple core cardiac transcription factors displayed robust activity, while negative control regions corresponding to putative mouse embryonic stem cell enhancers did not, thus confirming the importance of core cardiac transcription factor binding for transactivation in cardiomyocytes. The second cardiac study expanded on these results by using a similar ChIP-seq-based approach to identify candidate enhancers, including a subset with atrial- or ventricular-specific occupancy [44]. Chamber-specific and non-specific candidates were then assayed by MPRA in the atria and ventricles. This strategy featured 2943 candidate enhancers and 954 negative control regions, each 400 bp in length, constructed via the pooled synthesis of self-priming oligo pairs. Candidates were cloned into a STARR-seq-style AAV-MPRA vector and assayed. Of the candidates, 1092 had activity in either the ventricles or atria, with 229 enhancers showing significant chamber specificity. From the active enhancers, subsets of chamber-specific and non-specific enhancers were selected for dense mutagenesis. This was achieved by the pooled synthesis of a series of shorter sequences "tiled" at 5 bp overlapping intervals across the larger enhancer. For each tile, a wildtype version and a mutant version were synthesized, with the mutant having a deletion of the central 5 bp. Thus, by assaying these wildtype and mutant tiles in both the atria and the ventricles, specific motifs conferring activity and chamber specificity were discovered. One such motif, ERRα/γ, was shown to be necessary for ventricle-specific activity in a subset of enhancers, with subsequent studies showing that ERRα/γ double knockout in cardiomyocytes results in the loss of ventricular identity. By rapidly screening large numbers of candidates and finely dissecting those with chamber-specific activity, this study effectively demonstrated the utility and versatility of in vivo MPRAs.
Discussion and Perspectives on Future Directions
While the studies mentioned above demonstrate the feasibility and value of using AAV to deliver MPRA libraries, this approach remains in its infancy. In the near future, we expect a surge of studies spanning diverse organ systems, with the heart being well represented. We anticipate that the future of cardiac genomics will include a much more comprehensive MPRA-based characterization of enhancers, including measurements of activity across development and during disease. As enhancers of interest are identified, high-resolution dissection will identify the molecular mechanisms underlying their activity profiles.
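As a rough illustration of the tiling strategy used in the second cardiac study, the sketch below generates wildtype and mutant tile pairs. The 5 bp step and the central 5 bp deletion follow the description above, while the tile length and the example sequence are assumptions made for illustration.

```python
# Illustrative sketch of a tiled deletion scan: for each tile stepped across an
# enhancer, emit the wildtype tile and a mutant tile with its central 5 bp
# deleted. The 5 bp step and 5 bp deletion follow the design described in the
# text; the tile length and example sequence are assumptions.
def tile_deletion_scan(enhancer_seq, tile_len=50, step=5, del_width=5):
    tiles = []
    for start in range(0, len(enhancer_seq) - tile_len + 1, step):
        wt = enhancer_seq[start:start + tile_len]
        mid = tile_len // 2
        mut = wt[:mid - del_width // 2] + wt[mid + (del_width - del_width // 2):]
        tiles.append({"start": start, "wildtype": wt, "mutant": mut})
    return tiles

example_enhancer = "ACGT" * 100   # stand-in for a 400 bp candidate enhancer
pairs = tile_deletion_scan(example_enhancer)
print(len(pairs), "wildtype/mutant tile pairs")   # 71 pairs for a 400 bp input
```

Comparing the activity of each mutant tile to its wildtype counterpart then localizes the positions, and ultimately the motifs, required for activity.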
Traditionally, enhancer dissection has been achieved using reporter assays with truncated versions of the enhancer, while the MPRA era has made saturating mutagenesis possible. In this approach, all parts of the enhancer are independently mutagenized and tested, allowing for a more comprehensive analysis [32,[59][60][61]. Mutagenesis can be achieved through error-prone PCR, or as mentioned above, synthesis of a series of enhancers in which each features a different mutation or group of mutations. In either case, mutant enhancers that show a loss of activity relative to the wildtype element can be analyzed to identify the crucial motifs, giving important clues as to which protein regulators are responsible for transactivation. Alternatively, the use of bioinformatics to identify candidate motifs for targeted mutation allows for the collection of mechanistic data while greatly reducing the necessary size of the regulatory element library. This targeted approach has been used effectively to analyze the impact of specific motif families on enhancer activity [62], and we expect this to be a fruitful strategy for characterizing the roles played by various cardiac transcription factors and their corresponding motifs. While MPRAs are a powerful tool, they have several limitations. First, it can be challenging to directly link MPRA data to gene expression. Enhancers can be located long distances away from the genes they regulate, and thus, integrated analysis with high-resolution chromatin interaction maps may be required to make these regulatory connections. Furthermore, an enhancer that is sufficient to activate the expression of a reporter gene may not be necessary for gene expression in the native genomic context. Indeed, enhancer redundancy is a commonly observed adaptation that ensures robust and resilient gene expression [63][64][65]. Similarly, specific motifs that are necessary for enhancer activity may not be necessary for gene expression. While these issues can be addressed using the traditional approach of generating a mouse line with a targeted mutation, the growing popularity of CRISPR-mediated somatic mutagenesis in the heart potentially offers an expedient alternative [66][67][68]. In such a system, Cas9-expressing mice are transduced with a gRNA targeting a motif of interest. The resulting targeted double strand breaks are repaired by error-prone non-homologous end joining, which typically produces a small mutation sufficient to disrupt a regulatory motif. After confirmation of successful mutagenesis, the expression of the associated gene can be assessed. While this approach may not be appropriate for all motifs, such as those that lack a suitable gRNA PAM sequence, this will allow for the targeting of most motifs in a scalable manner. A second limitation of MPRAs is the inaccessibility of many tissues and developmental timepoints. While postnatal cardiomyocytes are easily transduced by AAV9, non-myocyte cardiac populations are challenging. However, a number of groups are pursuing large-scale AAV capsid engineering [69][70][71], and many variants with increased tropism for previously poorly transduced cell types have already been developed [72][73][74]. In addition to the challenges of transducing diverse cell types, many of the most dynamic gene regulatory events within the heart take place during embryonic development. 
While AAV and other popular viral vectors administered during pregnancy typically do not cross the placenta or fetal membranes [75], the direct fetal injection of AAV has shown variable levels of success in transducing a variety of embryonic tissues [76][77][78][79][80], including high-efficiency transduction of the myocardium by AAV9 during late gestation [81]. While the early stages of heart development may be inaccessible for the foreseeable future, we expect that the improvement of viral vectors and delivery protocols will continue to accelerate, eventually allowing access to most cardiac cell populations at a range of timepoints, including mid to late gestation. A third limitation of MPRAs, related to AAV vectors, is the uncertain chromatin state of the reporter vector. After cellular import, AAV genomes are converted from single- to double-stranded DNA. AAV genomes then persist as circularized monomeric or concatemeric extrachromosomal episomes, which acquire chromatin properties that include a typical nucleosomal pattern [82,83]. Nevertheless, it is not clear if vector-derived chromatin receives the full set of typical modifications, raising the possibility that episomal vectors may not be able to fully recapitulate the activity profiles of some enhancers. In comparison, lentiviral vectors integrate into the host genome and can be marked by the full set of epigenetic modifications; however, integration position effects will introduce variability to the assay results. One potential improvement on current approaches is to employ site-specific insertion of the reporter vector into the host genome. While this strategy has been successfully executed for individual enhancers in genetically modified mice [84], an equivalent MPRA has not yet been possible. However, developments in genome editing will likely change this in the near future. Recently, we showed that AAV vectors carrying donor templates can facilitate the CRISPR-mediated precise integration of transgenes within the genomes of postnatal mouse cardiomyocytes via homology-directed repair [81]. A similar approach could be used to precisely insert a library of cis-regulatory elements at a target locus in vivo. We eagerly look forward to such developments, which may improve the accuracy, reproducibility, and dynamic range of MPRAs. In summary, MPRAs are a powerful tool for the high-throughput assessment of cis-regulatory element activity, and the adaptation of this tool for use in vivo is particularly exciting, given the time and budgetary constraints of traditional methods of enhancer evaluation. MPRAs have a range of applications, including the identification of novel cis-regulatory elements, dissection of known elements, characterization of element activity across development or during disease, and characterization of disease-associated variants. MPRAs can be deployed in a variety of configurations with varying levels of complexity, and we anticipate that MPRAs will soon be used in combination with other functional genomic techniques to systematically characterize the transcriptional networks that govern cardiovascular gene expression. Conflicts of Interest: The authors declare no conflict of interest.
Tunable Vibrational Band Gaps in One-Dimensional Diatomic Granular Crystals with Three-Particle Unit Cells We investigate the tunable vibration filtering properties of one-dimensional diatomic granular crystals composed of arrays of stainless steel spheres and cylinders interacting via Hertzian contact. The arrays consist of periodically repeated three-particle unit cells (sphere-cylinder-sphere) in which the length of the cylinder is varied systematically. We apply static compression to linearize the dynamic response of the crystals and characterize their linear frequency spectrum. We find good agreement between theoretical dispersion relation analysis (for infinite systems), state-space analysis (for finite systems), and experiments. We report the observation of up to three distinct pass bands and two finite band gaps and show their tunability for variations in cylinder length and static compression.
I. INTRODUCTION The presence of band gaps, a characteristic of wave propagation in periodic structures, has been studied in a wide array of settings involving phononic crystals, photonics, and plasmonics [1][2][3][4][5]. Materials exhibiting band gaps are of particular interest as they forbid and allow the propagation of waves in selected frequency ranges (pass and stop bands), and in the case of elastic wave propagation (in composites or multilayered structures) have previously been proposed for use in acoustic filters, vibration isolation applications, and rectification of acoustic energy flux [6][7][8][9]. Chains composed of elastic particles in close contact with each other, or "granular crystals", have gained much attention with respect to elastic wave propagation in nonlinear media. The nonlinearity in granular crystals results from the Hertzian contact between two elastic spherical (or spherical and cylindrical) particles in compression and from a zero tensile strength. The contact stiffness is defined by the geometry and material properties of the particles in contact 10 . In this type of system the level of nonlinearity present can be controlled by the amount of static compression applied to the chain, resulting in a dynamic response which encompasses the linear, weakly nonlinear, and strongly nonlinear regimes 11,12 . This simple means of controlling their dynamic response has made granular crystals a perfect test bed for the study of nonlinear phenomena, including the emergence of coherent structures such as solitary waves 11,13 , discrete breathers 14,15 , shock waves 16 , and linear/nonlinear defect modes 17,18 . Additionally, granular crystals have been shown to be useful in application to engineering endeavors including shock and energy absorbing layers 12,[19][20][21] , actuating devices 22 , acoustic lenses 23 , and sound scramblers 24,25 . Previous studies involving statically compressed granular crystals composed of one-dimensional (1D) periodic (monoatomic, and diatomic with a two-particle unit cell) arrays of glued 26 , welded 27 , and elastically compressed spherical particles 14,[29][30][31] have been shown to exhibit tunable frequency vibrational band gaps. In this manuscript we study statically compressed 1D diatomic granular crystals composed of periodic arrays of stainless steel sphere-cylinder-sphere unit cells. We employ theoretical models to estimate the dispersion relation of the crystals, we numerically validate their dynamic response using state-space analysis, and we verify experimentally the crystals' acoustic transmission spectra.
For such configurations we experimentally report the presence of a third distinct pass band and a second finite band gap. We show tunability and customization of the response for variation of the cylinder length and static compression.
II. EXPERIMENTAL SETUP We assemble five different 1D diatomic granular crystals composed of three-particle, sphere-cylinder-sphere, repeating unit cells as shown in Fig. 1(a). The chains are 21 particles (7 unit cells) long. The particles (spheres and cylinders) are made from 440C stainless steel, with radius R = 9.53 mm, elastic modulus E = 200 GPa, and Poisson's ratio ν = 0.3 32 . Each of the five chains is assembled with cylinders of a different length, L = [9.4, 12.5, 15.8, 18.7, 21.9] mm. The mass of the spherical particles is measured to be m = 27.8 g and the mass of the cylindrical particles is measured to be M = [20.5, 27.3, 34.1, 40.7, 47.8] g for each of the corresponding cylinder lengths. We align the spheres and cylinders in a horizontal 1D configuration using a containment structure of four polycarbonate rods (12.7 mm diameter). We hold the polycarbonate rods in place with polycarbonate guide plates spaced at intervals of 1 unit cell. We apply low-amplitude broadband noise to the granular crystals using a piezoelectric actuator mounted on a steel cube of height 88.9 mm which is fixed to the table. We visualize the evolution of the force-time history of the propagating excitations using a calibrated dynamic force sensor. The force sensor is composed of a piezoelectric disk embedded with epoxy inside two halves of an R = 9.53 mm, 316 stainless steel sphere (of elastic modulus 193 GPa, and a Poisson ratio of 0.3 32 ). The sensor is constructed so as to approximate the mass, shape, and contact properties of the spherical particles in the rest of the crystal 12,24,25,33 . The assembled force sensor is calibrated against a commercial dynamic force sensor, and has a measured total mass and resonant frequency of 28.0 g and 80 kHz, respectively. We insert the dynamic force sensor in place of the last particle, located at the opposite end of the crystal from the actuator. We condition its output with a 30 kHz cutoff eight-pole Butterworth low-pass filter and voltage amplifier. At the opposite end of the crystal with respect to the piezoelectric actuator, we apply a static compressive force, F 0 , using a soft (compared to the contact stiffness of the particles) stainless steel linear compression spring (stiffness 1.24 kN/m). In this case we can approximate this boundary as a free boundary. The static compressive force applied to the chain is adjusted by positioning, and fixing to the table, a movable steel cube of height 76.2 mm so that the soft linear spring is compressed. The resulting applied static load is measured with a static load cell placed in between the steel cube and the spring.
A. Dispersion Relation We model a 1D diatomic crystal composed of n sphere-cylinder-sphere unit cells (and N particles) as a chain of nonlinear oscillators 11 :
m_l ü_l = α_{l−1,l} [δ_{l−1,l} + u_{l−1} − u_l]_+^p − α_{l,l+1} [δ_{l,l+1} + u_l − u_{l+1}]_+^p ,   (1)
where [Y]_+ denotes the positive part of Y; the bracket takes the value Y if Y > 0, and 0 if Y ≤ 0. This represents the tensionless characteristic of our system: when adjacent particles are not in contact, there is no force between them. The above model assumes that the particles act as point masses. This is valid as long as the frequencies of the applied vibrations are much lower than the frequencies of the natural vibrational modes of the individual particles 28 .
Here, u l is the displacement of the lth particle around the static equilibrium, δ l−1,l is the static overlap between the (l − 1)th and the lth particles, and m l is the mass of the lth particle (where l is the index of the lth particle in the chain counted from the piezoelectric actuator end, and l ∈ {1, · · · , 3n}). As per Hertz's contact law, the coefficients α depend on the geometry and material properties of the adjacent particles and on the exponent p (here p = 3/2) 10 . Here, in the case of the sphere-cylinder-sphere unit cell, we need to account for two different values of the contact coefficients α, corresponding to the sphere-cylinder and the sphere-sphere contacts, where: For this case, it can be seen that A 1 = √ 2A 2 . Furthermore, for Hertzian contacts, under a static load F 0 , we can define the static overlap for the sphere-cylinder contact as δ sphere,cylinder = δ cylinder,sphere = (F 0 /A 1 ) 2/3 , and for the sphere-sphere contact as 10,11 . Considering small amplitude dynamic displacements as compared to the static overlap, one can linearize the equations of motion (Eq. 1). For the studied sphere-cylinder-sphere unit cell the particles' linearized equations of motion are: where j is the number of the jth unit cell (j ∈ {1, · · · , n}), m is the mass of a spherical particle, M is the mass of a cylindrical particle, is the linearized stiffness between a spherical and cylindrical particle, and is the linearized stiffness between two spherical particles. The dispersion relation for a diatomic (two particle unit cell) granular crystal is known to contain two branches (acoustic and optical ) 30 . Here we use a similar procedure to calculate the dispersion relation for a diatomic crystal with a three particle unit cell. We substitute the following traveling wave solutions where k is the wave number, ω is the angular frequency, and a = L + 4R − 2δ sphere,cylinder − δ sphere,sphere is the equilibrium length of the sphere-cylinder-sphere unit cell: into Eqs. (4). U, V , and W are the wave amplitudes, and are constructed complex so as to contain both the amplitude and phase difference for each particle within the unit cell. Solving for a nontrivial solution we obtain the following dispersion relation: In Fig. 2 (a), we plot the dispersion relation (Eq. 6) for the previously described spherecylinder-sphere unit cell granular crystal with cylinder length L = 12.5 mm (M = 27.3 g) subject to an F 0 = 20 N static load. Three bands of solutions (or propagating frequencies) can be seen; the lowest in frequency being the acoustic band, followed by lower and upper optical bands. Frequencies in between these bands are said to lie in a band gap (or forbidden band). Waves at these frequencies are evanescent, decay exponentially, and cannot propagate throughout the crystal 1 . If we solve the dispersion relation, Eq. (6), for when k = π a and k = 0 we obtain the following cutoff frequencies: f c,1 , f c,2 , and f c,3 correspond to k = 0 and f c,4 , f c,5 , and f c,6 to k = π a . In Fig. 2 (a), we label the six cutoff frequencies (Eqs. 7) for the previously described granular crystal with cylinder length L = 12.5 mm (M = 27.3 g) subject to a F 0 = 20 N static load. From Eqs. 7, it can be seen that the cutoff frequencies are tunable through the variation of particle masses m and M, and the linearized stiffnesses β 1 and β 2 (thus tunable with changes in geometry, and static compression F 0 ). In Fig. 2(b) we plot the cutoff frequencies in Eqs. 
(7) as a function of cylinder length for fixed F 0 = 20 N static compression; for particular combinations of the masses and linearized stiffnesses (m, M, β 1 , and β 2 ), some of these cutoff frequencies intersect. Notice, however, that aside from these special parameter values where the above intersections occur, the spectrum preserves the three pass bands with two associated finite band gaps between them.
B. State-space approach In addition to the dispersion relation previously calculated for an infinite system, we study the finite linearized system corresponding to our experimental setup as shown in Fig. 1(b). We model the actuator boundary of our system as a fixed 440C steel wall. We model the other end of the chain as a free boundary, as the stiffness of the spring used for static compression is much less than the characteristic stiffness of the particles in contact. The linearized equations of motion for the finite system are the same as Eqs. (4), except the equations for the first and last particles, which are given by the following expressions: where F 1 is the force applied to the first particle by the actuator. Next, we apply the state-space approach, which can be written as 34 : where vectors x, u, and y are the state, input, and output vectors, respectively. Matrices A, B, C, and D are called state, input, output, and direct transmission matrices, respectively. We choose as an input to the system the force F 1 , i.e., u = F 1 , and as an output, the averaged force of the two contacts of the last particle (which is analogous to what is measured by the embedded dynamic force sensor in our experimental setup) 12,24,25,33 . Thus, for the linear system of Fig. 1(b), we obtain the corresponding matrices A, B, and C, with D = 0; here 0 denotes a zero matrix and I the identity matrix (both of size N × N). We use the formulation in Eqs. (9) with MATLAB's (R2008b) bode function to compute the Bode diagram of the frequency response for the experimental configurations described. We truncate the visualization in Fig. 3 below −40 dB and above 20 dB as a visual aid to maintain clarity of the frequency region of interest. This resembles experimental conditions, as the noise floor of our measurements is approximately −38 dB (as can be seen in Fig. 4) and the presence of dissipation in our experiments reduces the sharpness of the resonant peaks in contrast to those predicted by the state-space analysis. Attenuating and propagating frequency regions for this formulation match well with the cutoff frequencies of the infinite system (see Eqs. (7)), denoted by the solid lines plotted in Fig. 3. The high-amplitude (bright) peaks correspond to the eigenfrequencies of the system, the modes of which are spatially extended. However, for certain cylinder lengths, we also observe an eigenfrequency located in the second gap of the linear spectrum (denoted by an arrow in Fig. 3(a)). These modes result from the break in periodicity due to the presence of the actuator "wall" (acting like a defect in the system). In our setup, it can be seen in Fig. 1(b) that the first particle (which is spherical) is coupled to both its nearest neighbors via springs characterized by spherical-planar contact (β 1 ). This is unique within the chain and forms a type of locally supported defect mode. When the frequency of this mode lies within a band gap, the mode becomes spatially localized around the first particle and its amplitude decays exponentially into the chain. Furthermore, as our chains are relatively short and the gap that the localized modes occupy relatively narrow (in frequency), the spatial profile is found to be very similar to that of the extended modes.
This suggests that it may be experimentally difficult to differentiate these modes from their extended counterparts.
IV. EXPERIMENTAL LINEAR SPECTRUM We experimentally characterize the linear spectrum of the previously described diatomic chains with sphere-cylinder-sphere unit cells for varied cylinder length and static load. We apply a low-amplitude (approximately 200 mN peak) bandwidth-limited (3–15 kHz) noise excitation with the piezoelectric actuator. We measure the dynamic force using a sensor embedded in the last particle of the granular crystal as shown in Fig. 1(b); the resulting experimentally determined transfer functions are shown in Fig. 4. We compare the experimentally determined A 1,exp and A 2,exp to the theoretically determined A 1 and A 2 in Table I (error ranges indicate the standard deviation resulting from the measurements at the five different static loads). For example, the local radius of curvature associated with surface roughness could result in the material behaving more stiffly 36 . Deviations from Hertzian behavior could potentially be caused by the dynamic loading conditions, non-Hookean elastic dynamics (due to nonlinear elasticity or plasticity), or dissipative mechanisms (viscoelasticity, solid friction), and could result in a shift in the exponent p or in the effective contact coefficient A [36][37][38] . We also observe that the contact coefficient A between the cylindrical and spherical particles has the larger deviation from theory. This deviation could be attributed mainly to the cylindrical particles, due to characteristics not shared by the spherical particles. Such characteristics could include surface roughness particular to the manufacturing process of the cylindrical particles, or plastic deformation occurring closer to the surface than in the spherical particles. In Fig. 5, we plot the experimentally determined PSD transfer functions for the five previously described diatomic (three-particle unit cell) chains with varied cylinder length for fixed F 0 = 20 N static compression (Fig. 5(a)), and with static compression F 0 = [20, 25, 30, 35, 40] N for fixed cylinder length L = 12.5 mm (M = 27.3 g) (Fig. 5(b)). We plot with solid white lines the cutoff frequencies from the dispersion relation calculated using the experimentally determined Hertz contact coefficients A 1,exp and A 2,exp . We observe good agreement between the semi-analytically derived cutoffs (i.e., from the theoretical dispersion relation but using A 1,exp and A 2,exp ) and the experimental spectra. By comparing Fig. 5 to Fig. 3 we observe good qualitative agreement between the numerical (state-space) and experimental spectra. The demonstrated attenuation of the elastic wave propagation in frequency regions lying within the band gaps of the granular crystals shows that such systems have potential for use in a wide array of vibration filtering applications. Furthermore, the tunability displayed (achievable from material selection, shape, size, periodicity, and application of static compression) offers significant potential for attenuating a wide spectrum of undesired frequencies.
V. CONCLUSIONS In this work, we describe the tunable vibration filtering properties of a 1D granular crystal composed of periodic arrays of three-particle unit cells. The unit cells are assembled with elastic beads and cylinders that interact via Hertzian contact. Static compression is applied to linearize the dynamics of the particle interactions and to tune the frequency ranges supported by the crystal.
We measure the transfer functions of the crystals using state-space analysis and experiments, and we compare the results with the corresponding theoretical dispersion relations. Up to three distinct pass bands and three (two finite) band gaps are shown to exist for selected particle configurations. The tunability of the band edges in the crystal's dispersion relation is demonstrated by varying the applied static load and the cylinder length. In the present work, we restrict our considerations to the study of near linear, small amplitude excitations. A natural extension of this work would involve the examination of nonlinear excitations within the bandgaps of such granular chains 14 . In particular, it would be relevant to compare the properties of localized nonlinear waveforms in different gaps of the linear spectrum. Such studies will be reported in future publications.
Message From Editor-in-Chief
I presented a proposal to move the TFVI to a technical committee (TC) and change the TC name from Virtual Intelligence to Computational Intelligence (CI). The proposal was unanimously accepted by the TAB meeting, and formally approved by the IEEE Computer Society Board of Governors in July 2002. Since the approval of the TCCI, I have formed an Executive Committee to manage TCCI activities with the following members.
• Benjamin W. Wah (University of Illinois, Urbana-Champaign, USA) is the past chair of the TFVI. He will provide advice for the Chair on TCCI awards and interactions with sister societies as well as on other important issues of the TCCI activities.
• Nick J. Cercone (Dalhousie University, Canada) looks after student affairs. He is looking at the possibilities of (1) providing student grants to help defray the cost of attending TCCI conferences, (2) encouraging student poster sessions with a supplement to the TCCI conference proceedings, (3) arranging for industry to meet students, and (4) promoting the interaction between TCCI conference invited speakers and graduate students.
• Gusz Eiben (Vrije Universiteit Amsterdam, The Netherlands) takes care of curriculum issues, including the coverage of CI degree programs at universities. Professor Eiben believes typical CI components would involve evolutionary computing, neurocomputing, fuzzy computing, machine learning and data mining, adaptive and intelligent agents, and self-organizing systems.
• Vipin Kumar (University of Minnesota, USA) deals with publication matters. Our publication activities will involve possible special issues on Computational Intelligence related topics in IEEE Transactions.
• Jiming Liu (Hong Kong Baptist University, China) has been appointed as the Editor for the IEEE Computational Intelligence Bulletin. He has set up an editorial board and a plan for the publication of this Bulletin.
The Bulletin will cover news and announcements of the TCCI activities, feature articles, book reviews, and other Computational Intelligence relevant items. There is a standing invitation to join the TCCI for faculty, students, researchers, and application developers from different Computational Intelligence related areas, such as artificial neural networks, fuzzy logic, evolutionary optimization, rough sets, data mining, Web intelligence, intelligent agent technology, parallel and distributed information processing, and virtual reality. You can visit the IEEE Computer Society's website (http://computer.org/tcsignup/) and fill out the TC membership form there. I believe Computational Intelligence can play a very important role in the IEEE Computer Society's activities, and the Executive Committee will do its best to promote more activities sponsored by the TCCI and advance the TCCI's role both within the IEEE Computer Society and in collaboration with related sister societies. The TCCI has been planning a number of exciting activities, as indicated in the above Executive Committee members' roles, and the participation and contributions of our TCCI members are essential. If you have any suggestions on any of the above activities or on other possible and worthwhile activities, please let me know. I hope you will enjoy reading this Bulletin. If you need any more information about the TCCI, please visit our website at http://www.cs.uvm.edu/∼xwu/tcci/index.shtml
Xindong Wu (University of Vermont, USA), Chair, IEEE Computer Society Technical Committee on Computational Intelligence (TCCI)
Message from the Editor-in-Chief
The 21st century will continue to be computation-centric; social and economic development will be benchmarked by how intelligently the power of computing can be utilized and extended. As the official publication of the IEEE Computer Society Technical Committee on Computational Intelligence, the IEEE Computational Intelligence Bulletin will provide a new gateway for researchers and practitioners to have direct access to the latest information on advanced research, industrial development, and professional activities in the major technical areas of Computational Intelligence, to share ideas and experiences, and to stimulate dialogues on emerging or challenging issues. This first issue of the Bulletin presents excellent coverage of some of the most challenging problems for today's and tomorrow's Computational Intelligence community. The issue contains two feature articles: one on Web-log mining for request prediction and another on context-based problem solving in intelligent agents, an article on the coming Bison project in self-organizing P2P systems, and an R&D profile report on the newly established Cork Constraint Computation Centre. Last but not least, it provides a book review of Blondie24: Playing at the Edge of AI and a highlight of several major upcoming events. This debut issue of the IEEE Computational Intelligence Bulletin is the result of months of collaborative hard work by all the members of the Editorial Board. I would like to express my gratitude for their timely and professional efforts in working closely with contributing authors on the contents and presentations. I hope that you will indeed enjoy reading the IEEE Computational Intelligence Bulletin, and find it informative, insightful, and inspiring.
Sports Action Recognition Based on Image Processing Technology and Analysis of the Development of the Sports Industry Pattern
The current era is an information age, and society is moving ever further into it. Image processing technology is widely used in many fields, and sports action recognition based on image processing is a natural application of it. This article uses a spatial visual feature analysis algorithm to implement such recognition. Implementing the algorithm requires a series of steps, including image collection, feature extraction, and action recognition, realized through texture functions and other related functions. The algorithm can perform image-based sports action recognition at minimal time cost, and it can help athletes train more effectively and, to a certain extent, standardize their movements. China's current sports industry structure is also steadily improving. People's enthusiasm for sports continues to grow, which in turn benefits the development of China's sports industry.
Introduction
As image processing technology [1] matures, there will be more and more things we can do with it, and we can apply it to various sports [2]. In training guidance, this technology can be used to examine the characteristics of specific training actions [3], check whether an action meets the standard, and issue prompts, thereby improving the quality of training. We can use it to analyze the image information collected during sports [4], identify the training actions, and then combine visual-space feature analysis methods to analyze the characteristics of the movements. Through image feature analysis and edge feature detection [5], the main features of sports training images are extracted. Edge contour detection [6] and corner detection realize motion capture in sports training [7], targeting actual training in various sports [8]. Research on the application of capture methods in sports training has received widespread attention. Capturing a specific motion is based on collection and feature analysis: the dynamic characteristics of the training actions are extracted, the three-dimensional image is reconstructed [9], and motion capture is combined with a spatial-region reconstruction method [10] to improve the ability to capture training actions in each sport. A motion recognition method based on image processing [11] is proposed here. First, sports training action images are collected, training actions are captured in Gaussian-blurred [12] affine space [13], and features are extracted. Image processing methods are then used to recognize and optimize sports training actions [14]. Finally, a simulation test [15] and analysis were carried out, and the method was found to be valid. This technology is also valuable for the development of the sports industry in today's society [16], since it can better assist athletes in training. A study of the existing literature shows that, with image processing technology now mature, recognizing and training sports behavior can improve the effect of sports.
However, the accuracy and effect of recognition are not yet ideal, and the guidance of reasonable training behavior is limited. Using image processing technology to recognize sports behavior is the main way to improve the effect of sports in the future. Therefore, image processing technology is also of great significance to the layout of the sports industry structure.
Image Processing Technology Concepts and Methods
Image processing technology uses devices that can take or produce pictures to generate images and then identifies or processes those images. It mainly includes image digitization, image enhancement and restoration, image data encoding, image segmentation, and image recognition. Image processing methods specifically include point processing, group processing, geometric processing, and frame processing. Since the processing target is a pixel, the most basic of these methods is point processing. This method is simple and effective; its main functions are to adjust the brightness of the image, adjust the contrast of the image, invert the brightness of the image, and so on. Because group processing has a wider processing range than point processing, it is also called "area processing" or "block processing." In image recognition, group processing mainly covers the detection and enhancement of image edges, the softening and sharpening of the image, and increasing or reducing image noise. Geometric processing uses operations that change the position and arrangement order of the pixels to achieve effects such as resizing, rotating, or mirroring the image. Frame processing synthesizes one or more images to generate a new image in a specific format; the specific forms include image generation by scalar multiplication, by "logical OR" operations, by "exclusive OR" operations, by addition and subtraction, and by related conditional operations. Image processing software usually has an image composition function, which can combine images according to a variety of specific formulas.
Classification of Image Processing Technology. Image processing technology is generally divided into two categories: analog image processing and digital image processing. Analog image processing includes optical and electronic processing such as photography, remote sensing image processing, and television signal processing. The characteristics of analog image processing are parallel, high-speed processing, which can in principle reach the speed of light. Real-time television is the main example of analog signal processing, handling video at 25 frames per second. Analog image processing has the shortcomings of low precision and low flexibility, and this makes it difficult to realize judgment and nonlinear processing. Digital image processing, by contrast, can also be called computer image processing because it usually relies on real-time computer processing or dedicated hardware.
It also has the advantages of higher precision, richer processing content, better nonlinear processing, and better flexibility: generally speaking, changing the software changes what it can do. The disadvantage is that speed is still an issue, especially for complex tasks. Generally, real-time processing of most still images at ordinary precision requires about 100 MIPS of processing power, and resolution and accuracy are limited to a certain degree; for example, a typical high-accuracy image is 1024 × 1024 × 12 bits in size, and at higher accuracy and resolution the time required for processing increases, as shown in Figure 1. Analog image processing uses lenses to achieve processing, as in photography, remote sensing image processing, and television signal processing. Analog image processing is characterized by high speed and generally real-time processing; theoretically speaking, it can reach the speed of light and can process in parallel. Television pictures are a typical example of analog signal processing, which deals with moving pictures at 25 frames per second. The disadvantage of analog image processing is poor accuracy and flexibility, and it is difficult to provide judgment and nonlinear processing. Digital image processing is generally done by computer or real-time hardware. Its advantages are high processing precision, rich processing content, complex nonlinear processing, and great flexibility; generally speaking, the processing content can be changed by changing the software. Its disadvantage is that processing speed is still a problem, especially for complex processing, and mostly still pictures are processed. If digital images of ordinary accuracy are to be processed in real time, a processing capacity of about 100 MIPS is necessary. In addition, the resolution and precision are still limited: an ordinary-precision image is 512 × 512 × 8 bits, a high-resolution image can reach 2048 × 2048 × 12 bits, and higher precision and resolution significantly increase the processing time.
Sports Action Recognition Based on Image Processing Technology
When performing sports action recognition based on image processing technology, one first needs to collect the athlete's image, extract the motion characteristics, and then use the formulas below to recognize the action. When recognition is completed, a specific action is compared with the standard action to judge whether it is standard, and specific action prompts are then given. The athletes then modify and optimize their actions according to the exercise prompts to achieve better and more efficient training results, as shown in Figure 2.
Image Acquisition. In order to obtain sports action images and perform edge contour detection on the collected high-resolution sports action images, it is first necessary to use multiresolution image scanning technology to collect training action images based on the characteristics of the 3D model. Assume that the function of the sports image is f(x, y) and the background component function of the sports image is g(x, y). We can use the corresponding points of 2D or 3D sports images to perform model matching on g(x, y) and then add Gaussian noise and Gaussian blur features to divide the image model into a 3-by-3 topology structure.
Get the result shown in Figure 3. The 3D image is recreated according to the distribution of Figure 3 and the 3D points corresponding to the 3D model. The distribution of the main features in the image acquisition is n(x, y), and the pixels f(x, y) around the current feature point are considered. Pattern matching is performed on the center pixel of the multiresolution video. Image feature decomposition modeling combined with edge contour features is described by H and n. With the quantitative information of H and n, the stronger the ability to search for sports actions, the closer the reconstructed image will be to f(x, y). On this basis, the structure model obtained by hierarchical registration is as follows: in the above formula, H(x, y) * f(x, y) is the main feature point of sports training action positioning, and the symbol * represents convolution. The motion image features are located in different regions, and the affine invariant moments are obtained, where n(x, y) is the noise interference term. The capture and acquisition of sports training movements are realized through fine registration.
Feature Detection and Extraction. We can perform contour detection on the collected high-resolution sports training images, using the 3D model reconstruction region segmentation method for the sports training actions together with the residual component mixing method, as well as the boundary feature points of the sports training and the sports image. Then, the vector quantization decomposition of the sports training action image is carried out, and the grayscale matching value of the sports training image is given in formula (4), where r and θ are the decomposition coefficients of hierarchical features. When the corresponding variable satisfies n_m(x, y) ∈ {−1, 0, 1}, the gray-scale feature blending mode is used to obtain the edge component of the sports training action image as in formula (5), where r is the corresponding value of the texture function of the sports action image, with range 0 ≤ r ≤ 1. When the variance of the output of the sports image satisfies the expected normal distribution with variance r/2, the gray-boundary feature quantity of the sports image is obtained as in formula (6). Based on the recognition of sports images and the edge contours of sports training actions, the corresponding refinement and optimization recognition are carried out. Combined with the irregular triangulation model, the corresponding visual obstruction model can be realized for sports training actions. The known generating data must form a nondecreasing sequence; through accumulation, the development trend of the accumulated gray quantity can be seen, so that the law contained in the original data is revealed and can be summarized and utilized. The gray feature mixture model takes a certain standard motion action as input and contrasts its edge against its boundary. The regional feature distribution model of irregular triangulation follows. Since the correction scale of each image is different, the recognized internal texture and edge features of these images are rearranged linearly. The image improvement results are as follows: σ can be expressed as the texture distribution of the image, and ∆x represents the pixel value of the image gradient and predicts the local boundary and local area of the sports training image.
t(x) represents the image of the statistical distribution value of the sports training pixel. e specific feature detection result is expressed as In the nonlinear change mode of sports, solve the characteristic trajectory equation of sports: Performing motion capture and feature recognition of sports in Gaussian blur, the optimized trajectory equation can be obtained as Among them, (x, y) represents the distribution of a group of blurred pixels in the image, and the motion capture is performed through the spatial matching of pixel blocks. On the Realization of the Spatial Visual Feature Analysis Algorithm. Relevant image enhancement technology is used to improve the resolution and adaptability of the training action capture in sports, and the computer vision image processing method is used to realize the f(g i ) texture distribution function of the athlete: From the above, the dynamic distribution of the trajectory in the sports training actions can be obtained, and the specific expression is where ϕ (T n ) is obtained by the following formula: ϕ T n � c t Hc + θ T Hθ + ω T Hω. rough the above steps combined with the method of reconstructing the trajectory of its sports movement, the expression is as follows: In equation (15), G new and G old , respectively, represent the gray track components of sports training actions: In formula (16), c is the decomposition formula of the size feature of the image of the sports action and Md(Ci) is the specific information component in Ci. According to the above description, image recognition technology can be used to achieve the optimization of capturing characteristic actions during sports training. According to the aforementioned spatial visual feature analysis algorithm, standard actions can be compared with sports actions in the collected picture information, and areas that are not standard enough can be proposed and then optimized based on this to achieve better training results. Experimental Comparison. According to the information in the above article, we can compare the accuracy rate achieved by this algorithm with other algorithms. Of course, it must be on the premise of the same image quality. If the image quality is different, it will be compared. is comparison will be meaningless. If the quality of the image we collect is high, the accuracy and correctness of the sports feature actions we can recognize in the image will be higher. Different types of pictures are compared using the same algorithm. ere are some differences between different algorithms, which will cause the accuracy of the results of the final algorithm to be different. We can use the feature capture algorithm to achieve sports action recognition and do not use the algorithm or use the comparison chart of the accuracy of sports action recognition performed by other algorithms as shown in Figure 4. In the entire spatial visual feature analysis algorithm, several indicators that can be used as evaluation indicators for this algorithm: accuracy, response speed, feasibility, clarity, and ambiguity. Based on these evaluation indicators, when we want to use this technology to complete a certain related work, we can use these indicators for comparison to select the more important indicators for ourselves for evaluation and finally choose the algorithm, fundamentally, saving working time in Figure 5. 
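Before turning to those comparisons, the sketch below shows one simple way such an accuracy comparison can be computed: each candidate algorithm's predicted action labels are scored against ground-truth labels. The labels and error rates are synthetic stand-ins, not the paper's data.

```python
# Illustrative accuracy comparison of several recognizers on the same labeled clips.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.integers(0, 5, size=200)                    # 200 clips, 5 action classes

def simulate_predictions(error_rate):
    # flip a fraction of labels to a different (wrong) class
    flip = rng.random(truth.size) < error_rate
    wrong = (truth + rng.integers(1, 5, truth.size)) % 5
    return np.where(flip, wrong, truth)

algorithms = {"spatial visual feature": 0.07, "baseline A": 0.18, "baseline B": 0.25}
for name, err in algorithms.items():
    pred = simulate_predictions(err)
    print(f"{name}: accuracy {np.mean(pred == truth):.1%}")
```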
Comparing the recognition rate of the spatial vision feature algorithm in this article with the mainstream algorithms at home and abroad on the MSRGcsturc3D database, it can be seen that the spatial vision feature analysis algorithm in this article uses the texture distribution function and a series of other functions, which greatly improves the recognition rate of image acquisition, and the improvement of recognition rate also improves the efficiency of image recognition to a certain extent in order to achieve the purpose of reducing the running time of the algorithm as shown in Table 1. When collecting images, the body parts covered in the images are different, and the experimental results we can get will be slightly different. For example, when recognizing legs and arms, the key feature points of the image we are targeting are different. Differently, in this case, we should first understand the key features of the corresponding arm or leg and then perform specific feature positioning based on these features to improve the accuracy of identifying a specific part rate and achieve the purpose of saving time. When implementing a specific sports action recognition algorithm based on image processing technology, the specific information in the image may be different, and the feature recognition of the specific action may also be different. For the same image, it may be that the legs in the image are recognized, and it is also possible to recognize the arms or faces in the image and other parts. For these different parts to be identified, the amount of feature that needs to be extracted may also be different as shown in Figure 6. In the entire sports action recognition algorithm based on image processing technology, the number of times each body part is extracted is also very different. Moreover, the number of times of extraction will be different because of the different objects to be extracted. For athletes, every part of the body will be extracted for comparison. For ordinary people, in most cases, only the legs or arms need to be extracted and then compared, as shown in Figures 7 and 8. Development and Analysis of Sports Industry Pattern Nowadays, the sports industry is the carrier of China's economic development. It has the characteristics that other industries also have; that is, it pays attention to the benefits of the market and the economic benefits of today. Differentiating characteristics: the products of this industry have the function of improving the physical fitness of residents, promoting the spirit of famous people, and realizing the overall development of individuals and the progress of social civilization. According to the new statistical classification of sports industry, China's sports industry can be divided into sports product and related product manufacturing, sports service industry, and stadium construction industry. Among them, the sports service industry includes several subcategories such as sports management activities, sports competitions and performance activities, sports fitness and entertainment activities, and sports facilities and facility management as shown in Figure 9. For the development of the sports industry, the Chinese government is also giving full support, and all relevant units have also issued favorable documents in terms of funds and policies. is makes the growth of the total scale of China's sports industry steadily increasing as shown in Figure 10. 
In addition, the structure of China's sports industry is gradually optimized, and China's sports industry is dominated by the manufacturing of sports equipment and related products. In recent years, China has strongly supported the development of the sports industry. In 2018, the proportion of scale reached 47.9%, the proportion of sports equipment and related industries in the entire sports industry gradually declined, reaching 49.7% in 2018, and the gap has been increasing year by year. It can also be seen from the industrial added value that since 2016, the proportion of the added value of China's sports service industry has surpassed that of the sports goods manufacturing industry and related industries. In 2018, the added value of the sports service industry was approximately twice that of sports goods and related industrial production as shown in Figures 11 and 12. In the entire Chinese sports industry, the proportion of various industries in the entire sports industry is different. As far as today is concerned, the sports service industry accounts for the largest proportion of the entire sports industry, followed by sports, the manufacturing of supplies and related products, then the others, and finally the construction of sports venues, as shown in Figure 13. Based on the rapid development of the entire sports market, and because of the commercialization of today's events, the scale of the relevant sports event market is also growing rapidly. However, due to the lack of traffic, this growth rate is not very fast. In general, the entire market still has long-term investment value, as shown in Figure 14. Consumption in the sports market in the current era has also led to the rapid growth of per capita sports Scientific Programming consumption. In 2018, the scale of China's sports market had grown to nearly one trillion yuan. By 2020, driven by the rapid growth of per capita sports consumption expenditure, China's sports consumption market is expected to reach 1.5 trillion yuan. e mainstream sports market in China has grown from 593 yuan in 2011 to 2,264 yuan. It is expected to reach 3448 yuan in 2018, with an annual compound interest rate of 19.24% in the past ten years, as shown in Figure 15. Conclusion In today's era, the rapid development of information makes our forms of life or work become more diversified. Take the aspect of sports in our article. If the technology of sports action recognition based on image processing technology can be used properly, this technology can help athletes assist in training. Of course, this technology can also be used in our ordinary people's lives. For us, we only need to apply this technology to fitness. It can be used to check whether our exercises are standard during fitness to achieve better training results. It can also prevent us from hurting ourselves during exercise. Furthermore, the development of the current sports industry is also becoming more and more mature, which will make the existence of this technology more necessary and meaningful. According to relevant information, China's future urban sports industry must take the road of high-quality development. e urban development in the development of the sports industry must focus on the combination of software and hardware. At the same time as the hardware construction, it must also focus on improving the quality of services and pay more attention to the needs of local residents. To develop the sports industry, we must clearly recognize the internal and external relationships. 
Looking inwardly, the sports industry is essentially a livelihood undertaking and a happiness project; in the final analysis, it must serve the local people. At the same time, the development of the sports industry is not an overnight event but a long-term process, and the key is to cultivate healthy living and exercise habits. The future of China lies in its young people, so cultivating their exercise habits is the most important task. Helping young people fall in love with sports, and eventually move toward a healthy life of sports for all, will fundamentally meet the needs of the development of the sports industry. Data Availability The experimental data used to support the findings of this study are available from the corresponding author upon request.
2021-12-26T16:05:48.482Z
2021-12-24T00:00:00.000
{ "year": 2021, "sha1": "e124c5c75a204921ae242e47d9ef21a11e128f45", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/sp/2021/4815097.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4738408580f956c817c60467f817a2f09c9a5932", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
249058037
pes2o/s2orc
v3-fos-license
A Comparative Study between Managed Aquifer Recharge and Other Community Water Supply Options in Coastal Bangladesh The acute scarcity of safe water exists in disaster-prone coastal Bangladesh due to the occurrences of brackish or saline and arsenic contaminated groundwater, the salinization of freshwater ponds by inundation during storm surges, and brackish water aquaculture. Millions of people living there mainly depend on pond water and rainwater harvesting system and face severe difficulties to collect freshwater, particularly during the dry season. Therefore, various community water supply technologies, e.g., RO, SIDKO, RPWS, SkyHydrants etc. have been established to meet their daily needs, though the majority of these technologies fall short of the value of time and effort of water collection and sometimes fail to supply water of desired quality. Managed Aquifer Recharge (MAR) technique, also a community water supply system, was designed to provide safe water by creating underground storage of freshwater where ambient groundwater salinity is reduced by infiltrating rooftop or pond water through wells. Understanding the need to sort out the best water supply option, a comparative study has been conducted between MAR and other water supply technologies, and among all of them the MAR has been demonstrated as a low cost, reliable, sustainable, and durable option for providing safe drinking water to the community round the year. Introduction The coastal communities of southwestern Bangladesh, particularly along the northern fringe of the Sundarban mangrove forest in Khulna, Satkhira and Bagerhat districts, have been confronting critical shortages of freshwater for many years (Karim and Mimura, 2008). The major reasons behind this misery are naturally occurring salinity and arsenic in groundwater Ahmed et al., 2004), gradual salinization of freshwater ponds by seawater inundation caused by seasonal storm surges (Barker, 2013;Mallick et al., 2011) and widespread man-made transformation of natural land use by brackish-water aquaculture (shrimp and crab) (Ahmed et al., 2009). Households mainly depend on pond water, rainwater harvesting (RWH), and pond sand filters (PSF) for drinking and cooking purposes as the number of shallow-water tube wells are few. However, all are vulnerable to pathogens and reliant on rain which is unevenly distributed annually. Additionally, a major portion of these areas is incompatible for deep tube well development (Hasan, 2012). Therefore, the local government, NGOs and some private funded agencies are engaged to recuperate from this situation. Some community water supply technologies, including Reverse Osmosis (RO), Arsenic Iron Removal Plant (AIRP) -SIDKO, Rural Pipe Water Supply (RPWS), SkyHydrant have been established, and they are being operated to supply freshwater to the community. However, water sourcing patterns, households' preference to water supply options and their economic feasibility suggest that a combination of household and community-based options could be suitable for yearround water supply particularly for drinking purposes. 
An action research program, funded by UNICEF, Bangladesh in collaboration with the Department of Public Health Engineering (DPHE), the Department of Geology, University of Dhaka, Bangladesh and Acacia Water, Netherlands, was initiated in 2010 to test various designs of Managed Aquifer Recharge (MAR) system in coastal Bangladesh for improving groundwater quality and for acquiring knowledge on low cost construction and maintenance, operational methods using locally available materials (Hasan et al., 2018;Barker et al., 2016;Ahmed et al., 2015;Sultana et al., 2015). Initially, the MAR system has been constructed and operated at ---------------------------------------------* Corresponding Author E-mail: nnawrin@du.ac.bd (Nazia Nawrin) 20 sites in three coastal districts of Bangladesh since 2012 to evaluate its performance and applicability. Their promising results led to implement additional 75 sites in 2015 as an alternate water supply technology to expand the access to safe drinking water to the coastal communities (Nawrin et al., 2016;Ahmed et al., 2015). However, these MAR schemes comprised of filtration unit, i.e., double chambered graded sand filtration tank, infiltration unit, i.e., four to six large diameter (12-to22inch) infiltration wells filled with sorted gravel and recovery unit, i.e., a 2-inch diameter well fitted with hand pump. However, the optimum infiltration rate was hardly achieved. Moreover, larger diameter infiltration wells and double chambered graded sand filtration tankmade the construction and maintenance comparatively expensive (Ahmed et al., 2020;Nawrin et al., 2016). In addition, the low infiltration rate took one to two monsoon seasons to make groundwater sufficiently fresh in the shallow aquifer for potable uses. Hence, there was a need to improvise the design for enhancing infiltration rates with reduced costs in order to get a better recovery of fresh and safe groundwater and a modified MAR scheme has been constructed and tested at KDP site in Khashiar Danga village, Mongla, Bagerhat for optimum benefit (Nawrin et al., 2016). Since both existing and modified MAR have already been proven as low cost, safe, year-round water supply technology in coastal Bangladesh (Ahmed et al., 2020;Hasan et al., 2018;Barker et al., 2016;Nawrin et al., 2016;Sultana et al., 2015), it creates an opportunity to study the competency of the MAR system (in this paper the modified MAR system) over the other water supply options in the study area. Keeping in mind the necessity to rank the community water supply technologies and to identify the best water supply option for the disasterprone coastal communities in Bangladesh, a comparative approach has been adopted in this paper. First, each of these technologies is briefly described and then the analysis of collected data about their cost, accessibility, performance, maintenance, and water quality both on source and supply are presented in a systematic manner from which conclusions are drawn. This comparative study can help to enhance local knowledge in the context of water supply technologies, which is significant for fresh and safe water supply planning and management in the saline-prone coastal areas of Bangladesh. The distribution of available community water supply technologies and the modified MAR design "MAR-KDP" site are shown in Figure 1. 
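As a rough sense of scale for such a scheme, the sketch below compares one season of infiltration with the drinking-water demand of the households served; every figure here (recharge period, household count, per-capita demand) is an assumption for illustration, except the 10,000 L/day infiltration capacity reported later for the modified design.

```python
# Back-of-the-envelope comparison of seasonal MAR recharge vs. drinking-water demand.
infiltration_rate = 10_000        # L/day, capacity reported for the modified MAR scheme
monsoon_days = 120                # assumed effective recharge period per year
households = 100                  # assumed beneficiary households
persons_per_household = 5         # assumption
drinking_lpcd = 5                 # assumed drinking/cooking litres per person per day

recharged = infiltration_rate * monsoon_days
demand = households * persons_per_household * drinking_lpcd * 365
print(f"recharge ~{recharged:,} L/yr vs demand ~{demand:,} L/yr (ratio {recharged/demand:.2f})")
```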
Managed Aquifer Recharge (MAR) System MAR is one of the significant adaptation opportunities for developing countries seeking to minimize vulnerability to climate change and hydrological variability (IGRAC, 2016). The artificial recharge of groundwater can be achieved by infiltrating fresh and treated surface water in aquifers and subsequent recovery from wells (Bouwer, 2002), which is technically considered feasible and climate resilient alternatives for storing surplus monsoon run off in the regions of freshwater shortage (CGWB, 2000). Aquifer Storage Transfer and Recovery (ASTR) system is one of the MAR technologies, where either rooftop rain and/or filtered pond water are infiltrated into the shallow brackish aquifer during wet season Maliva and Missimer, 2010;Pyne, 2005). MAR schemes in Bangladesh are constructed mainly using local materials, which significantly reduces the construction and maintenance costs and provides a relatively lowcost option of fresh water supply for the coastal saline areas (Ahmed et al., 2020;Nawrin et al., 2016;. Several previous investigations on the detailed design, construction, and performance of implemented MAR system in the southwestern coastal areas of Bangladesh have been undertaken (Ahmed et al., 2020;Hasan et al., 2018;Barker et al., 2016;Nawrin et al., 2016;Sultana et al., 2015;Sultana et al., 2014;Monim, 2014;Hossain, 2014;Imranuzzaman, 2012). Figure 2 shows the modified and optimized design of the MAR system for enhanced infiltration rate at a comparatively low cost, which comprises three operating units for filtration, infiltration, and abstraction ( Figure 2a) and source water pond ( Figure 2b) at KDP (Khashiar Danga Pond) site in Mongla, Bagerhat district (Nawrin et al., 2016). In this optimized MAR design, an abandoned pond sand filter (PSF) has been modified and used as a filtration unit to remove source water turbidity which incurs negligible cost. Four 4-inch diameter empty recharge wells placed below a small 6x6 ft 2 recharge pit for the direct infiltration of treated water into the underground which significantly enhance infiltration rate. A scheme consisting of total nine observation wells in and around has been constructed in order to monitor water level and water quality. After one or two monsoon seasons of infiltration when the salinity of ambient groundwater reduced to drinkable limit, a number 6 hand pump has been installed fitted with a flow meter for abstracting groundwater (Figure 2a). Other Water Supply Technologies Reverse Osmosis (RO) is a water purification technology capable of removing up to 99% of the dissolved salts (ions), particles, colloids, organics and bacteria from the source water (Puretec, 2012). In the process of RO, water from a pressurized saline solution is separated from the dissolved salts by flowing through a water-permeable membrane, which is very effective in desalination or treating brackish surface and groundwater for both large and small flows applications (Khanzada et al., 2017). Fourteen (14) RO plants have been found operational in three coastal districts ( Figure 3a). Arsenic Iron Removal Plant (AIRP)-SIDKO removes arsenic and iron from groundwater through the sequences of oxidation, adsorption, and filtration processes (Chakraborty et al., 2016). Four (4) community level AIR plants, known as SIDKO in Bangladesh, are found operational ( Figure 3b). 
Rural Pipe Water Supply (RPWS) is an extensive supply of either pond water treated by sand filtration or groundwater direct from the deep tube well via a systematic pipe network. In the disaster-prone areas of Bangladesh, RPWS system tends to reduce sufferings of coastal communities by providing fresh water where either large pond reservoirs are available, or the aquifers are suitable for deep tube well installation. Eight (8) RPWS systems have been found in three districts of the coastal zone ( Figure 3c). Figure 3: (a) RO, (b) SIDKO (AIRP), (c) RPWS, and (d) SkyHydrant plants in three coastal districts of Bangladesh SkyHydrant, which is one of the recent technologies in coastal Bangladesh, can remove pathogens and turbidity from both surface and groundwater sources. It is a low pressure, high volume, ultra-filtration unit. Raw water flows along the length of the hollow fibers before being forced through the walls of the fiber to produce a filtrate free of suspended solids. The filtrate flow rate is controlled manually (Skyhydrant Specification Sheet). Two SkyHydrant have been installed in Satkhira District ( Figure 3d). Materials and Methods To complete a comparative investigation among the water supply options, a questionnaire survey (no. of samples, n=28) on four technological options was conducted to collect a number of information from each technology, including the location (GPS), the date of commission, installation agency, the cost of installation, capacity (L/day), the source of water, distribution type, the number of households covered, operational status, management authority, payment information, payment system, and community contribution and acceptance. The information about the source water quality (n=16) and supply water quality (n=18), e.g., the presence of salinity, arsenic, iron, bacteria, and smell were tested at the field sites. 26 samples of raw (source) water and 28 samples of treated (supply) water were also collected in 500 ml size plastic bottles for chemical analysis for some index water quality parameters. Note that samples of source water from two RPWS sites were not possible to collect as there was no water collection option from the well head. The samples were analyzed in the Geochemistry Laboratory of the Department of Geology, University of Dhaka for anions, i.e., Chloride (Cl -) and Bicarbonate (HCO3 -) and cation, i.e., Sodium (Na + ) and two trace elements, Arsenic (As) and Iron (Fe). At first, a 0.45μm membrane filter was used to remove suspended solids and colloidal substances from the samples. Unacidified samples were analyzed for anions, Cland HCO3 -, by titration method and the concentration of cations, i.e., Na+, Fe (total) and As, were analyzed by Atomic Absorption Spectrometer (AAS) after acidifying the samples using ultra-pure nitric acid (HNO3) to lower the pH to slow down chemical and biological processes and to act as preservatives. The Electric Conductivity (EC) and pH of collected samples were also measured by using the portable EC meter (HANNA, model DIST HI 198300/4) and portable waterproof pH/°C meter (pHep by HANNA, model HI 98127) respectively. Source Water and Supply Water Quality Significant reductions of EC from the source to the supplied water in most of the RO plants have been observed (Figure 4). 
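A minimal sketch of how such paired source/supply measurements can be reduced to removal efficiencies and screened against drinking-water limits is given below; the concentrations are illustrative values, not the study's data, and the limits (Cl- 250 mg/L, As 0.01 mg/L, Fe 0.3 mg/L) are assumed guideline values that should be checked against the applicable national standard.

```python
# Illustrative reduction of paired source/supply concentrations to removal efficiency
# and a simple pass/fail screen against assumed drinking-water limits.
limits = {"Cl": 250.0, "As": 0.01, "Fe": 0.3}   # mg/L, assumed guideline values

sites = {
    "RO-1":   {"Cl": (1200.0, 80.0), "As": (0.020, 0.004), "Fe": (0.90, 0.20)},
    "RPWS-1": {"Cl": (310.0, 300.0), "As": (0.004, 0.004), "Fe": (0.20, 0.20)},
}

for site, params in sites.items():
    for name, (source, supply) in params.items():
        removal = 100.0 * (source - supply) / source
        status = "within limit" if supply <= limits[name] else "exceeds limit"
        print(f"{site} {name}: {removal:5.1f}% removal, supply {supply} mg/L ({status})")
```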
No or little changes on salinity have seen from source to supply in RPWS systems as only sand filter is used to filter pond water to remove turbidity prior to distribution system and low EC of source water has to be ensured before passing it to the filtration unit. The SIDKO plant has no efficiency to reduce high salinity. However, the SkyHydrant plants can reduce EC through the ultra-filtration process if only EC is at lower range in the source water. The EC of the source and supplied water of the MAR site are little higher than the acceptable drinking limit, whereas EC of most of the supplied water are within acceptable range ( Figure 4). Figure 4: Changes of EC in source water and supplied water from different water supply technologies in three coastal districts of Bangladesh (MAR site is marked as red) The significant reduction of Clconcentrations from source water to supplied water have been found in most of the RO and SkyHydrant plants. On the contrary, RPWS and SIDKO systems showed almost no change of concentration of Clin both source and supplied water since these systems are not equipped to reduce salts. Three RPWS sites displayed Clconcentration a little higher than the safe limit in their supplied water ( Figure 5a). Although increased concentrations of Na + in the supplied water was seen in some locations compared to the source water, Na + concentration of the supplied water of all the water supply options are within safe drinking water limit (Figure 5b). Despite both Cland Na + concentrations in source and supplied water of MAR site were observed unchanged but all are within safe drinking water limit (Figure 5a, b). The significant reduction of As concentrations from source water to supplied water have been observed in four RO plants and one SkyHydrant plant (Figure 6a). However, there is an unusual exception found in one RO plant (AOSED-4) that As concentration in supplied water was unexpectedly increased relative to source water after the treatment, which was also supported by the field test. Some chemical additives used in the treatment process may cause this abnormal rise of As and merit attention from the management authority. Figure 5: Changes of (a) Cland (b) Na + in source and supplied water from different water supply technologies (MAR site is marked as red) The noteworthy decreases of Fe concentrations (total) in supplied water have been seen in three RO and all SIDKO AIRP plants ( Figure 6b). Moreover, the As and Fe concentration of the supplied water from all the technologies were found within safe drinking water limits, except one RPWS plant (SHU-22) (Figure 6a, b), which distributes groundwater from deep tube well without any treatment. It is likely that these high arsenic and iron coming from the aquifer that is already as contaminated. Supplied water from the MAR site was arsenic-free, but Fe concentration exceeded the permissible limit compared to the source water ( Figure 6a, b), probably due to the shallow aquifer geochemical reactions. Figure 6: Changes of (a) As and (b) total Fe in source and supplied water from different water supply technologies (MAR site is marked as red) Installation Cost The installation cost of RO ranges from BDT 12,00,000-2,75,00,000 and SIDKO plants range from BDT 5,50,000-2,00,00,000 (Figure 7). On the other hand, the cost of RPWS installation varies from BDT 4,75,000 to 44,00,00,000 depending on their size and capacity of water supply. 
The implementation cost of SkyHydrant technology was BDT 11,40,000, where the community contribution was BDT 75,000. Contrary to other water supply technologies, installation cost of the modified MAR system at KDP site has been calculated as BDT 2,50,000 to 3,00,000 (Figure 7). Source Water, Supply Capacity and Payment In most of the RO plants, groundwater from both deep tube well and shallow tube well are used as source water. Only 3 plants out of 14 use pond and river water after treating through sand filter. The supply capacity of any treatment plant depends on the volume of water it can treat in a day or hour. A majority of the RO plants in the coastal area have the capacity of about 1,000-10,000 L/day. However, two RO plants (RO-MF-14, GMF-29) have high capacity (about 800-2000 L/hour). People pay 0.35 to 2.25 Tk/L to collect water ( Table 1) from most of the RO installation sites either by cash on each collection or monthly payment. In addition, two RO plants supply treated water by vendors and one plant uses piped distribution system. LoCOS-9 RO plant supplies water free of cost. The SIDKO plants mainly use groundwater from shallow tube well as the source water. The SIDKO plant MF-18, which has been installed by WaterAid, has a high capacity to treat water at 50,000 L/day and supply water using a piped distribution system by taking 0.3 Tk/L as per meter reading on monthly basis. The other three SIDKO plants have a capacity of treating water about 10,000 L/day and supplied water to beneficiaries by vendors who pay 0.3-0.5 Tk/L (Table 1). In Bagerhat district, the Government-implemented large RPWS system (AOSED-1) can provide 5,00,000 L/day after filtering the water from three large pond reservoirs. Another large high capacity (3,00,000 L/day) RPWS system in Satkhira District (SHU-28), uses deep groundwater as a source. Two more RPWS systems implemented by the Government (SHU-22) and HYSAWA (LoCOS-8) also use deep groundwater andhave the capacity of 15,000L/day and 58,000 L/day respectively. The other RPWS systems use pond water as a source and are able to supply filtered water of 10,000 to 20,000 L/day. Community usually makes monthly payment for the piped water supply at different rates ranging from Tk.20 to Tk.250 based on the number of households covered (Table 1), e.g., about 1200 households are being served by the RPWS-AOSED-1 plant with a payment of Tk.250/household/month. At one RPWS plant people pay cash at the collection point, whereas two other plants do not charge any payment. SkyHydrant plants supply treated pond water by removing turbidity and bacteria and has a capacity of 10,000 L/day. Individuals pay about 0.5 Tk/L to collect water from the plant (Table 1). In the MAR system, the source is pond water or rooftop rainwater infiltrated into the shallow brackish aquifer through recharge wells after filtration using slow sand filter, and after a certain period of infiltration when groundwater salinity is reduced to drinkable limit, the groundwater is abstracted from the aquifer through recovery well. The supply capacity of MAR has been measured as 10,000 L/day and people pay 20 Tk/household/month (Table 1). Community Perception Community acceptance is a critical parameter for the success of any water supply technology. Communities typically prefer the technology where they can easily get the drinking water with a minimum charge. In addition, people are usually concerned about the other two parameters in the supplied water, i.e., smell and color. 
Some RO plants have bad odour in their source water that is usually removed after treatment. In addition, higher Fe concentration gives water a reddish appearance which sometimes make them unacceptable to the community even they provide good quality water. The SIDKO plants have been seen to lower Fe concentration along with reducing As, that eventually changes the color of water. Abstracted water from the MAR site was sometimes observed to have odour problem, but it can easily be removed by storing the water over a night before drinking. Maintenance Community-based water supply options need regular maintenance. Some RO plants are managed by the community and others by the Government or NGO. The WaterAid-implemented SIDKO plant is maintained by an NGO and the other three by individuals. The RPWS and SkyHydrant systems are mostly maintained by communities. The capacity loss and mechanical loss can happen if consistent maintenance is not provided. Sludge management is one of the most critical and important tasks of RO treatment plants. In the RPWS system, the rate of supply water can be reduced due to pipe damage and clogging problems. These maintenance costs might be higher sometimes and paid by the community people. Though risk of clogging is considered as one of the major issues for the MAR system (Sultana et al., 2014), managing clogging of the open infiltration wells of the modified MAR through backwashing using mechanical pumps has been found easier. The routine replacement of suspended fines deposited on top of the filtration unit can be performed more readily which lowers the maintenance cost relative to the other technologies. Performance and Longevity The performance during operation of any treatment plant is connected to its longevity. Every mechanical system has a specific lifespan. The RO, SIDKO-AIR treatment plant and the SkyHydrant are operated as mechanical systems and therefore, has a particular durability and lifespan, e.g., RO membrane 3-7 years (Johnson, 2006) and SkyHydrant membrane 5-10 years (Skyhydrant Specification Sheet) based on the application, even though maintained routinely and if maintenance tasks are not performed sincerely those might lose viability soon. Efficiency of SIDKO-AIRP can be declined by 10% in 3 years after installation (Sorensen et al., 2015). In contrast, the modified MAR is a sustainable water supply system with a relatively longer lifespan (20 years) if communities have adequate awareness to continue routine maintenance. The filtration unit is a PSF, where the pond water is being brought into the unit by using an electric pump and the filtered water then directed to infiltration unit (openinfiltration wells) through pipe network which 1000-35,000 4000-50,000 10,000-3,00,000 10,000 10,000 Supply Water Quality Cl, As, Fe within acceptable limit Cl, As, Fe within acceptable limit Cl (some), As, Fe within acceptable limit Cl, As, Fe within acceptable limit finally enters into the aquifer by gravity and does not involve any mechanical engagement. The tasks of cleaning and washing of filtration and infiltration units can easily be managed by the community people or caretaker. Moreover, as the MAR scheme is constructed using locally available materials, the filtration or infiltration unit can be repaired if needed with less difficulty. The water source of the MAR system, i.e., the pond can be re-excavated if required and the roof can easily be cleaned for rainwater once a year before the monsoon arrives. 
Overall, the modified MAR system requires a comparatively easy and low-cost maintenance. Discussion RO is a very effective treatment process in coastal areas to reduce the salinity of brackish or saline groundwater and provides the year-round water supply. The reduction of EC, Clconcentration and sometimes Fe concentration have proved the efficiency of the RO systems. However, the installation and maintenance cost of RO treatment plants is relatively high, the supply capacity is slightly lower, and the cost of water is slightly higher compared to the other technologies. The SIDKO-AIRP is preferable for those locations where salinity is within drinkable limit, and Fe and As concentrations are high, as it cannot reduce salinity but can provide Fe-and As-free water throughout the year. However, salinity is one of the severe problems in the coastal areas, thus the SIDKO cannot be an appropriate treatment process for the high saline groundwater areas. Moreover, installation and maintenance costs are relatively high. The RPWS usually supplies drinking water from pond after sand filtration or from deep tube well without any treatment. It provides water to the end user via pipe network and has wide coverage. Communities can get water free of cost or with a small payment. However, their development is confined to the areas of fresh deep groundwater or large pond reservoir which is not widespread in coastal regions. RPWS systems, using large ponds as source water, can face seasonal shortage of water as well as the salinity of pond water may increase during the dry season which cannot be treated by the sand filter. RPWS plants that provide groundwater by pipeline without any treatment may contain salinity, Fe or As. According to the water quality analysis, high As has been found in the source water of one of the RPWS systems which draws attention of water managers. The SkyHydrant is a relatively new technology in the coastal area as an alternative to PSF (pond sand filter), which significantly removes pathogens and turbidity from pond water. It is a single lightweight compact portable unit, fast set up, easy to operate, and filtration process does not require power or chemicals. The entire operations are simple and manual. Although the SkyHydrant is not designed to remove salt, Fe or As from source water, it still can reduce EC of pond water to some extent through the ultra-filtration unit. However, if these contaminants are present at high levels the water may not be suitable for filtering through SkyHydrant. In addition, this water supply technology mainly depends on pond water availability, thus it may face the seasonal problems of source and supply of water. The implementation cost is slightly higher. MAR schemes have been tested and proven successful and are being operated through communities in the remote areas of coastal districts where the other technologies are less capable to function due to high salinity of source water and inaccessibility of transferring machineries. The key aim of the MAR system in coastal areas is to store fresh water in underground aquifers through a sustainable infiltration system in order to reduce the salinity of groundwater and abate the seasonal problem of getting safe drinking water. The MAR technique is able to infiltrate a large volume of treated water (10,000 L/day) into the underground during monsoon. 
As a result, groundwater EC is reduced by mixing of rainwater and/or filtered pond water, and the community can get fresh or much less saline water even at the dry period. Arsenic concentration of groundwater has also been observed to decrease considerably due to high infiltration and mixing of fresh water. Moreover, the MAR system is comparably low-cost technology as it uses the local materials with easy maintenance and is of high capacity and can be accessed with nominal payment. In addition, the construction cost of the filtration unit is very low in MAR compared to other technologies as it utilizes the advantage of abandoned PSFs. It also creates the opportunity to reuse hundreds of abandoned PSF which are widespread in the coastal areas. Moreover, the abstraction is very easy by using a hand pump. Furthermore, it is disaster resilient as the stored fresh underground water remains safe, even when there is inundation due to storm surges, and can be abstracted through recovery wells during emergency. Despite the long list of advantages of the MAR system there are some challenges associated with the site selection, ambient groundwater chemistry as well as operation and maintenance. The site selection of the MAR system largely depends on the local subsurface hydrogeological setting (i.e., clay layer at the top of a confined aquifer, thickness of the clay layer) and ambient groundwater conditions (i.e., brackish groundwater in the target aquifer, salinity of the source water, the concentrations of arsenic and iron in groundwater) which make the technology confined to specific sites. The implementation, operation, and performance also depend on freshwater pond availability, skilled drillers, infiltration rate and community participation. Moreover, the recovery volume of water never exceeds the infiltration volume in order to abstract only freshwater, which confines the water use to drinking purposes. Besides the physical clogging of PSF and infiltration wells, the chemical and biological clogging also possess risks on the operation of the MAR system (Sultana et al., 2014). Sometimes the management of clogging can be time consuming and labor intensive when performed manually. Conclusion There are several newly introduced water supply options located in the coastal districts of Khulna, Satkhira, Bagerhat to mitigate the scarcity of safe drinking water. This comparative study among the MAR and other four water supply technologies has been conducted based on a questionnaire survey on several facts related to both on source water quality and supply water quality, types of source water, water supply capacity, installation cost, water pricing, distribution and payment system, operational status, management authority, community acceptance as well as laboratory analysis of source water and supply water chemistry. These technologies supply either fresh (reduced salinity), or As-and Fe-free, or turbidity-and bacteriafree water to the communities, mostly on payment basis. But most of these technologies are site-specific, installed comparatively at high cost, require expensive maintenance, and have fixed lifetime. Not all the technologies fulfill the requirement of desired quality of water for drinking purpose. 
In contrast to those technologies, and despite some remaining challenges, the modified MAR technique has been recognized as a successful, sustainable, and disaster-resilient alternative that provides year-round good-quality water at low cost, which is much needed for reducing salinity and increasing access to safe water for the coastal communities of Bangladesh.
2022-05-26T15:10:05.875Z
2022-05-24T00:00:00.000
{ "year": 2022, "sha1": "85492d5c6ae3692c0b2c629439e43dadd111a2f9", "oa_license": "CCBYNC", "oa_url": "https://www.banglajol.info/index.php/DUJEES/article/download/59081/41204", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "3d6ab21833684981de8d29051d85f066c3caef3a", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
232321748
pes2o/s2orc
v3-fos-license
Focal Vibration Alters Human Digital Sensory Nerve Action Potentials: A Pilot Study Introduction We studied the impact of vibratory stimulation on the electrophysiological features of digital sensory nerve action potential (SNAP). Methods The antidromic digit 3 SNAP was recorded in 19 healthy adults before, during, and after applying a vibration to either 3rd or 5th metacarpal phalangeal joint (MCPJ) at 60 Hz and amplitude of 2 mm. 100% supramaximal stimulus intensity was performed in 5 subjects (randomly selected from the 19 subjects) where the SNAP sizes were recorded. Results The amplitude of digit 3 SNAP declined to 58.9 ± 8.6% when a vibration was applied to MCPJ digit 3. These impacts did not change by increasing the electrical stimulus intensity. The SNAP regained its baseline value immediately after the cessation of vibration stimulation. The magnitude of size reduction of digit 3 SNAP was less when vibration was moved to from MCPJ of digit 3 to MCPJ of digit 5. Discussion. The marked drop of the SNAP size during vibratory stimulation reflects the decreased responsiveness of Aβ afferents to electrical stimulation, which deserve further investigation in the study of focal vibration in neurorehabilitation. The main component of the digital sensory nerve action potential (SNAP) is produced by the summation of action potentials of large, myelinated Aβ fibers. The size of SNAP is proportional to the number of nerve axons depolarized by the testing electrical stimulation. Both the function of skin mechanoreceptors and Aβ fibers during vibration could affect the measures of digital 3 SNAP. Our study was designed to examine the electrophysiological features of digital SNAP during acute and transient exposure to vibratory stimulation. Subjects. Nineteen healthy subjects (10 men), aged 23-50 (mean, 32) years, with no known neuromuscular or musculoskeletal disorders participated in this study. They were recruited from the university research center population. The Human Ethics Committee of Huashan Hospital, Fudan University, China, granted the ethical committee approval, and each subject gave his/her informed consent prior to the study. Each subject was seated comfortably on a chair, with the left forearm and hand supinated on a solid wooden table with the fingers relaxed and unsupported ( Figure 1). Only minimal discomfort was caused by the brief application of vibration to the palm either at metacarpophalangeal joint (MCPJ) of digit 3 or MCPJ of digit 5. No visible venous stasis or color changes on the fingers were observed on any subject. 2.2. Stimuli. The median nerve was stimulated with a bar electrode at the midwrist 13 cm proximal to the active recording electrode. The electrical stimulation consisted of a square wave, 0.1 ms in duration, and was delivered at a rate of 2/s. The stimulus intensity started at a level below the threshold of the action potential and was incrementally increased until the maximal response was reached. The intensity was then increased by an additional 20% to ensure the supramaximal activation of SNAP. With five subjects, we used 100% supramaximal stimulus to test the effects of an additional 100% increase in intensity for a maximal achievable activation of the sensory axons. 2.3. Recording. The antidromic median nerve SNAPs were recorded from the left hand with a self-adhesive ring electrode (Nihon Kohden, MEB-9400, Japan) placed 1 cm distal to the metacarpophalangeal joint (MCPJ) of the digit 3 with the reference electrode 4 cm further distally. 
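A minimal sketch of the intensity-search logic in the Stimuli section above is given below; the recruitment curve, current values, and step size are synthetic assumptions used only to show how a maximal response is identified and then exceeded by 20% (or by 100% in the five-subject check).

```python
# Illustrative search for supramaximal stimulus intensity on a synthetic recruitment curve.
import numpy as np

def snap_amplitude(intensity_ma):
    # assumed sigmoid recruitment: amplitude saturates near 40 uV above ~12 mA
    return 40.0 / (1.0 + np.exp(-(intensity_ma - 8.0)))

intensity, step = 2.0, 1.0
amp = snap_amplitude(intensity)
while True:
    nxt = snap_amplitude(intensity + step)
    if nxt - amp < 0.1:               # response no longer grows -> maximal reached
        break
    intensity, amp = intensity + step, nxt

print(f"maximal ~{intensity:.0f} mA; supramaximal +20% = {1.2 * intensity:.1f} mA, "
      f"+100% = {2.0 * intensity:.1f} mA")
```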
A surface ground was adhered to the skin between the stimulating and recording electrodes. The impedance was maintained below 5 kΩ.

2.4. Palm Vibration. We utilized a hand-held massage vibrator (YH-3U, Yihe Electronic, China) with a vibration frequency of 60 Hz and a displacement of 2 mm. The vibration was applied to the palm at the MCPJ of digit 3. A constant force was applied to the MCPJ by using the vibrator's own weight (0.9 kg). The diameter of the circular contact area between the skin and the vibrator was 2.5 cm. The vibrator was secured manually rather than being strapped to the palm; this method worked better in keeping the vibrator in place and ensuring a constant force of application during the experiment. In addition, the digit 3 SNAP was recorded with the vibrator applied to the palm at the MCPJ of digit 5 in seven subjects. Measurements of SNAPs included (1) amplitudes from the baseline to the negative peak and (2) onset latencies. The digital skin temperature was maintained at 32 ± 0.5 °C.

2.5. SNAPs Were Recorded in the Following Steps. SNAPs were recorded at baseline before vibration, during the application of vibration, and again after the vibration had ceased.

2.6. Statistical Analysis. Statistical evaluation was performed by Student's t-test for paired data. Values, given as mean ± SD, were considered significant at P < 0.05.

Results

The traces of SNAP recorded before, during, and after vibration showed a satisfactory signal-to-noise ratio, without increased noise from muscle activity or electromagnetic interference during vibration. During vibration applied to the MCPJ of digit 3, the amplitude of the digit 3 SNAP declined markedly (to 58.9 ± 8.6% of baseline), and a comparable reduction was seen with 100% supramaximal stimulation; the amplitude returned to its baseline value immediately after the vibration ceased in both cases. The mean onset latency remained unchanged before, during, and after vibration.

Vibration Relocated to the MCPJ of Digit 5. In seven subjects, when the vibration was moved to the MCPJ of digit 5, the reduction of the amplitude of the digit 3 SNAP was significantly smaller than when the vibration was applied to the MCPJ of digit 3 (P < 0.05; Table 1). The SNAP amplitude again regained its baseline value after vibration ceased. There was no notable change in the onset latency of the SNAP throughout the experiments (Figure 2).

Discussion

We found that mechanical vibration applied to the palm remarkably reduces the size of the digital SNAP. In addition, the SNAP amplitude returned to the baseline level immediately after the cessation of the vibration. The fact that the digit 3 SNAP reduction was smaller when the vibration was moved from the MCPJ of digit 3 to digit 5 suggests a position-specific effect of the vibration stimulation. Our experimental setup was carefully designed to minimize the impact of other possible factors for this phenomenon, such as a change in intensity of the electrical stimulus or displacement of the recording electrodes. One candidate explanation is activity-dependent hyperpolarization of the stimulated sensory axons, inhibiting the impulse propagation, which in turn may cause SNAP reduction [7]. This seems unlikely. First, even allowing for the approximately 40% increase in threshold attributable to this phenomenon, the vibration-induced SNAP reduction remained the same with the use of 100% supramaximal intensity, which would otherwise have activated the hyperpolarized axons. Second, hyperpolarization that is sufficient to produce a significant SNAP depression would have increased its onset latency, a finding not seen in our experiments [8]. Third, hyperpolarization develops slowly after the application of stimulation and wears off gradually over many minutes after its cessation [9,10]. This stands in contrast to the present findings, where vibration caused immediate suppression of SNAPs, which then recovered to the baseline level as soon as the stimulation ceased.

Collision and "Line Busy" Phenomenon.
The magnitude of the size reduction of the digital SNAP shown in the present study implies that vibration should have activated mechanoreceptors widely, and that the vibration-induced afferent volley should have come from multiple types of sensitive mechanoreceptors. It is likely that the reduction in the amplitude of the SNAP reflects spike collision between the vibratory-evoked depolarization and the electrically evoked spike. The "line busy" hypothesis implies the occurrence of afferent spikes and associated afterpotentials followed by an absolute refractory period, which prevents the generation of further action potentials, as suggested by Hagbarth [11]. Consistent with single nerve fiber studies in animals [12], microneurographic recordings in humans demonstrated the absolute refractory periods of the distal median nerve sensory axons to range from 0.7 to 4.5 ms (mean: 2.1 ± 0.9 ms) for all the afferent fibers, with no difference between the rapid and slow adapting afferents [4,13]. A paired stimulus technique [14] also yielded an absolute refractory period of 0.7 ms for human digital nerve SNAPs [15]. Keeping all afferents within their absolute refractory periods would therefore require the different types of Aβ fibers to discharge at approximately 222 to 1,428 Hz or above (i.e., at roughly 1/4.5 ms to 1/0.7 ms). It has been well established, in studies such as that by Muniak et al. [16], that low-frequency vibratory stimuli (e.g., 20 Hz at an amplitude of 50 μm) activate all types of hand mechanoreceptors and evoke multiple spikes per cycle. Nevertheless, there are no studies which provide direct evidence of such very-high-frequency tonic discharges of Aβ afferents during vibration to corroborate the "line busy" hypothesis. Focal vibratory stimuli have been used in neurorehabilitation for neurological diseases and disorders such as stroke, spinal cord injury, multiple sclerosis, Parkinson's disease, and dystonia, where focal vibration stimulates the proprioceptive system to obtain more efficient motor control in functional activities [2]. Our study demonstrated in vivo that the Aβ afferent fibers (the proprioceptive system) were stimulated by vibratory stimulation, and that this caused a reduction of the responsiveness of the Aβ afferent fibers to electrical stimulation. In conclusion, the remarkable drop of the SNAP size during acute exposure to vibratory stimulation reflects a significant reduction of the responsiveness of Aβ afferents to electrical stimulation. These changes of the electrophysiological features of the SNAP deserve further investigation in the study of the effects of focal vibration in neurorehabilitation.

Data Availability

Datasets analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
2021-03-24T05:09:58.892Z
2021-03-03T00:00:00.000
{ "year": 2021, "sha1": "30d42ab5415b010cec2952471f26213d12dd2dd2", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/np/2021/8819169.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "30d42ab5415b010cec2952471f26213d12dd2dd2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267847211
pes2o/s2orc
v3-fos-license
Experimental and Numerical Study on Chloride Transport in Unsaturated Concrete: Highlighting Temperature, Humidity, and Mineral Admixtures Chloride transport within concrete is critical for the durability of reinforced concrete structures; however, its diffusion under the coupling action of temperature and humidity has not been fully comprehended. Therefore, in this work, the coupling effects of temperature, relative humidity, and mineral admixtures on chloride transport in concrete were investigated through experimental and numerical simulation work. The results show that the chloride diffusion coefficient decreases with the decreased temperature and growth of relative humidity; however, the chloride concentration on the concrete surface is increased with the growth of temperature and relative humidity. Moreover, compounding about 15% fly ash (FA) and 30% granulated ground blast furnace slag (GGBS) to replace the cement is the most beneficial for improving the antichloride capacity of concrete, considering also the strength. In addition, the numerical simulation considering the coupled effect of temperature and relative humidity of chloride transport in concrete has good agreement with that of experimental results. Introduction Chloride ion is a sensitive keyword when considering the durability performance of reinforced concrete structures, especially under harsh marine environments [1][2][3].In chloride-rich circumstances, reinforced bars embedded in concrete cover will corrode under the conditions of moisture and oxygen due to the serious damage to the rebars' electrochemical stability [4,5].Furthermore, rebar corrosion inside concrete will trigger a series of fatal consequences in the field of civil engineering, which includes the degradation of mechanical properties, load-bearing capacity, and seismic performance of concrete structures.Moisture is not only necessary for cement hydration but is also closely related to the durability of concrete, which plays a role as a carrier for the corrosive media to enter the concrete [6,7].The moisture variation in concrete is triggered by the water exchange with the external environment.For concrete with a high water-cement ratio, due to its poor compactness, its internal humidity is greatly affected by the environment, and the self-drying effect is not obvious [8,9].Furthermore, the pore saturation of concrete is determined by the relative humidity; meanwhile, the temperature is considered to influence the diffusion processes.The water transport in concrete not only significantly affects the hardening of concrete but also changes the mass and volume [10,11].In addition, when the internal relative humidity is higher than the ambient relative humidity, the moisture will be diffused into the air from the concrete inside, resulting in different degrees of dry shrinkage and tensile stress on the surface of the concrete structures, which seriously impacts the service life and durability performance of reinforced concrete structure [12].Hence, it is of great concern to reveal the principle of chloride ions' penetration through the concrete cover while evaluating the residual service life of concrete structures [13,14]. Recently, several researchers have conducted extensive studies and attempted to reveal the diffusion process of chloride within concrete using laboratory characterization and numerical simulation [15][16][17][18].Wu et al. 
[19] presented a study on the durability testing of a concrete facility located within the North Bay Port region and in service for 200 months.They proposed that the apparent chloride diffusion coefficient (D app ) and the apparent surface chloride concentration (C s ) follow the log-normal distribution, while m follows a normal distribution.Yu et al. [20] proposed the chloride diffusion coefficient model coupled with environmental factors to describe chloride ingress into concrete with field tests and artificially simulated environment experiments.Wang et al. [21] investigated chloride ion transport in concrete, considering the effect of a dry-exposure ratio under a diurnal tidal environment, and established a time-dependent model of chloride ion transport in concrete considering the effect of dry-exposure ratio.Al-Sodani et al. [22] investigated the correlation between short-term chloride ion migration coefficients and long-term Dapp in concrete with indoor experiments and field exposure experiments.They established a correlation model between field and laboratory results based on the experimental data.Pansera et al. [23] proposed an empirical chloride transport model contemplating the convection-diffusion zones through two Gaussian-Lorentzian functions using nonlinear regression in chloride profiles obtained from field structures located in different marine aggressive zones exposed to the marine environment for more than 40 years. Lately, much mesoscopic research into chloride transport in concrete has been reported [24][25][26][27][28].These works concentrate on two aspects: (1) investigating the impact of concrete's mesoscopic phases on chloride transport and (2) developing the equivalent methods from the mesoscale to the macroscale.Lee et al. [29] proposed a coupled transport model considering moisture and chloride distribution in concrete to investigate the influence of a cyclic exposure condition on chloride penetration.Jain et al. [30] developed a numerical model to study the coupled transport of chloride with heat, relative humidity, and oxygen into the concrete.Liu et al. [31] established a modified chloride ion diffusionconvection model for concrete considering curing age, initial water saturation, and the combined effects of chloride ions.These studies have revealed some intrinsic mechanisms of chloride transport in concrete and have demonstrated that mesoscopic research is an effective and reliable method for chloride transport in concrete.However, these investigations did not adequately account for the coupling effects of temperature, relative humidity, and mineral admixtures. Recently, the authors also conducted relevant studies on this issue, especially in practical marine environments.Despite enormous studies on the point of chloride transportation, there are still several drawbacks to revealing the chloride transport law influenced by other impact factors, such as temperature and relative humidity.Furthermore, a gap still exists in revealing the mechanism of chloride ion transport impacted by external factors.Thus, the chloride ion profile at the concrete cover will be discussed in this paper; meanwhile, concrete with mineral admixture was also involved in this study. 
Materials

Ordinary Portland cement (OPC, P·O 52.5), granulated ground blast furnace slag (GGBS, S95), and fly ash (FA, Class I) are used as the main cementitious materials. Their chemical compositions are shown in Table 1. Crushed granite with a size of 5-20 mm from Lei Xin Group Company (Qingdao, China) is used as coarse aggregate. River sand with a fineness modulus of 2.6 from Qingjian New Material Company (Qingdao, China) is used as fine aggregate. Polycarboxylic acid superplasticizer (JM-PCA) is used to adjust the workability of fresh concrete. In addition, an air-entraining agent is used to control the air content of fresh concrete at 3-5%.
Sample Preparation

The water-binder ratio of the concrete was controlled at 0.35. Cubic concrete specimens with dimensions of 100 mm × 100 mm × 100 mm were prepared according to the mix proportions listed in Table 2. Each group consisted of three samples, and the average value was taken as the experimental result. The prepared concrete was cured for 3 d, 7 d, 28 d, 56 d, and 120 d in a standard room with a temperature of 20 ± 2 °C and relative humidity of 95%. The sensors (temperature and humidity sensor and thermocouples) and their acquisition instruments used in this study are presented in Figure 1a. The temperature and humidity sensor (SZWS-SHT15) is produced by Sanzhi Electronic Technology Co., Ltd. (Changsha, China), and its parameters are as follows: capacitive, measured temperature range 0-120 °C (±0.5 °C), measured humidity range 0-100% (±3%). The thermocouple is K-type and its acquisition instrument (34970A) is produced by Agilent Technology Co., Ltd. (Santa Clara, CA, USA). The thermocouple was first preprocessed by tying a wooden stick to the bottom of the thermocouple probe to ensure that it could be fixed vertically inside the concrete. After that, a PVC pipe with an inner diameter of 15 mm was inserted into the concrete to a certain depth during pouring. To prevent the mortar from flowing into the pipe, a rubber rod slightly smaller than the inner diameter of the PVC pipe was placed in advance; after the concrete had hardened, the rubber rod was pulled out and replaced by the sensors, as shown in Figure 1b1. In this study, four different humidity conditions were constructed as follows: the concrete specimens were immersed in water to make them fully saturated and reach 100%, as shown in Figure 1b2. Then, several specimens were moved to a constant temperature and humidity laboratory with a relative humidity of 80% and a temperature of 20 °C to bring the humidity to 80%. After that, specimens were dried in a drying oven until the humidity reached 65%, or conditioned with a saturated LiBr solution to bring it to 45%, respectively. The four humidity conditions of 45%, 65%, 80%, and 100% were each combined with four different temperatures (5 °C, 25 °C, 45 °C, and 65 °C). After the salt absorption test, the concrete specimens were cleaned on the surface and then ground using a type DRB-H1 concrete grinder. Furthermore, the content of chloride ions in the powder was evaluated using the electrode method according to the Chinese Testing Code of Concrete for Port and Water Engineering, with saturated potassium sulfate as the reference electrode.
Theoretical Models

The transport of chloride ions in concrete is an unsteady process. Based on Fick's second law, the transport equation can be expressed as Equation (1) [32]. Figure 2 shows the schematic diagram of ion diffusion within the concrete. Equation (2) is adopted to describe the chloride diffusion process under constant pore saturation. Equation (3) gives the formula describing the effect of relative humidity on the chloride ion diffusion coefficient [33,34]. Moreover, the chloride migration coefficient is affected by temperature, as presented in Equation (4) [35,36]. The influence of external environmental conditions on chloride ion transport is not a single effect: an increase in temperature enhances the molecular kinetic energy, improving the transport rate of water molecules and resulting in a growth of water in the pore structure, while the increase in relative humidity inside the concrete affects the transport speed of ions and molecules, which in turn has a certain effect on the temperature dependence. Equation (5) introduces the empirical formula for chloride ion transport considering the coupled effect of temperature and humidity, with 10 mm set as a boundary condition [37].

In Equations (1)-(5), c is the chloride ion concentration, t represents the transport time, x represents the transport distance away from the concrete surface, D is the migration coefficient, n_0 is a constant, erf is the error function, C_s is the surface chloride concentration of concrete, and C_f is the chloride concentration at a certain depth; H_R is the relative humidity inside the concrete, H_C is the critical relative humidity, T_1 is the reference temperature (293 K), T is the calculated temperature, q is the activation constant, and D_1 is the chloride ion transport coefficient at temperature T_1.
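Because the displayed Equations (1)-(5) did not survive extraction in this copy, the following sketch only illustrates how a closed-form chloride profile of the kind described here could be evaluated: the classical error-function solution of Fick's second law, with the diffusion coefficient modified by an Arrhenius-type temperature factor (in the spirit of Equations (4) and (7)) and a linear decrease with internal relative humidity (in the spirit of the empirical Equation (8) reported later). All functional forms, parameter names, and numerical values below are illustrative assumptions, not the paper's exact expressions.

```python
import numpy as np
from scipy.special import erf

def diffusion_coefficient(T, RH, a=6.0, b=4.0, T_ref=293.0, q=4000.0):
    """Illustrative D(T, RH) in 1e-12 m^2/s: a linear decrease with internal
    relative humidity (cf. the empirical Eq. (8), D = a - b*R_H) multiplied by
    an Arrhenius-type temperature factor (cf. Eqs. (4) and (7)).
    a, b, q and T_ref are placeholder values, not the fitted ones."""
    return (a - b * RH) * np.exp(q * (1.0 / T_ref - 1.0 / T))   # T in kelvin

def chloride_profile(x_mm, t_days, C_s, D):
    """Error-function solution of Fick's second law for a semi-infinite medium,
    C(x, t) = C_s * (1 - erf(x / (2*sqrt(D*t)))), with D in 1e-12 m^2/s,
    x in mm and t in days (unit conversion done internally)."""
    D_mm2_per_day = D * 1e-12 * 1e6 * 86400.0       # m^2/s -> mm^2/day
    return C_s * (1.0 - erf(x_mm / (2.0 * np.sqrt(D_mm2_per_day * t_days))))

if __name__ == "__main__":
    D = diffusion_coefficient(T=338.15, RH=0.65)    # 65 degC, internal RH 65%
    for x in (0.0, 2.5, 5.0, 10.0, 20.0):
        c = chloride_profile(x, t_days=10.0, C_s=2.264, D=D)
        print(f"depth {x:5.1f} mm : chloride {c:.3f} % (illustrative)")
```

A sketch like this makes the qualitative trends in the paper easy to probe: raising T increases D and deepens the profile, while raising RH lowers D in the assumed linear law.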
Model Establishment

Based on the theoretical models mentioned above, chloride ion transport was simulated using the software COMSOL Multiphysics (6.1). A two-dimensional model of concrete was established for a cross-section sized 100 mm × 100 mm by randomly distributing circles. The model was divided into three phases: cement paste, fine aggregate, and coarse aggregate. Moreover, the interfacial transition zone (ITZ) between cement paste and aggregates was modeled with a thickness of 70 µm [38], as shown in Figure 3a. The model was meshed as follows: regular-size triangular meshing for the cement paste, boundary-layer meshing for the interface zone, and finer-size triangular meshing for the aggregates, as shown in Figure 3b. From the physical viewpoint, this model resolves the aggregate interface and is more in line with the practical situation, with a random arrangement of coarse and fine aggregates. The diffusion coefficient of the ITZ is 4 × 10^-10 m^2/s, and that of the aggregate is 4 × 10^-13 m^2/s. Furthermore, the water-cement ratio (w/c) was set as 0.35, and the concrete density was 2500 kg/m^3. In addition, the surface chloride ion concentration can be calculated from Equation (6). The values of the parameters used in the calculations are shown in Table 3. In Equation (6), C_S,Cl is the surface chloride ion concentration, A_S,Cl is the regression coefficient of the surface chloride content, w/b is the water-binder ratio, and γ_C,Cl is the fractional coefficient of the surface chloride ion concentration.
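The mesoscale COMSOL model itself cannot be reproduced here, but the transient computation it performs can be illustrated with a deliberately simplified, homogeneous one-dimensional finite-difference stand-in (no aggregates or ITZ). The diffusion coefficient, surface concentration, and domain size below are placeholders chosen only to make the sketch run; they are not the paper's calibrated inputs.

```python
import numpy as np

def simulate_chloride_1d(D=4e-12, C_s=0.3, L=0.05, nx=201, t_end=30 * 24 * 3600.0):
    """Explicit FTCS finite-difference solution of dC/dt = D d2C/dx2 on [0, L] (m).

    A homogeneous 1-D stand-in for the 2-D mesoscale (paste/ITZ/aggregate)
    COMSOL model described in the text; D, C_s and L are illustrative only.
    """
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D              # satisfies the stability limit dt <= dx^2/(2D)
    steps = int(t_end / dt)
    C = np.zeros(nx)
    C[0] = C_s                        # exposed surface held at the surface concentration
    for _ in range(steps):
        C[1:-1] += dt * D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
        C[-1] = C[-2]                 # zero-flux condition at the far boundary
    return np.linspace(0, L, nx), C

if __name__ == "__main__":
    x, C = simulate_chloride_1d()
    for depth_mm in (0, 5, 10, 20):
        i = round(depth_mm / 1000 / (x[1] - x[0]))
        print(f"{depth_mm:3d} mm : C = {C[i]:.4f} %")
```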
Compressive Strength of Concrete

The compressive strength of concrete was tested after curing for 3 d, 7 d, 28 d, 56 d, and 120 d, as shown in Figure 4. The early compressive strength (up to 7 d) of OPC concrete was improved by the replacement of FA at a dosage of 15%, while it was decreased when the dosage reached 30% and 50%, as shown in Figure 4a. However, the compressive strength of all FA concretes was increased by about 32.4% over OPC concrete when the curing age reached 120 d. Therefore, the early compressive strength of concrete can be decreased by an excessive addition of FA, while the effect on long-term compressive strength is positive. As presented in Figure 4b, the compressive strength of OPC concrete after curing for 28 d was improved by about 31.5%, 45.8%, 51.1%, and 49.7% when the dosage of GGBS was 15%, 30%, 50%, and 65%, respectively. The long-term compressive strength was improved with the same trend. Therefore, the compressive strength of concrete can be improved with increasing dosage of GGBS until the dosage exceeds 50%. In addition, there is no significant improvement in the early compressive strength of OPC concrete when simultaneously adding FA and GGBS, as presented in Figure 4c. Meanwhile, compared to the single addition of FA or GGBS, the compressive strength of FA+GGBS concrete was decreased by about 14.6% and 23.3% at the age of 28 d, and 8.4% and 17.4% at the age of 120 d. Therefore, the compressive strength of concrete can be significantly improved by adding 15% FA or 50% GGBS.

Effect of Temperature on Chloride Transport

The free chloride concentration of the LF50 concrete surface was measured after eroding in the environment with an RH of 65% and four temperatures (5 °C, 25 °C, 45 °C, and 65 °C) for 10 d, as presented in Figure 5. The chloride concentration on the surface was 0.290% at 5 °C, which tends to become stable at a depth of approximately 5 mm. Similarly, at 25 °C, the chloride concentration at the surface was about 0.275%, and the diffusion depth was about 5 mm. However, the surface chloride concentration was significantly increased to 0.610% and 2.264% when the temperature reached 45 °C and 65 °C, respectively. Meanwhile, the diffusion depth was also increased to 10 mm. The chloride ion distribution after capillary salt absorption at 65 °C was fitted according to Fick's second law to obtain the diffusion coefficient (D) and the chloride concentration of the concrete surface (C_s), as shown in Figure 5b. The correlation coefficient (R^2) was 0.983, indicating the fitting result was reliable. The 95% confidence band presented in the figure also supports this result. In addition, the correlation between the chloride diffusion coefficient and temperature was established according to Arrhenius' empirical formula (Equation (7)), as presented in Figure 5c. The chloride diffusion coefficient of concrete gradually increased with increasing temperature; meanwhile, the migration rate of chloride showed a similar tendency. From a physical point of view, chloride transport was accelerated when the temperature increased; meanwhile, the concentration was higher due to a faster chloride migration coefficient.
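The fitting step behind Figure 5b, extracting D and C_s from a measured profile with the error-function solution of Fick's second law, can be sketched as below. The depth-concentration pairs are hypothetical numbers shaped loosely like the 65 °C curve described in the text; they are not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def fick_profile(x_mm, C_s, D):
    """C(x) = C_s * (1 - erf(x / (2*sqrt(D*t)))), x in mm, D in mm^2/day, t = 10 days."""
    t = 10.0
    return C_s * (1.0 - erf(x_mm / (2.0 * np.sqrt(D * t))))

# Hypothetical profile, loosely shaped like the 65 degC curve of Figure 5.
depth = np.array([1.0, 3.0, 5.0, 7.5, 10.0, 12.5])          # mm
chloride = np.array([1.95, 1.30, 0.80, 0.40, 0.15, 0.05])   # % by mass (illustrative)

popt, pcov = curve_fit(fick_profile, depth, chloride, p0=(2.0, 1.0))
C_s_fit, D_fit = popt
perr = np.sqrt(np.diag(pcov))
print(f"C_s = {C_s_fit:.3f} +/- {perr[0]:.3f} %")
print(f"D   = {D_fit:.3f} +/- {perr[1]:.3f} mm^2/day")
```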
In Equation (7), the slope is the apparent activation energy E_a (J·mol^-1), the intercept is the pre-exponential factor A, D is the chloride diffusion coefficient (10^-12 m^2/s), and T is the thermodynamic temperature (K).

Effect of Relative Humidity on Chloride Transport

The free chloride concentration of the concrete (L50 and LF50) surface was measured after eroding in an environment with a temperature of 25 °C and four relative humidities (45%, 65%, 80%, and 100%) for 10 d, as presented in Figure 6. Meanwhile, the diffusion coefficient and the chloride concentration of the concrete surface were obtained by fitting according to Fick's second law. The diffusion coefficient of concrete decreases with increased relative humidity. However, C_s increases with the growth of relative humidity, especially for LF50 concrete. The C_s of LF50 concrete increases from 0.229% to 0.467% when the relative humidity is increased from 45% to 100%. On the one hand, the chloride ion diffusion is inhibited by the compound replacement of FA and GGBS, inducing more chlorides to be enriched on the concrete surface. Moreover, chloride diffusion is dependent on moisture transport, and more moisture migrates from the concrete surface to the interior when the relative humidity of the environment is lower. It can be further observed that the penetration depth decreases with the increase in internal humidity; the penetration depth of chlorides is about 20 mm, 15 mm, 15 mm, and 10 mm when the relative humidity is 45%, 65%, 80%, and 100%, respectively. In addition, it is noted that under unsaturated conditions (relative humidity below 100%), convection zones exist near the concrete surface due to capillary adsorption. Therefore, chloride diffusion is inhibited by increased relative humidity; meanwhile, chloride is enriched on the concrete surface.
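The Arrhenius regression described at the start of this passage (Equation (7): ln D linear in 1/T, with the slope giving the apparent activation energy E_a) can be reproduced schematically as follows. The four diffusion coefficients are invented for illustration and stand in for the fitted values behind Figure 5c.

```python
import numpy as np

# Hypothetical diffusion coefficients (1e-12 m^2/s) at the four test temperatures;
# the real values come from the Fick's-law fits behind Figure 5c.
T = np.array([5.0, 25.0, 45.0, 65.0]) + 273.15         # K
D = np.array([0.8, 1.5, 3.4, 7.9])                     # 1e-12 m^2/s (illustrative)

# Arrhenius linearisation: ln D = ln A - E_a / (R * T)
R = 8.314                                               # J/(mol*K)
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
E_a = -slope * R                                        # J/mol
A = np.exp(intercept)
print(f"apparent activation energy E_a ~ {E_a / 1000:.1f} kJ/mol")
print(f"pre-exponential factor A ~ {A:.2f} x 1e-12 m^2/s")
```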
Figure 7 shows the fitting result between the chloride diffusion coefficient and the relative humidity in concrete. A new function is established, as shown in Equation (8): D = a - b·R_H. There is a linear decreasing trend of the chloride diffusion coefficient with rising relative humidity. The main reason for this phenomenon is that the capacity for chloride penetration is stronger when relative humidity is low due to the water absorption ability. Meanwhile, the chloride ion concentration will be much higher, accompanied by moisture going into the concrete. The capillary absorption capacity is weakened and almost zero when the relative humidity reaches 100%. In Equation (8), D is the chloride migration coefficient (10^-12 m^2/s), R_H is the relative internal humidity of concrete, and a and b are constants which depend on the concrete type and temperature.

Effect of Mineral Admixtures on Chloride Transport

(1) Effect of adding FA on chloride transport

Figure 8 shows the chloride ion concentration of concrete with different contents of FA under different internal relative humidities. There is no significant variation in chloride ion content at the same depth, especially for the concrete prepared with 15% and 30% FA. However, for the concrete in the environment with a relative humidity of 45%, the chloride ion concentration is increased significantly at the same depth when the replacement of FA reaches 50%. Therefore, there is little effect on porosity and the chloride diffusion coefficient when the content of fly ash is less than 30%. In addition, FA has the advantages of a larger surface area and a hollow structure, which provide reaction sites and increase the absorption capacity for chloride ions. Therefore, considering the strength and chloride diffusion of concrete, the replacement of FA is suggested to be less than 30%.
(2) Effect of adding GGBS on chloride transport Figure 9 shows the chloride ion concentration of concrete with different contents of GGBS under different relative humidities.For F53 concrete under the relative humidity of 45%, its chloride ion concentration is the lowest at the same depth, indicating that the concrete with 50% GGBS admixture has the strongest antichloride capacity.For the concrete under the relative humidity of 45%, 65%, and 80%, respectively, the chloride ion concentration at the same depth is gradually increased, which is higher than that of the concrete without GGBS mixing.Therefore, considering the strength and chloride diffusion of concrete, the replacement of GGBS is suggested as less than 50%. (3) Effect of compounding FA and GGBS on chloride transport Figure 10 presents the evolution of chloride ion concentration in concretes under the relative humidity of 65% and different temperatures.The chloride diffusion is inhibited by compounding FA and GGBS, especially under higher temperatures.Meanwhile, the difference in the chloride diffusion coefficient between the normal concrete and that of compounding FA and GGBS is more significant with the increased temperature.The chloride diffusion coefficient of LF50 concrete under the temperature of 65 • C is decreased by about 42.4% less than that of L50 concrete.Therefore, compounding 17% FA and 32% GGBS to replace the OPC is beneficial for improving the antichloride capacity of concrete.Meanwhile, the strength and porosity of concrete is also improved.In summary, temperature, relative humidity, and mineral admixtures will influence the chloride ion transport at the concrete surface.Yet, only individual effects of temperature, relative humidity, and mineral admixtures on the chloride ion transport were analyzed in this paper; also, the conditions of impact factors were limited in this experiment, and a gap with the reality in practical engineering still exists, which may influence the service life prediction of reinforced concrete structures.Moreover, the coupling effects have not been investigated in detail and still need to be deeply studied in the not-distant future.Compared to the experimental result, the effect tendency of temperature on chloride transport obtained with simulation is identical.However, it should be noted that the effect of relative humidity is not considered in Equation ( 4), inducing an error existing in the diffusion depth.In addition, the chloride enrichment at the ITZ between the aggregate and cement paste is weakened when the temperature reaches 65 • C. 
(2) Effect of relative humidity on chloride transport

Based on Equation (3), the single effect of relative humidity on chloride transport in concrete for 30 d was simulated using the COMSOL software. Figure 12 presents the cloud images of chloride distribution in concrete at the relative humidity of 45%, 65%, 80%, and 100%. The chloride concentration is decreased with the depth of concrete, and the chloride migrates to greater depths as relative humidity decreases. This effect tendency of relative humidity on chloride transport is consistent with the experimental result presented in Figure 6. The chloride diffusion depth in concrete is 25 mm, 20 mm, 12.5 mm, and 7.5 mm when the relative humidity is 45%, 65%, 80%, and 100%, respectively. In addition, the chloride is enriched at the ITZ between the aggregate and cement paste, and this phenomenon becomes more pronounced with the decrease in relative humidity. The main reason is that the moisture transport is inhibited by the aggregate, resulting in the decrease in chloride diffusion to the interior of the aggregate.

Coupled Effect of Temperature and Relative Humidity on Chloride Transport

Based on Equation (5), the coupled effect of temperature and relative humidity on chloride transport in concrete for 30 d was simulated using the COMSOL software. Figure 13 presents the cloud images of chloride distribution in concrete at a constant temperature and relative humidity. The chloride concentration is decreased with the depth of concrete. The effect of temperature on chloride transport in concrete is
more significant than that of relative humidity, because the chloride migrates to greater depths as the temperature increases, as shown in Figure 13b. The chloride migrates to the deepest interior of the concrete when the relative humidity is 45% and the temperature is 25 °C. However, there is no significant difference in the chloride transport in concrete at constant temperature and different relative humidity. It is worth noting that the chloride diffusion capacity of coarse aggregate is improved by the higher temperature while the transport distance is prolonged; furthermore, coarse aggregate has better resistance to chloride penetration than fine aggregates. In addition, the phenomenon of chloride enrichment in the ITZ between the aggregate and cement paste is alleviated.

Conclusions

In this work, the coupling effect of temperature, relative humidity, and mineral admixtures (FA and GGBS) on chloride transport in concrete was investigated through experimental and numerical simulation work. The main conclusions are as follows:

1. The replacement of OPC by FA and GGBS is beneficial for improving the strength and antichloride capacity, and the best dosage is suggested as compounding about 15% FA and 30% GGBS.

2. The chloride diffusion coefficient is decreased with decreasing temperature and growing relative humidity; however, the chloride concentration on the concrete surface is increased with the growth of temperature and relative humidity. Moreover, the chloride migrates to the deeper interior as the temperature increases.

3. The effect tendency of temperature and relative humidity on chloride distribution in concrete obtained from the numerical simulation is consistent with the experimental result. Moreover, its accuracy is improved when considering the coupled effect of temperature and relative humidity on chloride transport.
Figure 1. Sensors arrangement and monitoring: (a) sensors and their acquisition instrument and (b) sensors embedding and immersion environment construction.

Figure 2. Schematic diagram of ion transport.

Figure 4. Compressive strength of concrete: (a) effect of FA; (b) effect of GGBS; (c) effect of compounding FA and GGBS.

Figure 5. Free chloride concentration of concrete surface: (a) effect of temperature, (b) fitting curve, and (c) regression results between chloride diffusion coefficient and temperature.

Figure 7. The result of the chloride diffusion coefficient versus relative humidity.

Numerical Simulation: Single Effect of Temperature or Relative Humidity on Chloride Transport

(1) Effect of temperature on chloride transport

Based on Equation (4), the single effect of temperature on chloride transport in concrete for 30 d was simulated using the COMSOL software. Figure 11 presents the cloud images of chloride distribution in concrete at the temperatures of 5 °C, 25 °C, 45 °C, and 65 °C. The chloride concentration is decreased with the depth of concrete, and the chloride migrates to greater depths with the increase in temperature. The chloride diffusion depth in concrete is 5 mm, 7.5 mm, 12.5 mm, and 25 mm when the temperature is 5 °C, 25 °C, 45 °C, and 65 °C, respectively.
Figure 12. Cloud images of chloride distribution in concrete with a relative humidity of (a) 45%, (b) 65%, (c) 80%, and (d) 100%.

Figure 13. Cloud images of chloride distribution in concrete at (a) constant temperature of 25 °C and (b) constant relative humidity of 45%.

Table 3. Parameters and values for calculation. Note: D_1, D_2, and D_3 are the chloride diffusion coefficients of the cement paste, ITZ, and aggregate (coarse and fine aggregate), respectively; T_ref is the standard temperature; q is the activation constant.
Author Contributions: Conceptualization, Z.D.; Methodology, R.Z.; Software, H.X. and R.Z.; Investigation, H.X.; Writing-original draft, Z.D. and S.L.; Writing-review & editing, Z.J. and S.L.; Visualization, Z.D.; Funding acquisition, Z.J. All authors have read and agreed to the published version of the manuscript.

Funding: This work is part of projects financially supported by the National Natural Science Foundation of China (No. 52078259) and by the Taishan Scholars of Shandong Province (No. ts20190942).
On the limiting procedure by which $SDiff(T^2)$ and $SU(\infty)$ are associated

There have been various attempts to identify groups of area-preserving diffeomorphisms of 2-dimensional manifolds with limits of SU(N) as $N\to\infty$. We discuss the particularly simple case where the manifold concerned is the two-dimensional torus $T^2$ and argue that the limit, even in the basis commonly used, is ill-behaved and that the large-N limit of SU(N) is much larger than $SDiff(T^2)$.

I. INTRODUCTION

Groups of area-preserving diffeomorphisms and their Lie algebras have recently been the focus of much attention in the physics literature. Hoppe [1] has shown that in a suitable basis, the Lie algebra of the group SDiff(S^2) of area-preserving diffeomorphisms of a sphere tends to that of SU(N) as N → ∞. Similar arguments have been made associating various infinite limits of Lie algebras of classical groups with Lie algebras of groups of area-preserving diffeomorphisms of 2-dimensional surfaces. This has obvious interest in connection with gauge theories of SU(N) for large N. The use of SU(N) for finite N as an approximation to groups of area-preserving diffeomorphisms has also been used in studies of supermembranes [2][3][4] and in particular has been used to argue for their instability. The authors of references [3] and [4] have especially emphasized the difficulties in relating such infinite limits with Lie algebras of area-preserving diffeomorphisms. Various authors have considered special limits and/or large-N limits of other classical Lie algebras [6][7][8][9][10] as relevant for 2-manifolds other than spheres. The purpose of this Letter is to clarify the nature of the limiting procedure by which SU(∞) has been related to SDiff(T^2).

II. THE LIE ALGEBRAS OF SDIFF(T^2)

We follow here the treatment of [7], which is particularly clear. The torus T^2 is represented by the plane R^2 with coordinates x and y and the identifications (x, y) = (x + 2π, y) and (x, y) = (x, y + 2π). A basis for functions on the torus is chosen, labelled by m, n running over all integers. The local area-preserving diffeomorphisms are then generated by the vector fields with indices a, b = 1, 2. In other words, the divergence-free vector fields are those which are the curl of something else. The generators clearly close under commutation.

III. THE LIE ALGEBRA OF SU(N)

To construct the Lie algebra of SU(N), again following [7], we sketch the basic idea. Fix a positive integer N and a complex number ω such that ω^N = 1 but ω^r ≠ 1 for 0 < r < N; ω is called a primitive root of unity. Then we have ω = exp(2πik/N) for some k relatively prime to N. Now we find unitary, traceless matrices g and h satisfying the standard exchange relation. Then the set of matrices J_{m,n} = ω^{mn/2} g^m h^n for 0 ≤ m, n < N are linearly independent and form a basis for the N × N matrices.

IV. THE N → ∞ LIMIT

The claim now is that in the limit N → ∞ the commutation relations of section III go over to those of section II. Naively, of course, one would like to expand as N → ∞ and drop the terms of order 1/N^2 and higher. However, let us keep the next term and examine whether or not it can indeed be taken to be small. Now consider any choice of (m, n) = (N/a, 0) and (m′, n′) = (0, N/b), where a and b are arbitrary integers that divide N (including one). Then the next term is clearly not negligible as N → ∞. It would seem that there are many elements of the Lie algebra of SU(N) which do not belong to SDiff(T^2).
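Since the displayed equations of sections II-IV are not reproduced in this copy, the following numerical sketch only spells out the standard construction they refer to, using the usual 't Hooft clock and shift matrices and the normalization J_{m,n} = ω^{mn/2} g^m h^n. The specific exchange-relation convention (hg = ωgh) and the sine-algebra form of the structure constants are assumptions consistent with that convention, not quotations from the paper.

```python
import numpy as np

def clock_shift(N, k=1):
    """'t Hooft clock (g) and shift (h) matrices with h @ g = omega * g @ h,
    omega = exp(2*pi*i*k/N), k coprime to N (assumed standard convention)."""
    omega = np.exp(2j * np.pi * k / N)
    g = np.diag(omega ** np.arange(N))          # clock matrix
    h = np.roll(np.eye(N), -1, axis=0)          # shift matrix
    return g, h, omega

def J(m, n, g, h, omega):
    return omega ** (m * n / 2) * np.linalg.matrix_power(g, m) @ np.linalg.matrix_power(h, n)

if __name__ == "__main__":
    N, k = 7, 1
    g, h, omega = clock_shift(N, k)
    assert np.allclose(h @ g, omega * g @ h)                      # exchange relation
    assert abs(np.trace(g)) < 1e-12 and abs(np.trace(h)) < 1e-12  # traceless

    m1, n1, m2, n2 = 1, 2, 3, 1
    comm = J(m1, n1, g, h, omega) @ J(m2, n2, g, h, omega) \
         - J(m2, n2, g, h, omega) @ J(m1, n1, g, h, omega)
    target = J(m1 + m2, n1 + n2, g, h, omega)
    coeff = np.trace(comm @ target.conj().T) / N   # project the commutator onto J_{m1+m2, n1+n2}
    print("commutator coefficient          :", coeff)
    print("2i sin(pi k (n1 m2 - m1 n2) / N):", 2j * np.sin(np.pi * k * (n1 * m2 - m1 * n2) / N))
```

For mode numbers that scale with N, as in the (N/a, 0) and (0, N/b) example above, the argument of the sine is of order one rather than order 1/N, which is the numerical counterpart of the non-negligible correction term discussed in the text.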
This is in keeping with ideas raised in [11] suggesting that SU(∞) is much larger than the group of area-preserving diffeomorphisms of a surface, and perhaps describes some sort of theory including topology change. Other work demonstrating that topologically, SDiff(T^2), and indeed all the area-preserving diffeomorphism groups, are inequivalent to SU(∞) is in [12].

V. ACKNOWLEDGEMENT

The author would like to thank my colleagues at Northeastern University and the National Science Foundation for their support.
Quantifying metastatic inefficiency: rare genotypes versus rare dynamics

We introduce and solve a 'null model' of stochastic metastatic colonization. The model is described by a single parameter θ: the ratio of the rate of cell division to the rate of cell death for a disseminated tumour cell in a given secondary tissue environment. We are primarily interested in the case in which colonizing cells are poorly adapted for proliferation in the local tissue environment, so that cell death is more likely than cell division, i.e. θ < 1. We quantify the rare event statistics for the successful establishment of a metastatic colony of size N. For N ≫ 1, we find that the probability of establishment is exponentially rare, as expected, and yet the mean time for such rare events is of the form ∼ log(N)/(1 − θ), while the standard deviation of colonization times is ∼ 1/(1 − θ). Thus, counter to naive expectation, for θ < 1, the average time for establishment of successful metastatic colonies decreases with decreasing cell fitness, and colonies seeded from lower fitness cells show less stochastic variation in their growth. These results indicate that metastatic growth from poorly adapted cells is rare, exponentially explosive and essentially deterministic. These statements are brought into sharper focus by the finding that the temporal statistics of the early stages of metastatic colonization from low-fitness cells (θ < 1) are statistically indistinguishable from those initiated from high-fitness cells (θ > 1), i.e. the statistics show a duality mapping (1 − θ) → (θ − 1). We conclude our analysis with a study of heterogeneity in the fitness of colonising cells, and describe a phase diagram delineating parameter regions in which metastatic colonization is dominated either by low or high fitness cells, showing that both are plausible given our current knowledge of physiological conditions in human cancer.

Introduction

Cancer metastasis is the process of dissemination of cancer cells originating from a solid tumour, and their subsequent colonization of distant tissues [1,2]. Metastatic disease is responsible for the great majority of cancer deaths. It is generally considered to be a succession of unlikely events rendering in toto a highly inefficient process [3][4][5]. In the conventional view of metastasis, only certain cancer cells from the primary tumour can successfully metastasise, as they are thought to require a number of attributes, such as the ability to invade local tissue, enter, survive in, and leave the bloodstream, and colonize new tissue environments. As such, they have been likened to decathletes, in that they possess multiple pre-adapted abilities, all of which are required to form a new colony in a distant tissue [6]. This view explains metastatic inefficiency, as it highlights the low probability of acquiring all the specific genetic mutations necessary to achieve the intermediate goals en route to successful metastasis [7,8]. However, as pointed out by Bernards and Weinberg [9], this picture contains a conceptual difficulty concerning the microevolution of such cells.
The acquisition of these abilities would appear to endow the cells with no selective advantage in the primary tumour, and, as such, one might expect the sub-population of fully-fledged metastatic cells to be vanishingly small. The resolution put forward by Bernards and Weinberg posits pleiotropy, namely, that genetic mutations enabling metastasis might also provide selective advantage in the primary tumour, but this view is then hard to reconcile with metastatic inefficiency. An additional and important piece of information is the observation of large numbers of circulating tumour cells (CTCs) and disseminated tumour cells (DTCs) in cancer patients with no discernible metastatic disease [10][11][12], which provides a challenging backdrop in which to understand metastatic inefficiency. Clearly, there is scope for a more precise and quantitatively based understanding of metastasis, the first steps of which we hope to provide in this paper. Given the large number of CTCs in the blood stream and DTCs in the bone marrow and other secondary sites of cancer patients, one can posit a large number of individual attempts at metastatic colonization occurring over time, most of which are unsuccessful; so that the inefficiency of metastasis can be approached as a problem in the statistics of rare events. For the purposes of this paper, we delineate such rare events into two types. First, rare genotypes, as considered in the mainstream paradigm; it being assumed that there exists only a minute subset of colonising cancer cells capable of successful metastasis. Second, rare dynamics, which is the concept we wish to explore in depth in this paper; in which we posit that highly unlikely colonization dynamics can arise from the statistical fluctuations of birth and death of otherwise poorly adapted seeding cells. A helpful metaphor to understand the difference between these two types of rare events is the use of special forces versus infantry to achieve a military mission. 'Rare genotypes' corresponds to a highly trained yet small group of soldiers who each have the specialized skills to accomplish the mission. 'Rare dynamics' corresponds to a large number of poorly trained infantry, none of whom have the specialized skills required, but each of whom has a very small but non-zero chance of accomplishing the mission. We will study these rare events in the context of the final stage of metastasis, that of tissue colonization, which is arguably the most poorly understood step of metastasis, and yet observed to be the highest barrier for DTCs to overcome in forming a new tumour [2,4,5]; in mouse models it is observed that the vast majority of metastatic cells readily die once they reach a distant tissue site and attempt to colonize it [13]. We propose a simple birth-death model of stochastic colonization which will allow us to quantify the temporal statistics of rare dynamics. The results arising from this model are counterintuitive at first glance, and provide a fascinating duality between colonization arising from rare genotypes and rare dynamics. We then proceed to quantify the relative likelihood of colonization from these two different sources, based on human cancer data to hand, and show plausibility that metastatic colonization can equally well arise purely by chance, not requiring mutations of key genes. 
Methods

We are interested in the probability and dynamics of metastatic colonization, namely, the process by which a DTC, having successfully become located in a secondary tissue environment, begins proliferation to attempt to create a new tumour (a micro-metastasis). We will approach this problem as a stochastic rather than a deterministic process, namely, we ascribe to the founding cell a rate (probability per unit time) of cell death μ and, similarly, a rate of cell division r. The key parameter in the model is the ratio of these two rates, which we denote by θ = r/μ, and which we term 'fitness'. It is very important to understand that the fitness is a property of a given cell in a given environment (i.e. not a property that can be ascribed to a cell per se). Later in our analysis, we will consider heterogeneity of fitness in the DTC population in order to discuss how our model relates to Paget's classical seed and soil hypothesis [6], which is used to explain the correlation of primary and secondary sites across different cancer types. Given that we are interested in the early stages of secondary tumour growth, we can assume that the birth and death rates are independent of the number of cells in the colony (i.e. the growth and death rates are density-independent). Thus, the stochastic process defining our model is a linear birth/death process [14]. We assume that the realizations of this process, i.e. the seeding events occurring in secondary sites by DTCs, are statistically independent. We also assume that progeny cells have the same fitness as the founding cell. This assumption will clearly break down, with interesting consequences, for a long-lived colony of cells, but will be a good approximation for the nascent micro-tumours of interest here. We note here that there is prior work on applying stochastic processes to tumour growth, but these have all been concerned with growth arising from high fitness cells in a microevolutionary framework [15][16][17]. The fitness parameter θ encapsulates all the factors affecting the proliferative potential of single cells in a particular microenvironment (vulnerability to apoptosis, interaction with stroma and immune system, etc). For cases when θ > 1, a founding DTC is more likely to divide than die, as are its progeny, and so it is likely that a tumour will arise, following an approximately exponential growth trajectory. Conversely, if θ < 1, cells tend to die more often than they divide, and the overwhelmingly most likely outcome of each seeding is that the nascent secondary tumour will rapidly succumb to extinction. However, rare events can occur in which a nascent colony can achieve a critical size, which, as we describe shortly, we define to be quasi-stable. A primary concern in this paper is to calculate the absolute probability and associated temporal statistics of these rare dynamical events. If cells remain poorly adapted to their environment, with θ < 1, every attempt to form a sustainable tumour will ultimately fail given long enough time, since zero cells is the only absorbing state for this stochastic process, as described. Biologically, there are at least two ways in which a fragile tumour arising from a low-fitness founding cell can evade extinction. First, by reaching a critical size N that affords a less hostile internal microenvironment to the cells.
In a tumour of this moderate size, cells internal to the tumour, which are not in direct contact with the hostile microenvironment, will have a higher effective fitness, θ′ > 1, rendering the tumour quasistable and allowing essentially deterministic growth thereafter. Second, given that cancer cells are genetically unstable, each division event carries a chance to produce daughter cells of a slightly different genotype, which after several generations provides the possibility, through microevolution, of adapted cells arising in the tissue niche. In either case, there is a critical threshold that new colonies have to surpass in order to become stable. The first (microenvironment) case requires a critical colony size N to be reached, while the second (microevolution) case requires a critical number of division events to allow significant diversity to arise and thus the possibility for adaptation. We shall address here the first case, assuming cells to retain the fitness of the founding cells, which already provides rich and interesting results. The statistical properties of the second case are more complicated and are not within the purview of this paper. In brief, the basic assumptions of our birth/death model are: • All colonization attempts start with single DTCs and are independent of each another. • All cells within a nascent tumour have the same fitness. Dynamical adaptation and microevolution are not considered. • Cell divisions and deaths occur stochastically according to Poisson statistics. • Only small nascent metastases, not limited by nutrients, are considered. As such, the rates of cell division and death are assumed independent of the cluster size. • For metastases arising from a low-fitness cell, there is a critical size N at which nascent tumours render a new internal microenvironment, shielding inside cells from the deleterious effects of the external tissue, and stabilising the stochastic growth. These assumptions are sufficient for us to formulate this model as a stochastic process which can be analysed using a linear master equation, as described below in some detail. At first glance one would imagine that a simple model such as this would have been studied in detail decades ago. To our knowledge this is not the case, although we note that Basan et al [18] studied a non-linear version of this model in a biomechanical study of metastatic growth, wherein forces within the host tissue act to oppose tumour growth until a critical mass is achieved. Similar models of stochastic colonization have been studied in island ecology, with the biological assumption that new individuals are pre-adapted to the new habitat (θ > 1), and thus most likely proliferate [19]. The interest of such studies resides in calculating the statistics of small population size extinctions wiping out strong, but fledgling colonies. This reasoning has also been applied to stochastic re-colonizations of pre-adapted metastatic cancer stem cells during treatment cycles [17]. We use well-known analytical methods of stochastic process theory [14] to construct and solve our model. Solutions for temporal statistics of interest can be calculated exactly, albeit after lengthy calculations, using generating functions and Laplace transform techniques. Furthermore, our results are fully supported by numerical calculations and simulations. These simulations provide us with explicit rare event realizations that are helpful in guiding our understanding of some of the rather counterintuitive results that we report. 
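As a concrete illustration of the rare-event realizations mentioned here, the model just described (a linear birth-death process with per-cell division rate θ and death rate 1 in units of μ, started from a single cell and stopped at extinction or at the critical size N) can be simulated directly with the Gillespie algorithm. The particular values of θ, N and the number of trials below are arbitrary choices made so that a handful of successful colonizations appear in a short run; they are not values used in the paper.

```python
import numpy as np

def gillespie_colonization(theta, N, rng):
    """One realization of the linear birth-death process (time in units of the
    death rate), started from a single cell, stopped at extinction or size N.
    Returns (reached_N, time)."""
    n, t = 1, 0.0
    while 0 < n < N:
        rate = (1.0 + theta) * n
        t += rng.exponential(1.0 / rate)
        if rng.random() < theta / (1.0 + theta):
            n += 1                      # division
        else:
            n -= 1                      # death
    return n == N, t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    theta, N, trials = 0.7, 15, 200_000
    hits = []
    for _ in range(trials):
        ok, t = gillespie_colonization(theta, N, rng)
        if ok:
            hits.append(t)
    hits = np.array(hits)
    print(f"estimated P_N : {len(hits) / trials:.2e}  ({len(hits)} successes)")
    if len(hits):
        print(f"mean time     : {hits.mean():.2f}  (cf. ~log(N)/(1-theta) = {np.log(N) / (1 - theta):.2f})")
        print(f"std of times  : {hits.std():.2f}")
```

Inspecting the handful of successful trajectories produced by such a run gives the intuition described later in the text: they are dominated by near-uninterrupted runs of division events rather than long meandering excursions.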
First, we consider the colonization process arising from a population of DTCs with the same fitness θ. These results will be used later when considering heterogeneity. We define the function P(n, t) to be the probability that the nascent tumour has n cells at time t, given that there existed a single cell at time t = 0. This probability increases when a division event occurs in the (n − 1)-state or a cell death event occurs in the (n + 1)-state. Likewise, P(n, t) will decrease when division or death events occur in the n-state. The rate of change of this probability function is given by the master equation (1), defined for n ⩾ 0, with the single-seeding initial condition P(n, 0) = δ_{n,1}. We have scaled time by the cell death rate μ for convenience. In the terminology of stochastic process theory, n = 0 is an absorbing state. Since our primary interest is in low-fitness cells (θ < 1), and we assume the existence of a critical size N, we will study the statistics of first passage by imposing a second absorbing state at n = N [20]. This is accomplished by limiting equation (1) to 0 ⩽ n ⩽ N − 2, and disallowing death events from the state n = N. It is helpful to introduce Q_N(t)δt as the probability of reaching the critical size N in the time interval (t, t + δt). For fitness values θ > 1 the existence of N is of little biological relevance, but one can still ask the corresponding statistical question: what are the first-passage temporal statistics for colonies arising from pre-adapted cells reaching size N? As such, we will use the same model to discuss nascent tumour growth for colonies arising from both low and high fitness founding DTCs, since the details of the calculations are insensitive to the size of θ. Solving the (N + 1) differential equations (1)-(4) yields P_N(t) and Q_N(t), from which all statistics of successful colonization can be derived. In particular we are interested in the following three quantities: (i) the probability of successful establishment of a colony of size N; (ii) the average time to reach size N in the subset of successful realizations; and (iii) the variance in the time to reach size N in the subset of successful realizations. The Laplace transform method is used to solve equations (1)-(4) in order to obtain Q_N(t), and then calculate these statistics. Because our primary interest is in non-preadapted cells (θ < 1), we shall present some results in a form that is most appropriate for this case, but the derivation and general results are valid for arbitrary θ values.

Probability of establishment

We provide details of the calculation of various statistical quantities, starting with the probability of establishment of a micrometastasis of size N. By defining P̂_n(s) to be the Laplace transform of P(n, t), the master equations and boundary conditions take the form of (N + 1) algebraic equations. Equations (11)-(14) form a closed set of (N − 1) equations and can be solved explicitly.
We now define the set of functions {H_n(s)} via the recursion relation (17); eliminating P̂_{N−2} using (14) and then (5), we obtain a simple expression for Q̂_N(s). It is subsequently convenient to work with a related set of functions {J_n(s)}, and using (16) we get the recursion relation (20) that they satisfy. Now, using the definition of the Laplace transform, the integrals of Q_N(t) with weights 1, t, and t² are respectively given by Q̂_N(0), −Q̂_N′(0) and Q̂_N″(0). Therefore the probability of successful establishment (equation (6)) is P_N = Q̂_N(0). Substituting in (22) we have our first main result, viz. the probability of establishment is given by

P_N(θ) = (1 − θ)θ^(N−1) / (1 − θ^N).   (26)

Mean and variance of colonization times

The temporal statistics can be obtained in a similar manner. From (21) we have that the first two moments of the colonization time follow from the derivatives Q̂_N′(0) and Q̂_N″(0). Differentiating (20) with respect to s and setting s = 0 one finds a closed recursion for the derivatives J_n′(0). Solving by iteration for J_n′(0) and using equation (27) gives an exact solution for the average time of colonization, equation (29). Following the same procedure, solving by iteration for J_n″(0), one finds after a lengthy calculation an exact solution for the variance in the time of colonization, equation (30).

Asymptotic results for large N

We are interested in micrometastases of moderate size, so it is useful to extract the asymptotic form of the temporal statistics for N ≫ 1. Equation (26) gives the probability of establishment of a tumour of size N for any fitness θ. For θ < 1, but not too close to unity (i.e. 1 − θ ≫ 1/N), this equation takes the simple exponential form

P_N(θ) ≃ (1 − θ)θ^(N−1).   (31)

Quite intuitively, the probability of establishment decreases exponentially for increasing N. For example, a micro-tumour of 20 cells with fitness θ = 0.5 has a probability of forming P_20(0.5) ∼ 0.5^20 ∼ 10^−6. Hence, one in a million progenitor cells will achieve this size for this fitness level. The expressions for the average and variance in colonization times are too complicated to easily infer the dependence on the fitness θ and critical size N. By recasting the summations as integrals and performing an asymptotic analysis for N ≫ 1 it is possible to extract the leading behaviour, which turns out to be quite simple. In these calculations, it is convenient to use an integral representation in order to explicitly perform the summations, which have the form of a geometric series. For the average colonization time we find, for θ < 1 and not too close to unity, the leading behaviour

⟨t_N⟩ ≃ ln N / (1 − θ),   (33)

up to an additive constant of order unity involving Euler's constant γ_E [21]. Because of the logarithmic dependence of the average colonization time on N, we can rewrite this expression as a simple exponential growth law, N ≃ exp(λ_eff ⟨t_N⟩), with effective growth rate λ_eff = 1 − θ (equation (34)). Thus, we find that the average growth dynamics is exponential with a rate anti-correlated to the fitness. In other words, the less fit the cells are, the more aggressive the colonization dynamics of (rare) successful tumours will appear to be. On first reading, this result is counter-intuitive. One might imagine that the rare dynamics leading to colonies of low-fitness cells would be slow growing, relying on repeated chance avoidance of extinction. Despite the huge combinatorial weight of such meandering trajectories, the statistically relevant successful trajectories are those which grow rapidly, relying on repeated birth events and a relative absence of death events. This extreme bias becomes ever more important as θ decreases, hence the more rapid successful trajectories for cells of lower fitness.
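Before moving on, the exponential suppression can be sanity-checked independently of the Laplace-transform machinery. Because the per-capita division and death rates do not depend on n, the embedded jump chain is a simple biased random walk (a division with probability θ/(1 + θ), a death with probability 1/(1 + θ)), and hitting N before 0 is a classical gambler's-ruin problem. The sketch below, written for this discussion with illustrative function names, evaluates that absorption probability and compares it with the low-fitness exponential form above.

```python
# Cross-check of the establishment probability via the embedded random walk.
import numpy as np

def establishment_probability(theta, N):
    """Probability that a single founding cell reaches size N before extinction."""
    if np.isclose(theta, 1.0):
        return 1.0 / N                         # neutral limit
    r = 1.0 / theta                            # ratio of death to division probabilities per event
    return (1.0 - r) / (1.0 - r**N)

def low_fitness_asymptotic(theta, N):
    """Leading exponential behaviour for theta < 1 and N >> 1."""
    return (1.0 - theta) * theta**(N - 1)

for theta, N in [(0.5, 20), (0.7, 30), (0.9, 50)]:
    print(theta, N, establishment_probability(theta, N), low_fitness_asymptotic(theta, N))
```

The θ = 0.5, N = 20 entry reproduces the one-in-a-million figure quoted above, and the agreement with the exponential form improves as N grows.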
Similar counter-intuitive results of this type are known in disparate fields, such as the 'bold play' strategy in gambling (successive 'all or nothing' gambling on high-odds games is an optimal strategy for rare but large wins) [22], and transition rate theory in chemistry to quantify spontaneous dissociation of chemical bonds (stronger bonds break rarely but explosively). The Freidlin-Wentzell theory of large deviations provides a general mathematical framework to further understand such rare-event dynamics, including the time-reversal duality discussed below [23]. We are obliged to ask whether this average time is a meaningful statistical measure. Perhaps the fluctuations in successful realizations are so great that the average time is a statistical oddity, with little bearing on the true nature of the dynamics. The variance of the distribution is informative in this regard. Performing an asymptotic analysis of equation (30), and after a lengthy calculation, we find that the variance in colonization times is given by

σ_N² ≃ ζ(2) / (1 − θ)²,

where the Riemann zeta function ζ(2) = π²/6 [21]. Here we see that for large N the variance is independent of the critical tumour size and decreases as the fitness is lowered. This means that successful colonization trajectories are effectively more deterministic (i.e. less noisy) for lower-fitness cells than for higher-fitness cells. This is consistent with the preceding discussion, in that the statistically dominant growth trajectories of low-fitness cells are strongly biased to repetitive birth events, and will thus have a low degree of statistical variation. In the limit of extremely low fitness, θ → 0, it is relatively straightforward to calculate the entire distribution of colonization times for N ≫ 1, through a general solution of the generating equation (20). One finds that the colonization times are distributed according to the Gumbel distribution (well known in extreme value statistics), centred at t* = ln N (which, according to equation (33), is also the average colonization time in this limit). Explicitly, one finds for the distribution of colonization times, conditioned on success,

p(t) ≃ exp[−(t − t*) − e^{−(t − t*)}].

This result suggests that the colonization time is drawn as an extreme value in samples of random variables with exponential distribution [24].

Numerical verification and stochastic realizations

Despite the existence of exact solutions and asymptotic analysis, given the counter-intuitive nature of the results it is reassuring to check them against a numerical integration of the original master equation (1) with the boundary conditions, equations (2)-(4). Integration was performed with a second-order Runge-Kutta scheme, and equations (6)-(9) were used to evaluate the statistics of interest. In figure 1 we compare the numerical solution to the exact results for the probability of establishment (26), the average time of colonization (29), and the standard deviation of the time of colonization (30). In each case we have perfect agreement as expected. We also compare a numerical evaluation of the exact results with the asymptotic analysis (for large N) in figure 2 for the average time of establishment (33) and the standard deviation of the time of colonization (35), and agreement is found, improving as N increases. Numerical generation of stochastic realizations of the birth-death process is useful to get better intuition for the actual realizations of successful colonization events.
This was achieved by implementing the Gillespie method [25] to generate many realizations of the birth/death stochastic process corresponding to the master equation (1). The exact results for the mean and standard deviation of establishment times are supported by these simulations, as shown in figure 3. Furthermore, we show some representative realizations of colonization for cells of two different fitnesses (figure 4), demonstrating explicitly that successful events arising from lower-fitness progenitors are indeed typically more explosive and deterministic in their dynamics than those for higher-fitness cells.

Dualities

Given that our calculations for temporal statistics are valid for any value of θ, it is interesting to compare our results for 'rare dynamics' (i.e. rare successful trajectories arising from cells with θ < 1) with results for 'rare genotypes' (i.e. successful trajectories arising from rare cells with θ > 1). So, we consider θ > 1 in the exact expressions for the growth statistics given in equations (26), (29) and (30). Taking the limit N ≫ 1 we find that the establishment probability approaches 1 − 1/θ, independent of N. Thus for colonization from high-fitness cells, the fraction of successful colonies is of order unity, as one would expect. Analysis of equations (29) and (30) shows an exact mapping between the temporal statistics of the high- and low-fitness regimes. For N ≫ 1, using this mapping with the asymptotic results for the temporal statistics found in the low-fitness regime, we have for the high-fitness colonization-time statistics

⟨t_N⟩ ≃ ln N / (θ − 1),  σ_N² ≃ ζ(2) / (θ − 1)².

Interestingly, comparing the leading terms in these expressions with those for the low-fitness colonies, equations (33) and (35), we can see that the expressions are identical if the effective rate 1 − θ is replaced by λ_eff = θ − 1 for θ > 1. The two solutions are 'dual' to one another for large N: the growth of a rare successful low-fitness colony, of fitness θ < 1, is statistically indistinguishable from the growth of a high-fitness colony of fitness (2 − θ) > 1. This duality is illustrated in figure 5, where the exact expressions for the mean and standard deviation of colonization times are plotted as a function of θ for θ ∈ (0, 2). Relating this mathematical result back to the biological context of metastatic colonization leads to the remarkable conclusion: the early dynamic progression of micro-metastases is indistinguishable for rare dynamics and rare genotypes. If one were to observe, in vivo, the early exponential growth of a micrometastasis, our results show that one would not be able to infer whether that event was seeded by a pre-adapted or non-pre-adapted cell. A distinction does occur for colony growth beyond the critical size N. Such colonies seeded by low-fitness cells are highly unlikely to sustain the rapid subsequent growth, and would enter a quasistable state in which adaptation and microenvironment reconfiguration will presumably enable further, and possibly slow, growth. By contrast, in colonies seeded by fit cells the critical size N is of little relevance, and exponential growth would presumably continue until access to nutrients becomes a limiting factor.

[Figure caption: a normalized two-dimensional histogram over (n, t), indicating the probability distribution of stochastic trajectories. The vertical axis represents the likelihood that a trajectory will hit the state n at time t. The surface crest denotes the most likely trajectory while the spread indicates the size of statistical fluctuations in the ensemble of trajectories. As fitness increases, typical trajectories take longer to reach the target, and histograms become broader, indicating less deterministic dynamics.]
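A stochastic-simulation counterpart of these results is straightforward to set up. The sketch below is a minimal Gillespie-style implementation written for illustration (it is not the authors' code, and the fitness, threshold and number of attempts are arbitrary example values): with only divisions and deaths, each step requires just an exponential waiting time and a biased coin flip.

```python
# Minimal Gillespie-style simulation of single colonization attempts (illustrative).
import numpy as np

def colonize_once(theta, N, rng):
    """Simulate one attempt from a single cell; return (reached_N, elapsed_time)."""
    n, t = 1, 0.0
    while 0 < n < N:
        t += rng.exponential(1.0 / ((theta + 1.0) * n))   # waiting time to the next event
        n += 1 if rng.random() < theta / (theta + 1.0) else -1
    return n == N, t

def successful_time_stats(theta, N, attempts, seed=0):
    rng = np.random.default_rng(seed)
    times = [t for ok, t in (colonize_once(theta, N, rng) for _ in range(attempts)) if ok]
    return len(times) / attempts, float(np.mean(times)), float(np.std(times))

# Low fitness: successes are rare, but their durations are short and tightly clustered.
print(successful_time_stats(theta=0.7, N=20, attempts=200_000))
```

Rerunning the same function at the dual fitness 2 − θ (here 1.3) gives successful-run durations that are statistically very similar, a direct numerical illustration of the duality described above.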
A duality of time-inversion is also implied in the exponential growth dynamics of non-pre-adapted cells given in (34). An initial tumour of size N comprising cells with fitness θ < 1 would most likely decay exponentially, with an effective decay rate of 1 − θ. The dynamics would be the time-reversed dynamics of our rare-event solution, a result reminiscent of reversible solutions in the Freidlin-Wentzell theory of rare-event dynamics. Yet another duality exists, in fact, if one considers the dynamics of an initial tumour of size N comprising high-fitness cells (θ > 1), and asks for the temporal statistics of the rare event that such a tumour vanishes. The probability of such an event will be exponentially small, and it will typically occur following an exponential decay, for the (time-reversed) reasons discussed at length above. This result implies that spontaneous remission of small tumours comprising fit cells, although rare, will occur rapidly and deterministically, an effect similar to the rare events studied here that would be interesting to pursue in future.

Heterogeneous populations

It is crucial to recognise that the fitness of a cell attempting colonization depends on both the cell phenotype and the nature of the secondary tissue. Clearly we expect significant heterogeneity in the millions of colonization initiation events happening during the progression of cancer, due both to the heterogeneity of cells leaving the primary tumour and the heterogeneity of secondary tissue environments in which colonization is attempted. During later stages of metastasis, genetic instability, adaptation and microevolution will induce cell heterogeneity within single metastatic tumours. Here, though, we are only concerned with the very early steps of metastatic colonization. Heterogeneity can then be described by introducing a distribution of fitnesses in the population of progenitor cells (i.e. those single DTCs attempting to proliferate to form a new colony). We denote this distribution P_het(θ). Since this distribution involves properties of both DTCs and secondary tissue sites, it describes the attempted events of colonization throughout the body over the time course of metastatic disease. For larger values of fitness, P_het(θ) will be a rapidly decreasing function of θ, because progressively fitter (or better adapted) cells should be progressively more rare in order to account for metastatic inefficiency. In this regard, it is helpful to introduce ϵ as the fraction of pre-adapted cells in the population,

ϵ = ∫_1^∞ P_het(θ) dθ.

The overall probability distribution for successful colonization events in the body is then given by the product of P_N(θ) and P_het(θ). We will denote this distribution by P_succ,

P_succ(θ) = P_het(θ) P_N(θ),

where we have used equation (26) for P_N(θ). It is convenient to rewrite this expression in exponential form, which makes clear that P_succ(θ) is maximal at an intermediate fitness θ_0: the contribution of very fit cells is suppressed because such phenotypes are rare, while the contribution of very unfit cells is suppressed because their successful dynamics are exponentially rare, the cells being simply too unfit. Therefore θ_0 represents the optimal balance between the rarity of initial phenotypes and the rarity of their corresponding dynamics to maximally contribute to nascent metastases. As such, if θ_0 < 1, the colonization process is dominated by rare dynamics (figure 6(a)), whereas if θ_0 > 1 colonization is dominated by rare genotypes (figure 6(c)). We can use the condition θ_0 ≡ 1 to define a phase boundary of the parameter space.
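The optimal fitness θ_0 can be located numerically once a form for P_het is chosen. The sketch below anticipates the pragmatic choice made in the next paragraphs: a pure exponential P_het (the s = 1 case), with its rate fixed by requiring that the weight above θ = 1 equals ϵ, combined with the establishment probability in the closed form given earlier. The specific values of ϵ and N are arbitrary examples, so the exercise is indicative only.

```python
# Illustrative sketch: locate the fitness theta_0 that maximizes
# P_succ(theta) = P_het(theta) * P_N(theta), assuming P_het(theta) = b*exp(-b*theta)
# with b chosen so that the tail above theta = 1 integrates to epsilon.
import numpy as np

def p_establish(theta, N):
    r = 1.0 / theta
    return np.where(np.isclose(theta, 1.0), 1.0 / N, (1.0 - r) / (1.0 - r**N))

def optimal_fitness(epsilon, N):
    b = np.log(1.0 / epsilon)                  # exp(-b) = epsilon fixes the rate parameter
    theta = np.linspace(0.05, 3.0, 20001)
    p_succ = b * np.exp(-b * theta) * p_establish(theta, N)
    return theta[np.argmax(p_succ)]

for N in (15, 30, 60):
    theta0 = optimal_fitness(epsilon=1e-6, N=N)
    regime = "rare dynamics" if theta0 < 1.0 else "rare genotypes"
    print(f"N = {N}: theta_0 = {theta0:.2f}  ({regime})")
```

With these example numbers the small-N case falls in the rare-dynamics regime and the large-N case in the rare-genotype regime, and shifting ϵ by orders of magnitude moves the crossover only modestly, consistent with the weak logarithmic dependence noted below.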
This can be written as a condition on the location of the maximum of P_succ(θ) (equations (45) and (46)). Combining equations (45) and (46), we find that the phase boundary condition provides a relationship between the threshold metastasis size N and the distribution of heterogeneity P_het (equation (47)). To proceed further, it is necessary to make an explicit choice for the form of P_het. Unfortunately this distribution is not available experimentally, but one can make a pragmatic choice nonetheless. First, the details of the distribution for low fitness values are not very important since, from equation (47), one will be differentiating the distribution at the threshold fitness value of unity. Second, due to the experimental fact that metastasis is a very inefficient process [4], a reasonable assumption is that the tail of the distribution decreases exponentially. Such an exponential form could arise, for example, if accumulating discrete random events were required to endow cells with higher fitness values; an idea consistent with popular conceptions of metastatic potential of cells arising from successive mutations of key genes [26]. We therefore take P_het to have a generic exponential form (equation (48)), characterized by a rate parameter b and a shape parameter s. The normalization condition (43) fixes the prefactor, which involves the Gamma function Γ(z) [21]. The second integral condition (42) relates b and s to ϵ. Since high-fitness cells are rare, we have ϵ ≪ 1. This enables us to give an approximate solution to the resulting transcendental equation (49); note that this solution is exact if P_het is a pure exponential function (s = 1). It is now straightforward to insert the explicit form of P_het(θ) (equation (48)) into the condition (47) to derive a relationship between N and ϵ. Taking s = 1 for simplicity, we obtain a relation that demarcates the two domains in the (N, ϵ) parameter space, as shown in figure 6(b). Specifically, rare dynamics dominates when the fraction of high-fitness cells is small and the critical size N is not too large. The values of N defining the boundary are only weakly (logarithmically) dependent on ϵ.

Geometrical considerations

Statistical arguments to follow will allow us to estimate relevant sizes for N. First, though, in this subsection we provide a simple argument for the lower limit on the size of a nascent metastatic colony which will allow a protective microenvironment for non-surface cells within the cluster. This argument is mainly illustrative, relying purely on geometrical considerations. The true efficacy of a microenvironment depends on many factors, such as the biochemical and signalling milieux of the tumour, as well as the morphology of its constituent cells. In addition, for the cluster to be stable, the rate of division of core cells must be high enough to at least replace surface cells which are lost to cell death. This balance will depend on the relative fitnesses of core and surface cells, which is a biological aspect we do not account for in the geometrical arguments below. Assuming that cells in the metastasis are spherical and of identical sizes, the minimal number of cells required to shield one single core cell is 12, since this is the 'kissing number' for spheres in three dimensions [33]. If indeed a surface of 12 cells provides a sufficiently beneficial microenvironment to the single core cell, one can assign this core cell a high fitness. The subsequent dynamics are, however, not particularly robust, since the progeny of a single high-fitness cell are at significant risk of stochastic extinction. A more robust core would require more than one cell.
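Before turning to the packing relation stated in the next paragraph, a rough numerical version of this shielding argument can be written down with a concentric-sphere picture: take the core as a close-packed sphere of cells and the shield as the shell one cell diameter thick around it. This is our own illustrative reading of the geometric estimate rather than the paper's exact relation, with the packing fractions 0.52 and 0.74 being the random and optimal sphere packings referred to in the text.

```python
# Rough concentric-sphere estimate of the number of surface cells needed to
# shield a given number of core cells (illustrative reading of the argument).
def surface_cells(n_core, packing):
    # n = packing * (R / r_cell)**3 for a sphere of radius R filled with cells of radius r_cell
    R_core = (n_core / packing) ** (1.0 / 3.0)   # core radius, in units of the cell radius
    R_total = R_core + 2.0                       # add one cell diameter for the shielding layer
    return packing * R_total**3 - n_core

for packing in (0.52, 0.74):
    for n_core in (2, 5, 10):
        print(f"packing = {packing}: {n_core} core cells need ~{surface_cells(n_core, packing):.0f} surface cells")
```

For 2-10 core cells this gives a few tens of surface cells, broadly in line with the 25-50 surface cells quoted in the next paragraph.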
Using simple geometrical considerations, and assuming a large cluster, one finds that the number of surface cells N s required to cover N i internal core cells is given by the relation where e is the packing efficiency, a value that typically varies between ∼ e 0.52 for randomly packed spheres and ∼ e 0.74 for Gaussʼs optimal sphere packing. According to this formula a core comprising 2-10 cells (allowing a more deterministic growth dynamics) would require approximately 25-50 surface cells ( figure 7). A more accurate estimate can be gained from numerically packing spheres according to face-centred-cubic close packing, which produces somewhat lower estimates that equation (52). In this case 20-40 surface cells are required to enclose 2-10 core cells. Thus, on geometric grounds, the minimum stability threshold for a protometastasis, with greater than one core cell, is around ∼ N 20 -50 cells. It is possible that the stability threshold could be smaller if surface cells are afforded some degree of protection also. Since cells are likely to adapt their morphology to that of their neighbours, via adhesive interactions, one can think of surface cells in the tumour forming a contiguous surface layer having only one facet in direct contact with the secondary tissue. This partial screening may be sufficient to raise the fitness of surface cells above unity, in which case all cells in the protometastasis (both core and surface) would be capable of deterministic growth. In this case, the minimum stability threshold might be as small as approximately 12 cells. Connections to human cancer To proceed further, in discussing the relevance of our theoretical results, it is necessary to make closer contact to human cancer, and infer what we can from the dearth of empirical data on the dynamics of metastatic colonization in humans. Indeed, little is known about the properties of DTCs in human cancer. Even measures such as the typical rate of shedding of cancer cells from the primary tumour are poorly known. As such, physiological parameters describing the number and diversity of DTCs will be subject to high uncertainties. Fortunately, for our purposes, it will transpire that only the logarithm of these parameters will be relevant to calibrating our model, so that approximate orders of magnitude in the physiological parameters will suffice. For concreteness, consider individual DTC colonization attempts occurring over a period of one year in a hypothetical cancer patient who is at risk of metastatic disease, but not in late stage disease from previous successful metastatic colonization. The total number of such attempts, which we denote by M, will be very large, ranging from tens of thousands to hundreds of millions. The typical concentrations of CTCs found in patients with metastasis disease can be estimated at about one to ten cells per millilitre of blood [11,27], which corresponds to about 10 4 -10 5 total CTCs in the human body at any given time. Although a fraction of CTCs will die in the bloodstream, we expect that a significant fraction are rapidly arrested and extravasated from the vascular system as shown by Luzzi et al [4] (note, this is an extrapolation from a mouse model), and most of them are not in the blood stream for more than 24 hours [11,28]. Thus, over a one year period we can expect approximately 10 6 -10 7 different DTCs to be attempting colonization. This estimate appears low when compared to other experimental sources. 
For example, Butler et al [29] showed that approximately one million cells per day are shed per one gram of tumour in a rat mammary carcinoma model. If this figure were translated to humans, a one gram tumour would yield over two orders of magnitude greater number of DTCs than quoted above. A lower bound for this number comes from studies of DTCs in bone marrow of prostate tumour patients. In such cases it is estimated that 10 4 -10 5 DTCs may be in evidence, most likely in a dormant state [30]. Given this great uncertainty, for our purposes we take M in the range 10 4 -10 8 . Assuming that rare dynamics is the dominant mechanism for colonization, we can crudely estimate a range of values for N as follows: let m be the number of colonization attempts in the relevant range θ Δθ ± 0 0 where, ultimately, successful events are likely to emerge. It is not possible to estimate systematically what fraction of the M attempts will contribute to m, so given the enormous range of uncertainty in M, we will assume that the range of m is equally large and uncertain. We denote by K the number of m attempts resulting in a metastatic colony. The great inefficiency of metastasis implies that successful colonization events are exceedingly rare [4] and as such K will be orders of magnitude smaller than m. From the definitions given above we have the direct Assuming that θ 0 is not too close to unity, equation (31) gives , which together with (53) allows us to express the stability threshold N in terms of the other quantities, viz. where we have used in the second step the fact that ≫ m K. On varying m within the range 10 4 -10 8 and θ 0 within the range 0.25-0.85 one finds that N takes a modest range of values, between 10 and 100. The fact that the range of N is relatively well-defined given the enormous uncertainty in the number of attempts and the fitness of DTCs is due to the logarithmic dependence of N on these parameters. There is a remarkable concurrence of this size range with both geometrical reasoning and biological observations. As discussed in the previous subsection geometrical estimates on the size of a tumour which would enable protection of the core from a hostile microenvironment yields a range for N of about 20-50 cells. Observations of induced metastatic colonization in zebrafish reported that clusters as small as 15-30 cells were able to induce angiogenesis for subsequent growth [31]. There have also been reports of stable metastatic cell clusters in mice of size 30-60 cells [32]. We believe the congruence of biological observations with geometrical considerations and our rare event analysis in identifying 10-100 cells as a threshold size for stability argues for closer experimental examination of such minute clusters, which we term 'proto-metastases'. Absolute probabilities and timescales for colonization We have used equation (54) to determine the possible range of N given crude estimation of physiological ranges of θ 0 and m, finding this range to be 10-100. We can take a different strategy and use (54) to provide a range of values of θ 0 assuming a value for N motivated by biological data, e.g. N = 30 as discussed above. Thus, taking N = 30 in (54), and the physiological range ∼ m 10 4 -10 8 , we find that θ 0 takes values in the rather confined range 0.55-0.75. The absolute probability of colonization for a given event will be trivially given by K/m, and thus will lie in the large range of 10 −8 -10 −4 , which simply reflects the large uncertainty in the number of DTCs. 
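The logarithms that make this estimate robust are easy to exhibit explicitly. Assuming that establishment at the dominant fitness is governed by the leading exponential of equation (31), so that roughly K of the m relevant attempts succeed when K/m ~ θ_0^N, inverting gives N ≈ ln(m/K)/ln(1/θ_0), essentially ln m/ln(1/θ_0) for K of order one. The sketch below scans the quoted physiological ranges (the grid values are our own choices spanning those ranges) and, anticipating the next paragraph, also converts the leading-order colonization time for N = 30 into days.

```python
# Order-of-magnitude scan: stability threshold N and the implied colonization
# time, over the quoted ranges of attempt number m and dominant fitness theta_0.
import numpy as np

def threshold_size(m, theta0, K=1.0):
    """N such that roughly K of m attempts at fitness theta_0 reach size N (K << m)."""
    return np.log(m / K) / np.log(1.0 / theta0)

def mean_time_days(N, theta0, death_time_days=1.0):
    """Leading-order conditional mean time to reach N, taking ~1 day per death time."""
    return death_time_days * np.log(N) / (1.0 - theta0)

for m in (1e4, 1e6, 1e8):
    for theta0 in (0.25, 0.55, 0.85):
        print(f"m = {m:.0e}, theta_0 = {theta0:.2f}:  N ~ {threshold_size(m, theta0):.0f}")

for theta0 in (0.55, 0.65, 0.75):
    print(f"theta_0 = {theta0}:  <t_30> ~ {mean_time_days(30, theta0):.1f} days")
```

Across the whole grid N stays within roughly 10-100 cells, and for N = 30 the implied colonization times span about one to two weeks, matching the figures discussed in the next paragraph.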
Using the range of θ 0 and N = 30 we can estimate the average time for a given successful colonization event using the leading term of equation (33). This equation gives the average time in units of the mean time (the inverse of the Poisson rate) of cell death, which we crudely estimate for non-dormant DTCs to be of order one day. One then finds 〈 〉 t 30 in the range 7.5-13.5 days, i.e. between one and two weeks. This time-scale of initial growth of a successful protometastasis is rapid compared to the time interval separating successful events, and relatedly, the typical time-scale of years over which human metastasis clinically progresses. Conclusions We have presented results arising from a linear birth-death process designed to interrogate the notion that metastatic inefficiency in human cancer may be due to rare dynamical events rather than rare pre-adapted cells. This is a fundamental issue in interpreting and combatting metastatic disease: does colonization of secondary tissue sites arise from deterministic growth of rare pre-adapted cells or from rare stochastic dynamics of common non-pre-adapted cells-the special forces versus infantry in our military metaphor. We have shown, in the context of rare events, that rare dynamics is equally plausible to rare genotypes, and that the dynamics of emerging colonies from each mechanism are, in their early stages, statistically indistinguishable. Despite the paucity of relevant human data, we have been able to provide estimates on key aspects of rare dynamics coloniation-such as the critical colony size N and the mean time of growth-due to the logarithmic dependence of these quantities on the poorly known physiological parameters. Our estimate for N, being in the range 10-100 cells, is, remarkably, in the same range as independent estimates from geometrical considerations and observations of proto-clusters in mice and zebrafish models. Given the rare event nature of these concepts, direct experimental validation is challenging. The following two experimental avenues might provide such validation. Rare colonization events leading to proto-metastases could occur in proximal stromal tissue surrounding the primary tumour, as well as in distant secondary sites (which has been the focus of the main article). Exhaustive study of high resolution histology sections of such tissue removed along with a primary tumour in a cancer patient would be a means of identifying these stable nascent colonies. If our model is correct one would expect to see a number of tiny metastatic clusters of size 10-100 cells. If only pre-adapted cells are causative agents of metastasis, there will be no significance to this size range, and the size distribution of micro-metastases will show no signature in this range. Better would be high-resolution examination of tissue from secondary sites in a cancer patient with a small recently diagnosed primary tumour. For example, a biopsy from the lung or liver in a breast cancer patient who does not yet have a diagnosis of metastatic disease. However, there are serious medical and ethical concerns about obtaining such tissue samples. An in vitro alternative is an experiment inspired by the classic Fidler experiments [6,8]; to culture a large number of individual cancer cells from an in vivo heterogeneous tumour (preferably from a human patient), but in relatively unfavourable growth conditions. Presumably only a small number of replicates will yield a significant new colony. 
To determine whether these colonies are due to rare pre-adapted cells or rare dynamics, one would attempt to culture cells from these colonies singly under the same conditions. If a large fraction succeed, then one is observing 'special forces', if a large fraction fail, then one is observing 'infantry'. We must emphasise here that it is imperative that cells are removed from the first round of successful colonies very early on before adaptation has occurred, which can effectively transform 'lucky infantry' into special forces for the given metastatic milieu. In summary, successful establishment of metastases from low-fitness cells is (i) exponentially rare, (ii) explosive, and (iii) effectively deterministic. As a result of statistical duality, if one were to observe the early dynamics of only successful realizations of colonization arising from low fitness cells, they would resemble perfectly the deterministic, exponential dynamics of pre-adapted cells (θ > 1), leading one to the erroneous conclusion that successful tumours arise from cells with a rare genotype that enabled them to have positive fitness in the secondary tumour site. This duality provides a clear warning in interpretation of metastatic growth statistics-rare dynamics can be as likely as, or more likely than, rare genotypes in causing colonization, and yet can appear phenotypically similar in early stages. Our results are in accord with the seed and soil hypothesis, in that fitness is a key parameter within our model, and fitness, as we have repeatedly stressed, is a co-function of a given DTC (from a given subclone of the primary tumour) in a given secondary tissue environment. The empirical correlation between primary and secondary sites can be interpreted in the context of the rare dynamics model in terms of particular environments providing DTCs with lesser degrees of exponential difficulty of colonization than others. While colonization from rare genotypes provides a rationale for searching for novel drug targets aimed at the particular genetic mutations or phenotypic adaptations that allow such rare cells to be pre-adapted to secondary sites, colonization from rare dynamics, occurring by chance, will require an entirely different paradigm of therapy, presumably more generic than targeted in nature. On the positive side, our results show that the rare dynamics is exponentially sensitive to both the critical cluster size N and the optimal fitness θ 0 . Therefore treatments that can increase N and/or lower θ 0 , even marginally, will have a significant impact in lowering the overall probability of successful colonization, directly leading to increased periods of an absence of metastatic disease. With particular regard to targeting the critical cluster size, it will be worthwhile to connect the purely demographic analysis in this paper to more detailed cell biological and biomechanical studies of the microenvironment, such as the model of Basan et al [18], in which the initial growth of metastases is hampered by homeostatic pressure from the surrounding tissue, essentially rendering all progenitor cells 'unfit', and leading to dynamics analogous to nucleation. We end with three speculative remarks. First, although this paper has been concerned with metastatic colonization, this entire discussion could be recapitulated in the context of primary tumour initiation. 
Again, the standard model is rare genotypes (key successive mutations allowing a particular cell to clonally expand), whilst rare dynamics would describe a process of aberrant proliferation against the odds, which very rarely leads to a critical tumour size in which the nascent colony is less fragile due to defining its own new microenvironment. Second, whether one speaks of rare dynamics in terms of metastatic colonization or primary tumour initiation, the rare event statistics show that successful growth will be very rapid. This explosive growth may enable such fragile colonies to effectively evade immune response. We estimated a time scale of days for growth to a colony size of tens of cells, and it would be interesting to explore this time scale in the context of immune response. Third, and last, our analysis of colonization is not limited, in terms of the modelling, to cancer. Rare dynamics, as a mode for non-pre-adapted individuals to reach critical size in a hostile environment, could find application in infectious disease dynamics within the body, as well as in more traditional areas of modelling such as ecological colonization.
Practical Derivations of Fermion and Gauge Boson Reduction Formulae in Curved Spacetimes LSZ-type reduction formulae are derived for gauge fields and fermions in curved spacetime. The formulae are derived using a conserved current method applicable also to flat spacetimes. The method generalizes to more general quantum field theories. The formulae are then applied to a few problems to illustrate their use. I. INTRODUCTION It is well known that calculations in perturbative quantum field theory in curved spacetimes are exceedingly difficult.The propagators are hard to determine, and quantization is impossible for arbitrary spacetimes; particle interpretations can only be established in very few time-dependent spacetimes [1] [2].The existence of the S-matrix is also dubious, though can be proved in some spacetimes (notably static spacetimes and a restricted class of non-static ones) [3]. Nevertheless, in some spacetimes, perturbative calculations are possible.Various Robertson-Walker models, important in cosmology, are examples of a metric in which mode functions can be determined [4].In principle, perturbative calculations can be performed using an Lehmann-Symanzik-Zimmerman [5]-type formalism [6], or by using the Schwinger-Keldysh formalism [7].Calculations can be performed either in the in-out formalism in which the propagators are calculated as the matrix element between two vacuums or the in-in formalism, where one picks either the in or out vacuums and calculates expectation values using only that.The calculations usually are done in the in-in formalism, since it seems to more easily lend itself to physical interpretation [7], and in any case the calculations are often easier [8]. Our contribution will be to present the reduction formulae for fermions and gauge fields in globally hyperbolic curved spacetimes, suitable for use in either the in-in or in-out formalisms.Such reduction formulae have previously been published for scalar fields [1,6], but to our knowledge not for fermions or gauge fields.We present two ways to derive the reduction formula with the hope that the second way is more technically expedient than what is typically found in textbooks on quantum field theory.We will also apply the formulae to a few topical systems to illustrate their use. In the following, we will assume that the model under investigation is mathematically well-defined.The relevant criterion is the existence of the S-matrix; for the Smatrix to exist, the Bogoljubov transformation between the different vacua of the theory must exist.This essentially requires that the sum of the Bogoljubov coefficients converges; full details can be found in [2].We also require space to be globally hyperbolic and asymptotically stationary, such that it is possible to define vacua for the in and out states; more details in [6].We point out that if the Bogoljubov transform cannot be found, then there is no unitary transformation between the in-and out states and the S-matrix does not exist.In that case, we cannot hope to do any calculations.We therefore limit ourselves to the class of spacetimes in which the S-matrix exists.In particular, it is known to exist for the spacetimes used in IV. In the present paper we study the various types of fields in the curved space time, present novel derivations and formulae for spinors and gauge fields, and discuss their usage in practical calculations. A. 
Fermions In curved spacetime with metric g µν , the gammamatrices is modified to satisfy the anti-commutation relations We then define the curved space gamma matrices with the help of the tetrads e a µ .The details may be found in [1].Taking advantage of the principle of equivalence, we can set up an inertial coordinate system ξ at every point such that the tetrad is given by its relation to the general curved coordinates: Then, We have adopted the notation that latin indices refer to inertial coordinate systems, and parentheses refer to flat spacetime gamma matrices.The covariant derivatives ∇ µ are then given by and the Dirac conjugate is defined as ψ = ψ † γ (0) .Here, the Γ ν νµ refers to the Christoffel symbol, whereas Γ µ is the spin connection, explicitly Quantization proceeds in analogy to the flat spacetime case.One sets up the equal-time anti-commutation relations where π is the conjugate momentum.We then write the field operator as where s is the helicity index and f s and g s are the mode functions.Further details are available in standard textbooks, for example [1]. With the preceding notations the Dirac Lagrangian in curved spacetime (see e.g.[1]) leads to the well-known equation of motion B. Gauge fields The free Lagrangian of a gauge field in curved spacetime is given by with ∇ µ the usual covariant derivative.This expression of free, quadratic Lagrangian is suitable for all gauge components of any gauge field, commutative or not.To avoid irrelevant indices, we omit them, or consider only one Abelian gauge field.Thus the equation of motion of the free gauge field reads as Moreover, in this article, we work in the covariant Lorenz gauge ∇ µ A µ = 0, a straightforward generalization of the flat space-time case.We assume that corresponding calculations in some other gauge are also possible, but we have not checked this. III. REDUCTION FORMULAE IN CURVED SPACETIME A. General reduction We first deal with the common generic part of the formula, which only depends on the Bogoljubov coefficients and not on the particular type of the field.Note also, that all following calculations should in principle be done using wave packets instead of sharply defined particle states (unnormalized Fock space states), but as in the flat spacetime case, this omission does not cause any difficulties (see almost any standard QFT textbook, e.g.[21]).So, we have omitted writing out the wave packets to save space and highlight the essential parts of the derivations.Moreover, we have suppressed all irrelevant indices, such as helicity indices, gauge group indices etc. to keep the formulae as simple as possible. We suppose that a, a † are the annihilation/creation operators associated with the in-vacuum and b, b † with the out-vacuum.To these operators are associated mode functions f (in-vacuum) and g (out-vacuum).We suppose, that the initial time is −∞ and the final time is ∞.The sets {p i } and {k i } denote the outgoing and incoming four-momenta, respectively. Then we are interested in reducing the matrix elements to the expectation vacuum values of some well defined operator of the fields.To that we need the Bogoljubov transformations, relation between in-and out-operators, which are given by [1] Here the sum should be understood in a generalized sense; if the spectrum is continuous, they may be re-placed by integrals.Using the following notation, where F is a hitherto unknown functional of the mode functions [22], we are able to write the matrix element in a reduced form. 
We express (15) as and writing from ( 18) we obtain Next we combine the Bogoljubov transformations ( 16) and ( 17) and apply (19) to the latter annihilation operator, with the aim of getting something we can use to operate on the right-side −∞ in-vacuum.We get which finally leads to the formula having the appropriate annihilation and creation operators: This is the single particle reduction from the right. The same steps using give immediately single particle reduction from left: In the corresponding in-in calculation, the difference is that the parts of ( 29) and ( 27) dependent on the Bogoljubov coefficients vanish as the in-and out vacuums are the same. To get the desired form of the reduction formula we need to find the functional F [f ].For scalars F is well-known and it is given by where K x = + m 2 is the curved spacetime Klein-Gordon operator with the curved spacetime D'Alembertian [6], and w is the positive energy mode function.Next we derive the reduction formulae for spin 1 2 and spin 1 fields. Derivation by EOM manipulation The reduction formula for fermions is not previously found in the literature.We derive it in two ways.First by brute force, direct calculation commonly found most textbooks.Secondly using conserved currents.The comparison between the two methods emphasises the relative simplicity of the latter method. The Lagrangian (11) leads to the equation of motion for a massless fermion given by with the covariant derivative satisfying equations ( 5) and (7).The inclusion of (Dirac) mass does not change the final formula, but merely make the intermediate expressions lengthier; it will be evident by the end that there is no essential difference in the calculation.We choose the inner product [23] and we are looking for the difference (18) between the creation/annihilation operators in the far past and future. When the field ψ has a mode expansion [24] ψ respect to an orthonormal basis of modes u s k (x) (and v s k (x)) satisfying the equation of motion (31) we have when the mode functions are orthonormal with respect to the inner product (32).We will henceforth suppress the helicity indices s, because they are of no consequence for the following calculation. Let us first use the fundamental theorem of calculus in operator form: Combining it and (34), we get From the equation of motion (31) we get We insert this to the second term in the square brackets of the eq.(36), obtaining Taking into account the integration of the term it can be rewritten as Where the second and third equalities follow from sim-ply taking the derivatives and applying Gauss' theorem. Inserting the result in to eq. (36 Now we can use eq.( 7) to modify the first term in this expression: Finally, using (41) in eq. ( 40) This result generalises the corresponding flat space formula, as expected.Using the massive fermion field with a mass m, we get the result where we have replaced the omitted helicity indices: the derivation is completely independent of them.The corresponding calculation for d s k operators follows the same pattern. 
Using a conserved current Let us now introduce the conserved current method as a way of finding reduction formulae using conserved currents.The idea is to find a conserved current and use it to find an inner product with respect to which the mode functions are orthogonal.This inner product -taken as the zeroth component of the conserved current -is then used to derive the reduction formula in a simple manner.The reference [25] provides further details on the requirements of inner products in relativistic quantum mechanics. We now find the appropriate inner product with respect to which the mode functions are orthogonal.In curved spacetimes, we should keep in mind that the words "appropriate inner product" do quite a bit of heavy lifting: inner products in QFT are not unique, since it is possible to explicitly construct unitarily inequivalent inner products [2].Unitarily equivalent inner products give the same physical results, but inequivalent ones may not; the inner product has to be fixed by some other method, like experimental data or defining the inner product to have the appropriate flat spacetime limit.Finding an appropriate structure might be difficult in the most general case, but it is possible for fermions and gauge fields. We find an appropriate inner product by considering the global and infinitesimal phase transformation ψ → ψ ′ = e iχ ψ ≈ (1 + iχ)ψ and using the the inner product determined by the conserved Noether charge.This guarantees that the inner product is conserved in time, independent on the time-slice, and thus allowing the sotr of probability interpretation mentioned in [25].In the following, we assume that the spacetime is globally hyperbolic and that it is possible to choose appropriate coordinates such that x 0 marks the time direction.Any other coordinate system would work, but this is the most convenient one. Starting directly from a standard expression for the change of the action where V a space-time volume bounded by two spacelike surfaces: ∂V = σ 1 ∪σ 2 .The last equality is due to the global U(1) symmetry of the Lagrangian (both interacting and non-interacting, for the interactions considered here).The index a here runs over the components of the fermion field and its Dirac conjugate.We can calculate the current (second bracketed term) directly: and if ψ and ψ are solutions of their equation of motion, then the first bracketed term in (44) is zero.Applying Gauss' theorem as usual to (45) and using (44) we get Here n µ is a future-oriented unit vector, while the spacelike surfaces σ i define a foliation of the spacetime.For the second line we have used our special coordinate system and the fact that the space is globally hyperbolic and χ = 0.This quantity is clearly conserved in time.Now we can replace the field appearing in the Dirac conjugate ψ(x) by completely independent spinor field ψ ′ which has same gauge transformation and obeys the same equation of motion as ψ.The current that we get is equally well conserved and has the form This gives the general form of the inner product determined by the phase transformation, eq. ( 32). 
Replacing the field ψ ′ by a mode u k , using Gauss' theorem and knowing that u k is a solution of the equation of motion but ψ is interacting, where σ 1 and σ 2 are the constant time surfaces, which are set to limits t → ±∞, correspondingly.The first equality follows from Gauss' theorem and the second from (34).The equation (49) provides the functional F which can then be plugged in to equations ( 27) and ( 29), thus providing us with a reduction formula.The latter procedure is completely general: if a conserved current is available as a sesquilinear inner product, we can use it to derive a reduction formula.Even if the current is not of the Noetherian, we can still use the procedure, as we will see presently. The inner product we have used here is by no means unique, which is a typical situation in quantum field theory.Let us consider an example of another current we could conceivably use for an inner product in the massless fermion case.The transformation ψ → ψ ′ = e iχγ µ γ 5 ψ (with curved γ µ and γ 5 ) generates another conserved Noether charge and inner product, and leads to the functional formula This is the same functional as previously except with a redefinition u → γ 5 u.If u satisfies the EOM, then so does γ 5 u as can be easily checked: where in the last equality we used the covariance properties of the curved spacetime γ 5 .The operators defined with this inner product are evidently not quite the same as the ones in the foregoing calculation, but nevertheless they are clearly unitarily equivalent.For massless fermions, there does not seem to be an obvious reason to prefer one over the other. A word of warning about using conserved currents in deriving reduction formulae is in order: it relies on assuming formula (34) is valid even when ψ is interacting and lives in different Hilbert space that the noninteracting field.This is a standard assumption made in QFT textbooks like [21], but runs afoul of Haag's theorem [26].In flat spacetime, this difficulty is not considered serious, since Haag's theorem relies on idealized assumptions that are presumably not realized in a practical calculation.In curved spacetime, the additional difficulty is that the S-matrix may not exist in some spacetimes -such spacetimes cannot be used for scattering calculations, so we do not concern ourselves with them.The foregoing calculation or basically any calculation using both interaction and non-interacting fields is strictly speaking non-rigorous.It works in the same sense as the standard calculations work: as a mnemonic that can be made more rigorous by careful study, in particular when using perturbation theory. C. The gauge field formula The equation of motion for a real spin-1 gauge field in curved spacetime derived from eq. ( 13) is where ∇ µ is the curved spacetime derivative and we have denoted As discussed earlier, we use the Lorenz gauge with ∇ µ A µ = 0.The choice is based on convenience alone.First of all, we establish that the following current is conserved when A, A ′ are solutions of (52) in covariant Lorentz gauge and thus can be used as an inner product: Let us first collect some general formulas to be used: The first formula is only the equation of motion, the second eq. 
is well-known commutator of covariant derivatives, and the third uses the gauge condition.As in the case of the scalar field, the use of the inner product relies on an extension of the real field to the complex space.When A µ satisfies (52), the latter term in (55) is zero.The complexification is needed to enable the inner product for the positive/negative energy modes.While assuming A ′ = B does not satisfy (52) but A µ does, we have employing eqs.( 55), (57), and (56) in a row.The last expression is clearly identically 0 if B satisfies the EOM as well, so this is an appropriate current for an inner product.As in the previous section, we will assume that B does not satisfy the free equation of motion, since it is supposed to be interacting. We may now directly apply the ideas of the previous section.We write where u µ is the mode function associated to ladder operator a k .Then, if A is an interacting field and u a free mode function, be Gauss' theorem we get where σ = σ 2 ∪ σ 1 is defined as previously.We then take the limit as σ i → ±∞: Hence we have derived the reduction formula. A note about the nature of the vector field is in order.The simple non-gauge vector field with Lagrangian leads to the very same reduction formula (65) as in the case of gauge bosons, but without the need of the gauge condition.The only difference is that the equation of motion A. The general setup To summarize, we first utilize the the general reduction formulae in section III A until we have vacuums on both sides of the bra-ket.We then plug in the calculated functionals F [g], which we put into the table IV A for convenience.The one-particle reduction formulae given above can be used recursively for any number of particle in/out states. We will utilize these formulae for two already known cases to illustrate how they work: the scalar decay in to fermions reported in [12] and the classic Bogoljubov coefficient calculation in [1,27]. B. In-in calculation: Decay of a scalar field We now apply the reduction formula in the in-in formalism to the decay of a massive scalar particle in to Functionals F for all the known cases with Kx = + m 2 the curved spacetime Klein-Gordon operator and Dx = iγ µ ∇µ − m and Dx = iγ µ ∇µ + m the Dirac curved spacetime Dirac operators [1].For fermions, there are two sets of operators, and for vectors, two polarization directions; adding those indices happens as expected. massless fermions.There is only a single vacuum in the calculation so there is no need for a Bogoljubov transformation. Let us denote the fermion (Dirac) operator at point x by D x and the Klein-Gordon operator as K x .The propagators are normalized as with a corresponding normalization for the Klein-Gordon operator. We are looking for the scattering element 1 ψ k1 1 ψ k2 |1 φ p .Now with b, d and a, c the fermionic and bosonic annihilation operators, respectively, satisfying the usual (anti)commutation relations. We then use the reduction formulae.Since the inand out-vacuums are exactly the same, α mn = δ mn and β nm = 0.Then, using (29) and (27), the only term left is the one with no Bogoljubov coefficients.Note that k i |k j = δ ij , which is why the term with only α nm is zero.We get where D yi are the Dirac operators and K x1 is the Klein-Gordon operator. 
Let us specify the interaction as with the subscript 0 indicating a free field.Then the field operator expectation value in (70) is written as Using Wick's theorem, we can write (73) as Applying the Dirac and Klein-Gordon operators to this expression and using the normalization (66), we get immediately Inserting this to (70), we get after a few integrations C. In-out calculations: using Bogoljubov coefficients Let us then deal with an example where we use only the Bogoljubov coefficients.For concreteness, starting with a 0-particle state in the Robertson-Walker metric, we wish to end up with an 2-particle state due to space-time particle creation.We assume there are no interactions, so that only spacetime particle creation is relevant.We deal with the scalar case to avoid the cumbersome use of indices.Then using formula (29) we get and furthermore with the vertical bars indicating the cardinality of the set.We see the expected result: particles are created in pairs.If we had only one particle in the out-state, the amplitude would be zero.If there were more particles to reduce, the formula would then be applied recursively until we end up with a vacuum.If interactions were present, more terms from e.g. ( 29) would be added; if there were fermions, you would take the formula (29) with the fermionic creation operations, and so on.Note that this is a separate issue from observing particles.Particles may of course be observed one at a time.Mathematically the construction for a "measurement device" is given in standard works, such as [1].We are here dealing with only the amplitudes of particle creation; observation requires a separate treatment. V. DISCUSSION We have derived curved spacetime reduction formulae for vectors and fermions in arbitrary spacetimes and applied them to a few example problems.In addition, we expressed the Bogoljubov-dependent part of the reduction formulae in a way which we hope is less oblique than the scalar field calculations in [6] [28], and therefore be more practical.The formulae were derived using a method not typically seen in the literature.We hope to use our the reduction formulae for scattering calculations in e.g.cosmological spacetimes in the future. The reason for our interest in reduction formulae is that they are relatively formalism-agnostic.Just as is the case in flat spacetime, the reduction formula binds together a variety of formalisms for doing scattering calculations: you can first use the formula and then use whichever method seems suitable to get the vacuum expectation values.We also have not seen these curved spacetime formulas published in full, though [1] mentions them and [6] includes the full scalar reduction formula.We hope they facilitate more calculations such as those in [9][10][11][12]. In flat spacetimes propagators are in practice obtained either by the operator formalism or by using path integrals.In curved spacetimes, however, there are a variety of methods; for example, variations of Schwinger's method [7,8], the added-up method as used in [29,30], operator methods like those in [6], and even Schrödinger picture methods [31].The variety of methods is a result of the complications in dealing with spacetime in quantum field theory, but whenever a scattering calculation is at hand, the preceding formulae may be used. 
We used the conserved current method for deriving the reduction formulae in this paper.We have not seen other examples of this method in the literature -possibly because in the flat spacetime case, it is relatively straightforward to derive the reduction formulae by manipulating equations of motion directly, such as we did here for the fermions.In curved spacetimes, as can be seen in the fermion calculation, this method quickly becomes laborious and technically challenging.The use of the conserved currents not only simplifies the calculation but conceptually relates conservation laws to the inner products used in QFT.It also generalizes to any field theory with conserved currents easily. We emphasize that the inner products used in QFT are not fixed by the algebraic structure of the theory.You can explicitly construct unitarily inequivalent representations of the commutation relations [2]; Haag's theorem also shows that interacting theories are not unitarily equivalent to free ones [26].This allows us to use our conserved current of choice; there may well be other unitarily equivalent choices, as we showed.Yet other choices may well be unitarily inequivalent, and there is no telling if they would lead to the same predictions. We think that using conserved currents directly to find the reduction formula might be more transparent and pedagogical than manipulating the equation of motion in the brute-force way, since the procedure seems more generally applicable.It also makes the arbitrariness of the inner product obvious, and in our opinion makes clearer the assumptions going in to putting interacting fields in to the current to get the creation operators.At any rate, obtaining reduction formulae using the direct method as in section III B 1 is a technically complicated endeavor, whereas using the conserved currents is quite easy.
2024-02-29T06:44:18.857Z
2024-02-28T00:00:00.000
{ "year": 2024, "sha1": "b48b14620f2560ebeb5ab96383af7e0e7146d2c0", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-024-12996-z.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "b48b14620f2560ebeb5ab96383af7e0e7146d2c0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
248367846
pes2o/s2orc
v3-fos-license
Cholesterol Binds the Amphipathic Helix of IFITM3 and Regulates Antiviral Activity Graphical abstract Introduction Interferon-induced transmembrane (IFITM) proteins belong to a family of small transmembrane proteins known to interfere with diverse membrane fusion events important for human physiology. 14,45 IFITM proteins inhibit the cellular entry step of many enveloped viruses pathogenic to humans, like Influenza A virus (IAV), Ebola virus (EBOV), Dengue virus (DENV), SARS coronaviruses, and HIV-1. 35 IFITM3 is the most characterized family member owing to its antiviral potency and its links to genetic susceptibility to infection in human populations. 55 Previous research has shown that IFITM3 inhibits the viruscell membrane fusion process by blocking fusion pore formation, a terminal step of the entry process that enables access of enveloped virions to the host cell cytoplasm. 16,31 . We recently demonstrated that an amphipathic helix (AH) in the amino terminus of IFITM3 9 confers IFITM3 with the ability to alter the biomechanical properties of membranes in living cells. 40 Specifically, IFITM3 decreases membrane fluidity (increases membrane rigidity) in living cells in an AH-dependent manner. 40 This was confirmed in vitro by showing that a peptide corresponding to the AH of IFITM3 is sufficient to promote membrane rigidity in artificial membranes, and interestingly, the AH requires membrane cholesterol in order to alter membranes. 22 Furthermore, a sterol binding antibiotic, Amphotericin B, negates the antiviral activity of IFITM3 32 and prevents membrane stiffening by IFITM3. 40 These reports indicate that the AH and cholesterol are important for the functions of IFITM3, but the relationship between them is poorly understood. Establishing how IFITM3 interacts with and influences the membrane microenvironment is key to understanding how it inhibits membrane fusion pore formation. Cholesterol is a key regulator of the biomechanical properties of lipid bilayers and is known to influence the cell entry step of enveloped viruses. 8,49 A link between IFITM3 and cholesterol was first raised by showing that IFITM3 disrupts the function of VAMP-associated Protein A (VAPA), a protein controlling cholesterol transport between the endoplasmic reticulum and late endosomes/multivesicular bodies. 1 As a result, IFITM3 triggers cholesterol accumulation within late endosomes. However, the relevance of this phenotype to the antiviral mechanism of IFITM3 remains unclear. Some studies have shown that inhibition of the cholesterol transporter NPC1, which results in intraendosomal accumulation of cholesterol, blocks infection by IAV, EBOV and DENV at the entry stage. 5,27,39 However, other studies have reported that cholesterol redistribution to late endosomes is not sufficient to phenocopy the block to infection mediated by IFITM3. 16,32,51 Therefore, it is possible that both IFITM3 and cholesterol must be present in the same membranes for virus entry inhibition to occur. In this report, we reconcile previously conflicting pieces of evidence by showing that IFITM3 directly interacts with cholesterol. This interaction is dictated primarily by the AH, but a downstream cholesterol recognition motif (CARC) also contributes to cholesterol binding potential. We show that previously described loss-of-function mutations in the AH of IFITM3 disrupt helix formation and result in loss of cholesterol binding. 
These findings allow for an updated model of antiviral function for IFITM3, one in which the interaction of IFITM3 with its lipid environment alters the biomechanical properties of membranes to disfavor fusion pore formation at membranes serving as entry portals for virus infection. IFITM3 interacts with NBD-cholesterol and binding maps to the AH We generated peptides corresponding to regions of IFITM3, including the intramembrane domain (IMD) and the cytoplasmic intracellular loop (CIL), and tested them for cholesterol binding potential by performing fluorescence spectroscopy of NBDcholesterol 52 (Figure 1(A)). Following excitation at 470 nm, this sterol analog emits fluorescence when bound by protein or peptide, but it does not in the unbound state. We mixed increasing concentrations of peptide with a fixed amount of NBDcholesterol (500 nM) in NP-40-containing buffer, which is inferior to its critical micelle concentration (700 nM). 2 Therefore, under these conditions, NBD-cholesterol is not predicted to form micelles. As a result, NBD fluorescence most likely indicates a direct interaction between NBD-cholesterol and peptide in solution. As positive and negative controls for NBDcholesterol binding, we used peptides derived from rotavirus NSP4 that were previously shown to possess or lack cholesterol binding potential (referred to as NSP4 (+) or NSP4 (À), respectively) ( Table 1). 43 Relative to these controls, we found that a peptide spanning the IMD and CIL domains of IFITM3 (amino acids 56-107, referred to as P1) enhanced NBD-cholesterol fluorescence intensity in a dose-dependent manner (Figure 1(B, C)). To map the region capable of NBDcholesterol binding, we generated smaller peptides covering the IMD (referred to as P2 and P3) or the CIL (referred to as P4) ( Table 1 and Figure 1(B)). Compared to P1, NBD-cholesterol fluorescence intensity was enhanced by P2 to a similar extent while P3 or P4 had no significant effect ( Figure 1 (C)). These results demonstrate that the region conferring cholesterol binding potential is found within a portion of the IMD of IFITM3 corresponding to amino acids 56-69. Notably, this region encompasses the AH of IFITM3 (defined as amino acids 59-68). 9 Binding between P2 and NBDcholesterol was apparent following one hour of incubation and the measurement was robust up to at least 16 hours (Supplemental Figure 1). To measure the binding affinity between P2 and NBD-cholesterol and to exclude the possibility that increasing concentrations of peptide caused NBDcholesterol fluorescence through a non-specific, aggregation-based mechanism, we incubated increasing concentrations of NBD-cholesterol with a fixed concentration of P2. This approach allowed us to achieve saturation of NBDcholesterol fluorescence and to derive an apparent dissociation constant (K d ) of 1.59 mM (Figure 1(D)). In contrast to NBD-cholesterol, the fluorescence of NBD-phosphatidylethanolamine (PE) was not significantly enhanced by P2 ( Figure 1(E)), suggesting that lipid binding by the AH of IFITM3 is selective for cholesterol. To confirm that full-length IFITM3 protein also displays cholesterol binding potential in vitro, we assessed the capacity for recombinant GSTtagged human IFITM3 to enhance NBDcholesterol fluorescence intensity. Accordingly, GST-IFITM3 produced a dose-dependent increase in NBD-cholesterol fluorescence, while recombinant GST alone had no effect (Figure 2(A, B)). 
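The apparent K_d reported above comes from a nonlinear least-squares fit of a single-site total-binding model (specific plus non-specific binding, with background constrained to zero, as described in the methods). A minimal sketch of an equivalent fit outside Prism is shown below; all concentrations, fluorescence values, and starting guesses are synthetic placeholders rather than data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Total binding = one-site specific binding + linear non-specific binding,
# background constrained to zero.
def total_binding(L, bmax, kd, ns):
    return bmax * L / (kd + L) + ns * L

# Hypothetical NBD-cholesterol titration (uM) against a fixed peptide
# concentration; fluorescence in arbitrary units (synthetic data).
conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
fluo = np.array([55, 120, 210, 330, 470, 590, 690, 780], dtype=float)

popt, pcov = curve_fit(total_binding, conc, fluo, p0=[800.0, 1.5, 5.0])
bmax, kd, ns = popt
kd_err = np.sqrt(np.diag(pcov))[1]
print(f"apparent Kd = {kd:.2f} +/- {kd_err:.2f} uM")
```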
As a complementary approach to measuring peptide-lipid interactions, we assessed how intrinsic tryptophan fluorescence 50 of P2 was affected by the presence of NBD-cholesterol or NBD-PE. P2 contains a single tryptophan at amino acid 60 (W60), and fluorescence emission was detected by spectroscopy. In contrast, mutant P2 containing W60A was not fluorogenic (Figure 3 (A)). NBD-cholesterol has a minor excitation peak between 300 and 400 nm, and thus may absorb energy emitted by tryptophan as a result of Forster Resonance Energy Transfer (FRET). 33 Accordingly, we found that intrinsic tryptophan fluorescence of P2 resulted in FRET to NBD-cholesterol, while FRET to NBD-PE was minor (Figure 3(B)). This was accompanied by a decrease in tryptophan fluorescence of P2 in the presence of NBDcholesterol, but not in the presence of NBD-PE (Figure 3(C)). These results are strongly suggestive of a direct and selective interaction between P2 and cholesterol. Mutations in IFITM3 that disrupt AH formation inhibit cholesterol binding and inhibit membrane insertion To identify specific determinants for cholesterol binding within the AH of IFITM3, we introduced mutations into the P2 peptide ( Figure 4(A)). F63Q and F67Q correspond to previously characterized mutations that result in near-complete loss of antiviral activity against IAV. 9 Furthermore, F63Q was shown to abolish the capacity for an IFITM3 peptide to alter membranes in vitro. 22 We confirmed that IFITM3 containing F63Q or F67Q exhibited a significant loss in antiviral activity against IAV compared to IFITM3 WT following transfection into cells, while IFITM3 lacking the entire AH (D59-68) was completely inactive (Supplemental Figure 2(A, B)). We found that, compared to P2 peptide containing the WT AH of IFITM3, introduction of F63Q or F67Q strongly reduced NBD-cholesterol binding (Figure 4(B)). To confirm that these findings are most likely the result of direct peptide-cholesterol interactions, we examined fluorescence polarization of NBD-cholesterol as a function of peptide binding. Fluorescence polarization is performed by exciting reaction mixtures with plane-polarized light and recording the degree of depolarization of emitted light. Small fluorescent molecules in aqueous solution rotate or "tumble" very quickly and assume various orientations, resulting in a high degree of depolarization of emitted light. However, upon association with a larger intermolecular complex, rotation and orientation changes are reduced, and emitted light is more polarized in nature. Consistently, P2 enhanced NBD-cholesterol fluorescence polarization in a dose-dependent manner while peptides containing F63Q or F67Q only did so modestly (Figure 4(C)). These results suggest that phenylalanines within the AH of IFITM3 are crucial determinants for cholesterol binding. Experiments performed with decreased concentrations of peptide and decreased concentration of NBDcholesterol (50 nM) resulted in the same differential pattern of cholesterol binding among peptides, further ruling out a confounding effect of non-specific peptide-micelle interactions (Supplemental Figure 2 (C)). We also produced a shorter version of the P2 peptide (referred to as P2 0 ) that consists solely of the AH of IFITM3 (Figure 4(A)) and we observed a similar enhancement of NBD-cholesterol fluorescence polarization (Figure 4(D)). In a competition- Error bars indicate standard error. 
Asterisks indicate statistically significant difference (p < 0.05) between P2 and another peptide (condition indicated by asterisk color; control peptides excluded), as determined by one-way ANOVA. Chol.; cholesterol. See also Table 1. based assay, we pre-incubated P2 0 with excess unmodified cholesterol prior to adding NBDcholesterol. We found that unmodified cholesterol competed with NBD-cholesterol for binding to P2 0 , resulting in a partial inhibition of NBD-cholesterol fluorescence (Supplemental Figure 2(D)). The partial competition of NBD-cholesterol binding by excess unmodified cholesterol may result from incomplete solubility of the latter under the conditions tested. Nonetheless, this finding suggests that P2 0 exhibits the capacity to bind not only the cholesterol analog but native cholesterol as well. Collectively, these results further refine the cholesterol binding footprint of IFITM3 to amino acids 59-68 in the IMD (the AH itself). To better understand how F63Q and F67Q interfere with the cholesterol binding potential of the AH, we assessed the impact of these mutations on peptide secondary structure using circular dichroism (CD). The CD spectra obtained for P2 and P2 0 , which possessed a similarly high capacity for NBD-cholesterol binding ( Figure 4 (D)), are consistent with substantial alpha-helical character ( Figure 5(A)), in agreement with previous findings. 9 Secondary structure content analysis revealed that P2 and P2 0 exhibited 44% and 37% helix content, respectively ( Figure 5(B)). In contrast, P2 containing F67Q presented 17% helix content and P2 containing F63Q exhibited no helical signature whatsoever ( Figure 5(B)). These results suggest that the F63Q and F67Q mutations prevent proper folding of the AH, which may suggest that an intact and properly oriented helix are required for cholesterol binding. Next, we used all-atom molecular dynamics simulations to model peptide insertion into a model membrane mimicking a eukaryotic lipid bilayer consisting of phosphatidylcholine (POPC) and cholesterol. Peptide depth was quantified by measuring the location of the peptide's alphacarbon atoms (i.e. the first carbon of each amino acid branched from peptide backbone) relative to phospholipid tail termini (the bilayer midplane, defined as z = 0 A). As such, a larger z indicates peptide bound closer to the leaflet surface while a smaller z indicates peptide closer to the leaflet interior. We found that insertion of P2 WT peptide was significantly deeper than peptides containing F63Q or F67Q ( Figure 5(C)). Specifically, F63Q or F67Q resulted in more shallow membrane associations at amino acid residues 56, 57, 59, 60, 61, 63, 64, and 67. Since membrane insertion depth correlates with the capacity for amphipathic helices to alter membrane order and curvature, 17,46 these findings provide a possible mechanistic explanation for why full-length IFITM3 protein containing F63Q or F67Q exhibits loss of function in cells and in artificial membranes 9,22 (Supplemental Figure 2(B)). A Carc motif near the TMD of IFITM3 may exhibit NBD-cholesterol binding activity During the preparation of this manuscript, a preprint was posted that characterized IFITM3 as a sterol-binding protein. 15 The authors found that endogenous IFITM3 is among the suite of host proteins that cross-links with a cholesterol analog in human cells. 
Furthermore, they proposed that a region of IFITM3 proximal to the transmembrane domain (TMD) encodes a putative CARC consisting of 104 KCLNIWALIL 113 (underlined residues indicate the basic, aromatic, and aliphatic residues that define a putative cholesterol binding region, as seen in certain G-protein coupled receptors. 21 Deletion of this region led to partial loss of cholesterol analog binding, suggesting that this region of IFITM3 protein contributes to cholesterol binding in vivo. 15 Therefore, we tested whether a peptide overlapping with 104 KCLNIWALIL 113 conferred potential for direct cholesterol binding in vitro. As shown in Figure 1, P4 peptide exhibits little to no cholesterol binding activity (Figure 1(A, B)). However, the inclusion of the putative CARC motif in P4+ led to increased cholesterol binding relative to P4 ( Figure 6(A-C)). Therefore, the 104 KCLNIWALIL 113 motif in IFITM3 may also contribute cholesterol binding potential to IFITM3. Nonetheless, the impact of P4+ on NBD-cholesterol fluorescence is modest compared to that of P2 (Figure 6(C)). Sequence analysis of IFITM3 orthologs in diverse vertebrate species revealed that the AH is more highly conserved than the CARC ( 104 KCLNIWALIL 113 ) region ( Figure 6(D)). Notably, the latter is not conserved between human and mouse IFITM3. Since IFITM3 performs important antiviral activities in both species in vivo, 3,20 this CARC motif may not be essential for antiviral function. Therefore, while our data suggest that IFITM3 contains at least two membrane proximal regions capable of interacting with cholesterol, the cholesterol-binding activity of the AH may be the most functionally influential. Discussion By showing that IFITM3 exhibits direct cholesterol binding potential via at least two membrane proximal domains (the AH and a CARC motif near the TMD), our results suggest that IFITM3 interacts with this lipid during its transit through and residency within cellular membranes. Indeed, endogenous IFITM3 has been shown to associate with a cholesterol analog in cells following crosslinking. 15 However, since cross-linking is not necessarily indicative of a direct interaction, our results showing the direct binding of cholesterol by IFITM3-derived peptides in vitro provides an important confirmation of this phenomenon but also identified the protein domain(s) responsible. Since we found that the AH is a major contributor to the direct cholesterol binding potential of IFITM3, this discovery provides a considerable leap forward in our efforts to build a molecular model for how IFITM3 inhibits fusion pore formation between viral and cellular membranes. Previous work from our laboratory and others has shown that the AH is critical for the antiviral functions of IFITM3, and that it is alone responsible for membrane alterations that block virus entry (rigidity and curvature). 9,22,40 Therefore, our results raise the likely scenario that cholesterol engagement by the AH is functionally tied to its ability to alter membranes in a manner that disfavors fusion between virus and cell. The link between cholesterol binding by the AH and its known functions is supported by the fact that F63Q and F67Q mutations, which were previously shown to cause loss of function of IFITM3 in cells and in artificial membranes, 9,22 dramatically reduce cholesterol binding. The AH is conserved among human IFITM family members, but distinct antiviral specificities have been demonstrated for IFITM1, IFITM2, and IFITM3. 
This is due, at least in part, to their differential subcellular localization. For example, IFITM1 is primarily located at the plasma membrane while IFITM2 and IFITM3 accumulate in endosomal membranes following endocytosis from the cell surface. 45 We predict that IFITM1 and IFITM2 also engage cholesterol at their respective locations, and this may contribute to the block of virus entry at those sites. However, this prediction must be tested, for example by assessing whether the AH of IFITM1 and IFITM2 directly bind NBDcholesterol in vitro. Furthermore, the trafficking of IFITM proteins is dynamic and controlled by posttranslational modifications, including phosphorylation and S-palmitoylation. 11 The phosphorylation of IFITM3 negatively regulates its internalization from the plasma membrane by preventing endocytosis. 10,13,24 Furthermore, mutations in IFITM3 that prevent endocytosis similarly result in enhanced plasma membrane localization. IFITM3 at the cell surface performs important roles ranging from inhibition of HIV-1 infection, 12,13,34 to promotion of PI3K signalling. 29 Interestingly, depletion of IFITM3 from cells disrupts the formation of cholesterol-rich lipid rafts, 29 which are important for both of these processes. Moreover, IFITM3 at the cell surface has been shown to promote infection of SARS-CoV-2, a virus that depends on cholesterol for entry into cells. 44 Therefore, it will be interesting to ascertain how cholesterol binding contributes to the various functions, antiviral or otherwise, ascribed to IFITM3 and related IFITM proteins. The employment of additional methodologies, such as membrane flotation assays involving liposomes and recombinant IFITM3 protein, would be helpful in this regard. In addition to a role played by cholesterol in the effector functions of IFITM3, the lipid may indirectly influence function by affecting the conformation and stability of IFITM3 in membranes. S-palmitoylation (the covalent addition of palmitic acid, a fatty acid) at conserved cysteines in IFITM proteins has been shown to facilitate membrane anchoring and extend protein half-life, 53,54 but how cholesterol may affect IFITM3 localization and/or stability is unknown. It is unlikely that cholesterol binding itself impacts the degree to which IFITM3 is palmitoylated in cells, because it was previously shown that IFITM3 F67Q exhibits a similar degree of S-palmitoylation compared to WT. 9 On the other hand, in the aforementioned pre-print posted during preparation of this manuscript, it was reported that a palmitoylationdeficient version of IFITM3 exhibited less crosslinking with a cholesterol analog in cells. 15 Therefore, the relationship between S-palmitoylation and cholesterol binding remains unclear and should be a focus of further mechanistic studies of IFITM function. While we do not know how cholesterol binding by the AH of IFITM3 may regulate its functions, work performed on similar helices identified in other proteins provide important clues and future directions. For example, the M2 protein of IAV encodes an AH that also exhibits cholesterol binding activity. 18,19,36 In fact, the aromatic rings of membrane-facing phenylalanines within the AH are believed to play an important role in the M2cholesterol interaction. 18 As a result, cholesterol binding by the AH of M2 increases its depth in lipid bilayers as well as its orientation and promotes membrane rigidity and curvature necessary for virus budding from the plasma membrane. 
Therefore, we suspect that engagement of cholesterol by the AH of IFITM3 similarly increases its penetrative depth and positioning within membranes and confers it with a greater capacity to stiffen and bend membranes during the virus-cell membrane fusion reaction. Indeed, our molecular dynamics simulations suggest that mutations that disrupt AH formation and cholesterol binding potential inhibit peptide embedding into membrane leaflets. Furthermore, in the aforementioned pre-print, the authors used molecular simulations to suggest that cholesterol affects the positioning of the AH of IFITM3 in membranes. 15 Therefore, our description of the cholesterol binding potential of the AH likely contributes to these atomic-level observations. While we present clear evidence that the AH of IFITM3 is capable of binding cholesterol in a manner that requires phenylalanines, this region does not encode a clear cholesterol recognition motif (CRAC or CARC). However, this is not surprising since cholesterol binding has been demonstrated by many proteins lacking these motifs, and, importantly, the presence of these motifs does not necessarily predict cholesterol binding potential. 21,48 Rather, the AH of IFITM3 may correspond to a "tilted" peptide that achieves functional lipid interactions due to its angular insertion in the membrane and the presence of membrane-facing phenylalanines. On the other hand, IFITM3 contains at least one cholesterol recognition motif ( 104 KCLNIWALIL 113 ), and we show here that this site enables some degree of cholesterol binding in vitro. However, since the AH is necessary to modify membranes in living cells 40 and sufficient to modify artificial membranes in vitro, 22 the functional consequences of cholesterol binding elsewhere in IFITM3 are unclear. Perhaps the interaction between cholesterol and 104 KCLNIWALIL 113 affects the depth and orienta- Results represent the mean of three independent experiments and are normalized to 50 lM P2 peptide + NBDcholesterol (set to 100%). Error bars indicate standard error. Asterisks indicate statistically significant difference (p < 0.05) between P2 and another peptide (condition indicated by asterisk color), as determined by one-way ANOVA. Additional pairwise comparisons also resulted in statistically significant differences between indicated pairs (black asterisks). (D) Partial amino acid alignment of IFITM3 from select vertebrate species (common names are shown). Residues conserved with human IFITM3 are highlighted in gray. tion of the AH in the context of the full-length IFITM3 protein, and there is some evidence to support this possibility. 15 Nonetheless, 104 KCLNIWALIL 113 is not conserved in mouse IFITM3, despite being well characterized as a restriction factor against IAV in this species. 3,20 In contrast, the AH of IFITM3 is more highly conserved across vertebrates and is known to alter the biomechanical properties of membranes on its own. Therefore, cholesterol binding by the AH of IFITM3 may represent an important feature contributing to its evolutionary conservation. Furthermore, our work also has implications for the controversial role of VAPA as a co-factor for IFITM3 function. 1 Since the AH and the crucial phenylalanines at residues 63 and 67 found therein do not overlap with the region of IFITM3 that mediates an interaction with VAPA (the TMD), 1 it is unlikely that mutations within the AH influence VAPA binding. 
While our results do not rule out that VAPA is involved in the functions of IFITM3, they indicate that accumulation of cholesterol in late endosomes per se is unlikely to be responsible for the antiviral activity of IFITM3. Instead, they suggest that the coincidence of cholesterol and IFITM3 at membrane sites serving as portals for virus entry is responsible for restriction. A model whereby IFITM3 and cholesterol are both required to inhibit fusion pore formation reconciles previously conflicting data from multiple publications. Specifically, it has been shown that enforced cholesterol accumulation in late endosomes following NPC1 inactivation variably inhibits IAV, which undergoes pH-dependent fusion at this site. 16,32,51 In contrast, it is possible that loss of NPC1 function in cells expressing IFITM3 may result in virus entry inhibition. This possibility was supported by showing that inactivation of NPC1 by U18666A inhibits IAV infection in IFITM3-competent cells but lesser so in IFITM3deficient cells. 27 Nonetheless, direct evidence that IFITM3 function requires the presence of cholesterol demands methods that remove or inactivate cholesterol from cells. The use of methyl-betacyclodextrin to deplete cholesterol in IFITM3expressing cells has been reported to have variable effects on infection. 1,32 Since many viruses depend upon cholesterol for entry (including IAV), 47 other strategies are needed in order to directly test a cholesterol requirement in the antiviral activities of IFITM3. In addition to affecting the angular depth of the AH in membranes, which may allow it to inflict change to membrane fluidity and curvature, it is possible that IFITM3 binds to and sequesters cholesterol that is ordinarily used by viral envelope glycoproteins to complete the membrane fusion reaction. Overall, the findings presented and discussed here provide an advance towards our understanding of how IFITM3 impacts membrane microenvironments. However, it remains to be determined how coordination of cholesterol contributes to the precise mechanism by which IFITM3 disfavors fusion pore formation. A coordinated effort of in silico, in vitro, and in cellulo work will be required to resolve this important question and doing so will enable the development of innovative antiviral therapies that mimic the molecular action of IFITM3. Peptide synthesis and reconstitution Peptides listed in Table 1 were synthesized with >98% purity by Vivitide. Lyophilized peptides were reconstituted in DMSO or 30% acetonitrile containing 0.1% TFA to produce working stocks of 1 mg/mL and were stored at À20°C. 42 using a Tecan Infinite M1000. Fluorescence intensities obtained from NBD-cholesterol/NBD-PE alone (no peptide) were subtracted from fluorescence obtained from samples containing both NBD-cholesterol/NBD-PE and peptide. To derive the dissociation constant (K d for NBD-cholesterol binding to P2, fluorescence measurements were fitted to a nonlinear regression curve in Prism. The least squares regression method was used under the assumption that total binding to a single site is the sum of specific and non-specific binding. Background was constrained to zero. For competition assays, 50 lM of unmodified cholesterol (C8667; Sigma) was preincubated with peptides prior to addition of 500 nM NBD-cholesterol, and one hour later, NBD fluorescence intensity was measured. 
NBD fluorescence polarization was measured in reactions as outlined above, except reactions were incubated at room temperature for 1 h prior to excitation with plane-polarized light using a Tecan Infinite M1000 (excitation: 470 nm; emission: 540 nm). Analysis of recombinant protein binding to NBD-cholesterol Recombinant Glutathione-S-Transferase (GST) was obtained from Rockland Immunochemicals (001-001-200) and GST-IFITM3 was obtained from Abnova (H00010410-P01). GST was received at 1 mg/mL concentration in 0.2 M potassium phosphate, 0.15 M NaCl (pH 7.2) and GST-IFITM3 was received at 0.14 mg/mL concentration in 50 mM Tris-HCl and 10 mM glutathione. Binding of proteins to NBD-cholesterol was measured by combining increasing concentrations of protein (0.11-1.75 lM) with 500 nM NBD-cholesterol. Reactions were carried out in 96-well black flat-bottom plates for 1 hour at 4°C in the presence of 20 mM HEPES (pH 7.4), 150 mM NaCl, 0.004% NP-40, and 2% ethanol (v/v). NBD fluorescence intensities were measured at 22°C within 5 mins of removal from 4°C by fluorescence spectroscopy (excitation: 470 nm; emission: 540 nm) using a Tecan Infinite M1000. Fluorescence intensities obtained from NBDcholesterol (no protein) were subtracted from fluorescence obtained from samples containing both NBD-cholesterol and protein. Western blot analysis 2 mg of GST or GST-IFITM3 protein was loaded into a 12% acrylamide Criterion XT Bis-Tris Precast Gel (Bio-Rad). Electrophoresis was performed with NuPage MES SDS Running Buffer (Invitrogen) and proteins were transferred to Amersham Protran Premium Nitrocellulose Membrane, pore size 0.20 mm (GE Healthcare). Membranes were blocked with Odyssey Blocking Buffer (Li-COR) and incubated with the following primary antibodies diluted in Odyssey Antibody Diluent (Li-COR): anti-GST (sc-138; Santa Cruz Biotechnology) and anti-IFITM3 (EPR5242, ab109429; Abcam). Secondary antibodies conjugated to DyLight 800 or 680 (Li-Cor) and the Li-Cor Odyssey CLx imaging system were used to reveal specific protein detection. Images were analyzed and assembled using ImageStudioLite (Li-Cor). Structural characterization of peptides using circular dichroism (CD) 25 lg of peptide was lyophilized and resuspended in 10 mM sodium borate (pH 7.4), 150 mM NaCl, 3.3% ethanol and 25 mM SDS to achieve a final peptide concentration of 60 lM and to promote a hydrophobic environment and peptide folding according to a previous report. 9 A solution lacking peptide was used for background correction. Spectra were acquired at 25⁰C in continuous mode between 200 and 250 nm on a Jasco J-1500 CD Spectropolarimeter. The spectra were recorded at a scan rate of 10 nm/min, with a data pitch of 1 nm, a bandwidth of 1 nm, and a digital integration time of 8 seconds, and were presented as averages of three acquisitions. The spectra were deconvolved to estimate the secondary structural content using PEPFIT which uses a peptide-specific basis set. 41 Averages of the top five deconvolved estimates for each peptide are presented. Intrinsic tryptophan fluorescence and FRET 50 lM P2 peptide was incubated in the presence or absence of NBD-cholesterol/NBD-PE for 16 hours at 4°C and intrinsic tryptophan fluorescence was measured by fluorescence spectroscopy (excitation: 295 nm; emission: 205-300 nm) 42 using a Tecan Infinite M1000. FRET between peptide and NBD-cholesterol/NBD-PE was measured by fluorescence spectroscopy (excitation: 295 nm; emission: 500-600 nm) using a Tecan Infinite M1000. 
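A common way to summarize the tryptophan-to-NBD FRET measurements described above is the donor-quenching efficiency E = 1 - F_DA/F_D. The short sketch below illustrates the calculation with made-up intensities; it is a generic illustration, not an analysis prescribed by the authors.

```python
import numpy as np

# Hypothetical donor (tryptophan) peak emission intensities, normalized:
F_D = np.array([1.00, 0.98, 1.02])          # P2 alone (synthetic)
F_DA_chol = np.array([0.62, 0.60, 0.64])    # P2 + NBD-cholesterol (synthetic)
F_DA_pe = np.array([0.95, 0.93, 0.96])      # P2 + NBD-PE (synthetic)

def fret_efficiency(f_d, f_da):
    """Apparent FRET efficiency from donor quenching: E = 1 - F_DA / F_D."""
    return 1.0 - f_da.mean() / f_d.mean()

print("E (NBD-cholesterol):", round(fret_efficiency(F_D, F_DA_chol), 3))
print("E (NBD-PE):         ", round(fret_efficiency(F_D, F_DA_pe), 3))
```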
Molecular dynamics system build and simulations WT and mutant peptides were built as ideal alphahelices. For each peptide species, one peptide per leaflet was placed near the membrane-water interface. 25,28 The membrane was composed of 70:30 mol% POPC:cholesterol with 200 lipids per leaflet, and the solution was 50 H 2 O per lipid with 150 mM KCl. Each system type was built in triplicate. The systems were minimized and briefly equilibrated using NAMD 38 and then converted to Amber format 7 using ParmEd. The simulations used the CHARMM36m all-atom force field, 23,26 and Amber20 version of pmemd.cuda. 6 See the methods described for previous planar membrane simulations for more detailed information. 4,30 Briefly, constant pressure of 1 bar was maintained by a Monte Carlo barostat, and constant temperature was maintained at 310.15 K using Langevin dynamics with a 1 ps -1 damping coefficient. Nonbonded forces were switched off between 10 and 12 A. Covalent bonds involving hydrogen were constrained using the SHAKE and SETTLE algorithms. Long-range electrostatics were calculated by particle mesh Ewald summation. All independent replicas were simulated for 1 ls, and the analysis used the last 0.9 ls of each run. Sequence retrieval and alignment IFITM3 sequences from the indicated species (common name shown) were retrieved from NCBI GenBank, partial amino acid alignments were generated using MUSCLE. DATA AVAILABILITY Data will be made available on request.
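For the insertion-depth metric used in the results (alpha-carbon z-positions relative to the bilayer midplane), a minimal post-processing sketch is given below. It assumes MDAnalysis, Amber-format topology/trajectory files with hypothetical names, and approximates the midplane as the membrane center of mass; the atom selections and lipid residue names (POPC, CHL1) are illustrative assumptions, not the authors' analysis scripts.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical file names; the real runs used Amber20 (pmemd.cuda) output.
u = mda.Universe("system.parm7", "production.nc")

peptide_ca = u.select_atoms("protein and name CA")
membrane = u.select_atoms("resname POPC CHL1")  # lipid residue names assumed

depths = []
for ts in u.trajectory:
    midplane_z = membrane.center_of_mass()[2]   # crude midplane estimate
    depths.append(peptide_ca.positions[:, 2] - midplane_z)

depths = np.array(depths)  # shape: (n_frames, n_CA_atoms)
# Per-residue mean depth: z = 0 is the bilayer midplane, so a larger z means
# the residue sits closer to the leaflet surface (upper leaflet convention).
for resid, d in zip(peptide_ca.resids, depths.mean(axis=0)):
    print(f"residue {resid}: mean z = {d:6.2f} A")
```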
2022-04-25T13:20:56.518Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "b97062ee84f17d1227475596d5e9ebffbe0a98df", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jmb.2022.167759", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "69d0ff091323d794af786947803f83206933d1df", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
242159687
pes2o/s2orc
v3-fos-license
Marketing Mix for Tutoring Agencies During the Pandemic in Indonesia —The temporary closure of formal educational institutions in an effort to contain the spread of the Covid-19 epidemic around the world has an impact on the education sector, not except in Indonesia. The non-formal education sector, such as the Tutoring Business Agencies, also experienced a very significant impact, where the tutoring agencies had difficulty in implementing the marketing mix. So that further studies are needed so that the tutoring business agencies will survive in a pandemic condition. To obtain data, it was done through literature studies from secondary and qualitative data. The habits of students who tend to prefer face-to-face activities make online tutoring activities less responsive and still choose face-to-face learning as one way to keep up with the school curriculum. The solution so that tutoring agencies can survive is to improve product and process factors. The products include limiting the number of students in the class and the variety of online and offline learning. The process is by spraying disinfectant at each class change and providing distance between students. This needs to be done so that parents who provide the needed funds will feel safe when their children study at the desired tutoring agencies. I. INTRODUCTION Tutoring business agencies or other name tuition center, have been a part of the world of education in Indonesia since the 1970s, until 2017 there were nearly 2000 officially registered tutoring agencies.Initially, the tutoring provided preparatory services for Senior High School students to participate in the selection to enter state universities.Recently, the business of intensive college entrance test services has expanded even further, targeting Elementary and Junior High School students in various cities in the country.Students who take part in the tutoring show more self-confidence than those who don't, so that there are still a lot of enthusiasts in the tutoring [1]. Since 2015, the National Examination no longer determines student graduation.However, the National Examination is still implemented as a means of mapping and measuring the quality of education.This does not have a significant impact, where students who choose to study with the help of tutoring agencies are still booming.In 2021, the Ministry of Education and Culture plans to remove the National Examination.Several large tutoring agencies business tactic is to adapt the material curriculum established by the government.In 2017 there was a big change in the form of tutoring agencies, from the conventional form of face-to-face to online which began to spread.Online tutoring is a trend that cannot be hindered as a result of technological advances, however, with a higher price than online tutoring, conventional tutoring is still an option. 
A big problem at the end of 2019, the outbreak of a type of corona virus changed the education system almost all over the world, including Indonesia.The learning method, which has been in the form of face to face, has turned online using all available media.This is in accordance with the stipulation of the Minister of Education and Culture of the Republic of Indonesia regarding Circular Number 4 of 2020 concerning Implementation of Education Policies in the Emergency of the Spread of Covid-19.Tutoring agencies that already have big brands tend to be better able to survive because of advertisements both from friends and the media, although there are also many small tutoring agencies that are still able to survive in this pandemic era.For this reason, it is necessary to carry out further studies on tips so that conventional tutoring agencies can still survive in the midst of a pandemic condition due to this corona virus. II. METHODS The method used was a literature study with secondary data collection and qualitative data.In data collection, the authors collected data about covid-19 and its impact on the development of tutoring agencies, especially the top 5 provinces with conventional tutors.In qualitative writing, data presentation can be done in the form of short descriptions, charts, relationships between categories and the like, but the most frequently used is narrative text. III. RESULTS Non-formal education is defined as all the implementation of education that is carried outside official educational institutions or does not originate from the school environment and their existence is protected by law.The number of covid-19 chases andtutoring agencies in Indonesia can be seen in Table 1.The number of educational businesses / companies located in Java Island reached 350,665 or the equivalent of 56.56 percent of the total education businesses.These businesses include tutoring agencies, expertise course institutions, and other types of non-formal educational institutions.From this data, it does not include tutoring that has not been officially registered but already has a big name, for that the subject matter will be more in the 5 provinces with the most number of tutoring agencies. A. Impact of Pandemic on Tutoring Agencies in DKI Jakarta The income of the conventional tutoring agencies dropped after the Governor implemented the PSBB scheme on April 10, 2020.In the first month of this implementation, the income did not exist at all because there was no teaching and learning process at all.The Income continues to decline, even teaching places have to be temporarily closed to cut operational costs.In the midst of increasing needs, teachers are forced to be sent home.Some teaching teachers who are still students also experience problems due to increased operational costs for online lectures, while their livelihoods have stopped [4].Covid-19 is not the main reason, but rather the successive economic impact caused them.The declining financial condition of the family amid the high costs of paying tutoring agencies.The existence of a national exam elimination policy were the main considerations of students for resigning. Many students and their parents admit that they are still interested in joining a tutoring agencies provided that the facilities and services obtained are commensurate with the high cost that must be paid.They will rejoining to preparation for admission to public higher education institutions. B. 
Impact of Pandemic on Tutoring Agencies in East Java The home study policy in East Java province has been implemented starting March 16, 2020.Not only are teaching and learning activities at school closed, but course and training institutions or tutoring agencies in East Java have also officially closed.This results in no income for the tutoring agencies business in East Java and the business turnover of the tutoring agencies has dropped dramatically.The Cendikia tutoring agencies is one of the tutoring agencies in the city of Surabaya which has been severely affected by the Covid-19 pandemic.To attract students to join, this tutoring agencies strictly implements health protocols.Before the Covid-19 pandemic, the Cendikia tutoring agencies was able to guide as many as 20-30 students per day, but currently it is only able to guide 2 students in one week.In this Cendikia tutoring agencies there are 100 teaching staff and each of them is paid the rate of the student they supervise [5]. C. Impact of Pandemic on Tutoring Agencies in Central Java The Several government policies that have succeeded in influencing of tutoring agencies include the requirement to conduct learning from home and the elimination of the National Examination.This requires that it must close their conventional service activities which have a long impact on decreasing the amount of business income.Rumah Akselerasi income has dropped to reach 80 percent, the special preparatory class for the national exam also ended in the middle of its implementation, this causes many students leave.The same thing happened to Smart Education Center and AIO Private tutoring agencies.The tutoring activities of both tutoring agencies have stopped completely following the implementation of this policy.This condition has led to new online teaching methods, many of which are given free of charge.This resulted in many teachers from conventional tutoring agencies being terminated or losing their income [6]. D. Impact of Pandemic on Tutoring Agencies in West Java The high number of covid-19 cases has led to a policy of eliminating activities that cause crowds, one of which is the business activities of tutoring agencies.Ganesha Operation is one of a tutoring agency in Bandung that continues to survive the Covid-19 pandemic.This business is still surviving by following the trend of online tutoring methods using the Go Kreasi application and virtual conferences.There are indeed obstacles related to the departure of some students due to the economic conditions of their parents who are experiencing difficulties, but the numbers are small so that they can still be anticipated properly [7]. E. 
Impact of Pandemic on Tutoring Agencies in North Sumatera Home learning policy has been implemented in North Sumatera since March 18, 2020.Schools and tutoring agencies are also closed and prohibited from providing face-to-face services.The number of students using tutoring agencies in North Sumatra Province has decreased which has resulted in a decrease in income from the tutoring agencies business.This is because most parents do not extend their child's learning period and the emergence of various kinds of free online tutoring.One of the tutoring agencies in North Sumatra, Smart Tutoring Agency which focuses on providing services in preparation for college entrance tests, has suffered a loss of IDR 30-50 million.About 30 teachers and their staff have been sent home following the implementation of the policy of eliminating national exams and postponing computer-based written exams for university entry [8]. A. Product The kinds of tutoring agency products include the large variety of classes offered, the frequency of meetings, textbook facilities, and the compatibility of the tutoring material with the exam material.The thing that needs to be considered in products during a pandemic is the number of students in one class, to reduce the process of spreading the Covid-19 virus [9]. B. Price The frequency of meetings, room facilities for learning, and learning modules are considered in determining the price, expensive or cheap to be relative depending on the suitability of the facilities offered.In this section, there is no change in facilities due to the pandemic, so that the decline in people's purchasing power can be a consideration.Even so, it is not advisable to reduce prices, because operational costs will increase. C. Place Easily accessible, the availability of transportation, and a good learning situation are the criteria of place to choosing a tutoring agency.There is no effect due to the pandemic, in the red and black zones face-to-face learning activities will be prohibited.So moving the learning location to another place is not a solution, because it will increase operational costs and change the location of the tutoring agency even further [10]. D. People Teacher is an important figure who makes students and parents choose to study at the tutoring agency.Teaching time discipline, ability, teaching methods, and friendliness of all staff, are things that are of concern, so that there is no change due to the pandemic [11]. E. Promotion The attractiveness of promotions and suitability of promotions with expectations are important factors to select a tutoring agency, it is including any discount or guarantee [12].Promotional activities to schools cannot be implemented because all schools are closed.It can be overcome by using social media, because almost every student and parent has social media.So that promotional activities provide information when is the right time to start joining the tutoring agency. F. Physical Evidence The size of the room compared to the number of students, the existence of a generator, sufficient parking space, clean bathroom, lighting, Wi-Fi connection, and a canteen if there is a break time, are the facilities that students and parents pay attention to.The availability of hand washing at the entrance and hand sanitizer is of the greatest concern during a pandemic, and spraying of disinfectants during class changes is an added value. G. 
Process Registration procedures, easy-to-understand teaching and learning processes, and administrative services are factors that need attention.The teaching and learning process will be more conducive if the number of students is small and keep a distance, because students will be more focused on learning [13]. Parents are most influence the sustainability of the agency apart from being child funders, because parents can usually be the right promotional agents.For this reason, it is necessary to explain the variety of class products, promotions, and the learning process, as well as the latest physical evidence according to the current conditions that have followed the health protocol [14]. From SWOT analysis, market conditions and technology support, the convenience of the internet for learning online, the ease of time and space, and expenses for family education are all factors of opportunity to consider in choosing a tutoring agency during a pandemic [15].The intense competition between agencies, expensive tuition prices, and high operating costs are the threat factors experienced by tutoring agencies at this time [16].Improved management, well-known brands, and word of mouth to achieve wide recognition and reputation are the strength factors that need to be considered for the tuition agency to survive.Loyalty of teachers, unclear teaching abilities, and lack of training for teachers to develop are factors of weakness that occur during a pandemic [17].For this analysis the conditions will be balanced, where students have difficulty in school lessons, needs increase, but income decreases. The factors that can adjusted during a pandemic are the appropriate products, processes, and availability of physical evidence.It could be by providing online and offline class facilities, despite the tendency for parents and students to choose offline, but the choice of online classrooms shows that the tutoring agency continues to advance following technological developments.For the process which initially held face-to-face meetings, it also provides online exam practice facilities, so it makes learning easier.Besides that, it can limit the distance between students, providing means of washing hands and hand sanitizers is one physical evidence that is considered in choosing a tutoring agency.The tutoring agency should limit the number of students to half and spray disinfectants every change of class hours.For example, the Quantum Community is the choice of high-achieving students in school, because the number of students is limited, making it easier for students and teachers to interact.Even though online tutoring is currently popular, this interaction has a big influence on students in understanding the material presented compared to using online media [18].When students and parents feel comfortable, they tend to remain loyal to the tutoring agency. V. CONCLUSION Factors that need to be considered for the conventional tutoring agencies to survive are products and processes.Products by providing a variety of choices for offline classes, limiting the number of students.The process is by providing disinfectant spraying when the student leaves, and also the strict application of health protocols.
2021-10-15T15:21:12.181Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b31f547f90c0c64e856cddbddef5f4e13690641d", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125961217.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fcee520f09aa81e969374ffc0cda3e03b9c4b35f", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
216328772
pes2o/s2orc
v3-fos-license
Synthesis, Characterization and Optimization of Electrophoretic Deposition (EPD) Parameters of YSZ Layer on Ti-6Al-4V Alloy substrate The objective of present work is to study the effect of electrophoretic deposition (EPD) parameters (voltage, time and concentration) using DC current based on the thickness, porosity and coating characterizations of YSZ layer on Ti-6Al-4V alloy substrate. The Taguchi approach was used in order to determine the optimal conditions for EPD and different criteria were applied for the deposit of biochemical coatings. Yttria stabilized zirconia powder (YSZ) was deposited on Ti-6Al-4V alloy substrate by electrophoresis using ethanol as a solvent under DC, to improve the quality of alloy surface and meet the requirements of biological orthopedic application activity. Ethanol was used as solvent to precipitate chitosan and YSZ on the alloy substrate. The composition of the surface and the cross section of coatings have been described by this electrode-enabled cathodic deposition for characterization. Many tests and inspections; zeta potential, water contact angle, XRD, SEM and optical microscopes were used to characterize surface morphology of YSZ layer. Optimum conditions for deposition of YSZ layer were used at 40 volt, 2 min and 1g / l for suspension being at room temperature. The contact angles values of coatings were changed between hydrophilic 67.489°C and 35.914°C Suitable percentages of porosity with pores size of 56.601-83.505 μm were obtained. Introduction Titanium (Ti) and its alloys are commonly used for manufacturing dental implants owing to their high biocompatibility and corrosion resistance. These favorable properties for in vivo implantation are due to Ti ability to form a passive oxide film, i.e. TiO2, within seconds after exposure to oxygen that makes it bio inert [1]. Even if the passive oxide is broken, it can be rapidly regenerated in the presence of oxygen, leading to protection of the metal surface [2]. This passive layer is very stable during function even under demanding mechanical and chemical conditions, such as during mastication and exposure to fluids in the oral cavity, thus protecting the titanium surface from corrosion [3]. Titanium and its alloys are widely used in dentistry and orthopedics for their mechanical properties, high strength-to-weight ratio, high corrosion resistance, and exceptional biocompatibility. An important feature of Ti-based alloys is their air-induced passivation, which creates a protective and stable layer of titanium oxide on the surface [4]. The metal of Ti-6Al-4V alloy has high biocompatibility, corrosion resistance, low elastic modulus, and low density [5]. 8 wt.% Yttria-stabilized zirconia (8YSZ) has exhibiting low thermal conductivity [2,5,6], superior mechanical properties and high thermal expansion coincident [7]. Ti6Al4V alloy commonly used for biomedical implants for example, artificial knee joints, dental implants, artificial hip joints, bone plates, and also implants for dental products, such as crowns, bridges, and dentures, which are mainly produced by the precision casting method. and part for orthodontic surgery, bone fixation material as nails, knee, shoulders, spine and wrist joint replacement parts for hip joint, such as screws, plates, housing devices for the pacemaker and artificial heart valve, …. etc [6]. Ti6Al4V alloy with α +β phases, which suitably treated, have an exceptional mixture of strength and ductility. They usually are stronger than the α or the β alloy individually [7,8]. 
Ti6Al4V alloy has higher biocompatibility than AISI 316L and cobaltbased alloy. This is return to the superior corrosion resistance of the Ti alloy than the other types referred to formerly [9]. The biomaterial surface is a chief issue forms undesirable reactions with the body. The chemical compositions on the surface and its topography are supposed to be essential in implants contacting bone [10,11] . Because of EPD a simple and low cost method therefore it is widely used as coating way [12][13]. One of the motivating EPD usage involve surface coating of metallic implant utilizing calcium phosphate (Cap) which is then simultaneously reduces the potentially harmful metal ions release and enhances the bioactivity [14][15]. However, EPD has several advantages over reported other coating techniques such as its accomplishment at room temperature [16] . Titanium and its alloys have recently been found widespread use in orthopedic surgery and dental applications in that Ti-6Al-4V has typically been used as bone plates and screws of easily removable implants after treatment based on specific requirement [17]. High biocompatibility, chemical stability, good aesthetic characteristics, flexural strength and fracture toughness are essential for dental materials in order to allow an efficient restoration of the tooth appearance and functionality. Biomedical grade yttria stabilized zirconia (YSZ), has been intensely investigated for this purpose.YSZ possess high flexural strength, high toughness, chemical inertness, and corrosion resistance with biocompatibility in the oral cavity. They are chemically inert materials, allowing good cell adhesion compared to other dental ceramics [18][19][20]. There have been several methods (electrodeposition, electrochemical, metal-organic chemical vapour deposition, Micro-arc oxidation, Plasma spraying, and electrophoretic deposition (EPD) [21, 22,23], etc.) are available for deposition of YSZ material for various applications. But, these methods required highly sophisticated equipment for depositing the ceramic materials. Moreover, uniformity of the coatings remains challenging. On the other hand, electrophoretic deposition (EPD) process required simple equipment and provides highly packed uniform coating from the alcoholic suspension. The advantages of EPD process includes rate of deposition can be controlled by varying applied voltage, low cost and coating process can be completed in a few minutes [24]. Zirconia (ZrO2) is a bio ceramic and is bio inert, which shows superior mechanical behavior and biocompatibility. Therefore, adding ZrO2 into HAP improves the interfacial bonding [25]. On versions of zirconia from tetragonal to monoclinic phase induces the increase in its strength and fracture toughness [26]. As a type of metallic oxide, zirconia is chemically stable at room temperature. However, it is possible to react powdered ZrO2 with water, alcohol or other organic solvents such as acetyl acetone under certain conditions. Actually, the natural surface of YSZ or zirconia has two similar forms: The Zr(OH)4•nH2O (hydroxide zirconium) and ZrO2•nH2O (hydrous zirconia) due to exposure to moisture in the air [27]. It is thought that the nonbridging hydroxo groups -OH in the former induces a condensation reaction between zirconia powders and thereby leads to hard agglomeration. The washing of powder with alcohol was ineffective to eliminate this agglomeration but helpful to reduce hard agglomeration in hydrous zirconia powders by the formation of ethnocide. 
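To relate the three deposition parameters above (suspension concentration, time, and applied voltage) to the amount of deposited YSZ, a rough first-order estimate can be made by combining the Smoluchowski mobility and Hamaker deposition relations. The sketch below is only an order-of-magnitude illustration; the cell geometry, suspension properties, and sticking efficiency are assumed values, not measurements from this study.

```python
import numpy as np

# Rough EPD yield estimate (Hamaker relation): m = f * mu * E * A * C * t
#   mu : electrophoretic mobility from the Smoluchowski equation
#   E  : electric field (applied voltage / electrode gap)
#   A  : deposition area, C : particle concentration, f : sticking efficiency
eps0 = 8.854e-12          # vacuum permittivity, F/m
eps_r = 24.5              # relative permittivity of ethanol (literature value)
eta = 1.1e-3              # viscosity of ethanol, Pa*s (approximate)
zeta = 30e-3              # zeta potential, V (order of the measured values)

mu = eps_r * eps0 * zeta / eta            # Smoluchowski mobility, m^2/(V*s)

voltage, gap = 40.0, 0.01                 # 40 V across an assumed 1 cm gap
E = voltage / gap                         # field strength, V/m
A = 4e-4                                  # assumed 2 cm x 2 cm cathode, m^2
C = 1.0                                   # 1 g/L = 1 kg/m^3
t = 2 * 60                                # 2 min, in s
f = 1.0                                   # ideal sticking efficiency

m = f * mu * E * A * C * t                # deposited mass, kg
print(f"mobility = {mu:.2e} m^2/(V s), estimated deposit = {m*1e6:.2f} mg")
```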
surface chemistry of YSZ not only influences the colloidal behavior of the powder in a liquid (e.g. agglomeration) but also possible affects the consolidation of the powder compacts [28]. The aim of this work is devoted to optimize the EPD parameters including (voltage, time and concentration of suspension) using DC current by using Taguchi method to realize the YSZ coating on the Ti6Al4V alloy substrate depending on thickness values and then characterization of coating layers to select optimum condition. Electrophoretic Deposition (EPD) Coating Process The material used in this research was Ti-6Al-4V alloy as plate of thickness 5mm provided from (Company: Shanxi Joint industry co., ltd) which tested in Science and Technology Ministry by using analytical instruments model XEPOS and its name SPECTRO. Table 1 shows the chemical composition of Ti6Al4V alloy. used to adjustment range of pH value 4 of solutions by using (pH-EC-TDS Meter Portugal). Zeta potential was measured for each solution to ensure its stability. The EPD cell used in this study consists of a beaker and two electrodes immersed in the suspension as schematic EPD system used in this study is shown in Fig. 1. The EPD consists of a beaker and two electrodes immersed in the suspension. The Ti-6Al-4V alloy and stainless steel 316L were cathode and anode respectively. The deposition process was take place through three deposition conditions for yttria stabilized zirconia (YSZ) of concentration (1,2 and 3 g/L), periods of time (2,4 and 6 min) and voltages (20,40 and 60 volt) were taken at room temperature. Then the coated samples were dried by air. Design of experimental (DOE) Taguchi's approach is a statistical tool of design of experiments (DOE) by using (MATLAB PROGRAMMING). It was used to analyze data and for modeling the influences of different parameters or factors of EPD method on output whereas the inputs are described as factors while the outputs are described as response variables. This approach was used to determine optimal design parameters for performance and cost. The objective was to select the best combination of controlling parameters or variables so that the Taguchi method was strongest with respect to noisily factors. In this study L9 (3 3 ) orthogonal array were used in experimental work to deposit of YSZ layer. Table 2 shows the experiments design according to Taguchi's approach and Table.3 shows the parameters of orthogonal array for YSZ layer that used in design of experiments. The detailed methodology is demonstrated in the flowchart (Fig. 2). Microstructure Characterization X-ray diffraction analysis has been performed on Ti6Al4V alloy specimen to determine the existing phases. X-ray diffraction device used is (Lab X, XRD -6000) with 40 Kv and 30 mA. Scanning speed 2° per minute was used. The range of the diffraction angle was (20 º -80 º). The thickness of coating and its surface morphology was examined by optical and scanning electron microscopes. The distribution of elements at selected points /areas was detected by EDS detector. Zeta potential was necessary to study and analyze the stability of the suspension. This is one of the important tests in the EPD to ensure obtaining a homogeneous solution and thus ensure homogeneous coating layer. The particles were firstly dispersed in the corresponding solvent with a solid concentration lower than 1 g/l to obtain reliable results by ultrasonic treatment for 15 min. The results were ensured by the exposure of the suspension to the zeta potential measurement. 
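Before turning to the Taguchi results, note that because the optimization target is maximum coating thickness, the analysis uses the larger-is-better signal-to-noise statistic, S/N = -10*log10((1/n) * sum(1/y_i^2)). The sketch below shows how the main effect of each factor level would be ranked from an L9 run; the thickness responses are placeholders, not the measured data of this study.

```python
import numpy as np

# Larger-is-better S/N ratio used in Taguchi optimization of coating thickness.
def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# L9(3^3) orthogonal array: columns = voltage, time, concentration levels (1-3).
L9 = np.array([
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
])

# Placeholder thickness responses (um) for the nine runs -- synthetic values.
thickness = np.array([8, 11, 13, 15, 18, 12, 22, 16, 20], dtype=float)
sn = np.array([sn_larger_is_better([y]) for y in thickness])

factors = ["voltage", "time", "concentration"]
for j, name in enumerate(factors):
    means = [sn[L9[:, j] == level].mean() for level in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"{name}: mean S/N per level = {np.round(means, 2)}, best level = {best}")
```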
Taguchi Results
After the suspension was prepared, the Taguchi statistical approach was used to choose the optimum conditions for depositing the YSZ layer on the Ti6Al4V substrate, the aim being to obtain the highest coating thickness. The best combination of voltage (V), time (t) and concentration (C) was therefore selected from the three tested values of each parameter on the basis of the signal-to-noise (S/N) ratio.

Porosity and Thickness Measurement
The cross section of the coating and its topography were examined by optical microscopy; the results are illustrated in Fig. 4. The highest thickness of the YSZ coating layer was 28 µm, with a dense, continuous and homogeneous layer, obtained with 2 g/L YSZ, 2 min deposition time and 60 V applied voltage. The porosity percentage of each sample was evaluated using the ImageJ program, as shown in Fig. 5. The results in Table 6 show that the coating obtained with the above parameters achieved the minimum porosity value of 2.0%. Fig. 6 shows the XRD analysis of the Ti6Al4V alloy in the as-received condition. The structure consists of (α+β) phases: Al acts as an α-stabilizing element, while V acts as a β-phase stabilizer [18].

Solution stability
The measured pH value of the suspension was 4, which yielded a positive zeta potential, as shown in Table 7; this enhances the homogeneity of YSZ deposition. Mobility increases with increasing absolute value of the zeta potential, and the high mobility of the YSZ particles is attributed to the positive zeta potential value [2]. The zeta potential and mobility of the YSZ nanoparticles in the suspension are given in Table 7 and Fig. 7, respectively.

Contact Angle Measurement
The wettability of the coated and uncoated Ti6Al4V samples was assessed from the contact angle of a sessile drop of distilled water, measured with a CAM 110-O4W optical contact-angle instrument equipped with a CCD camera. The coating decreased the contact angle from 67.489° to 26.755°, making the surface of the Ti6Al4V alloy markedly more hydrophilic than in the uncoated state. With a more hydrophilic surface, bone regeneration is expected to be easier and more rapid. The decrease in contact angle when the YSZ coating is deposited on the substrate is in agreement with reference [17].

Microstructure with EDS analysis
The microstructure of the coated Ti6Al4V alloy observed by SEM and the corresponding EDS analysis are shown in Figs. 9 and 10, respectively. Fig. 9a shows the nearly homogeneous YSZ coating on the alloy surface, while Fig. 9b shows the YSZ coating thickness, which is approximately 25 µm. Figs. 10a and b show the EDS elemental analysis of the uncoated Ti6Al4V substrate and of the YSZ-coated Ti6Al4V alloy, respectively. The measured chemical composition is in good agreement with the composition of the alloy used.

2- XRD analysis proved that the EPD technique is suitable for the Ti6Al4V alloy, as shown by the presence of the YSZ coating phase.
3- Good stability was obtained for all coating solutions, as confirmed by the zeta potential; the highest stability value, 33.74, was found for 100% YSZ + 0.5% chitosan.
4- Water contact angle measurements showed that the YSZ coating changed the surface from a contact angle of 67.489°, which is considered hydrophilic, to 26.755°, which is considered super-hydrophilic.
5- Microstructural observations showed that the YSZ layer forms a homogeneous, dense and continuous coating on the alloy substrate.
6- Suitable porosity percentages, with pore sizes of 56.601-83.505 µm, were obtained.
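The zeta-potential values reported above (for example the 33.74 mV suspension) relate to particle mobility through the Helmholtz-Smoluchowski relation, the usual thin-double-layer approximation for electrophoresis. The sketch below is only illustrative: the permittivity and viscosity are assumed values for an ethanol-like dispersing medium, not values reported in this work.

```python
# Helmholtz-Smoluchowski estimate of electrophoretic mobility from zeta potential.
# Medium properties are illustrative assumptions (ethanol-like suspension), not values from the paper.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 24.5       # assumed relative permittivity of the dispersing medium (ethanol ~24.5)
eta = 1.1e-3       # assumed dynamic viscosity, Pa*s

def mobility_from_zeta(zeta_mV: float) -> float:
    """Electrophoretic mobility (m^2 V^-1 s^-1) in the thin-double-layer (Smoluchowski) limit."""
    zeta = zeta_mV * 1e-3              # convert mV -> V
    return eps_r * EPS0 * zeta / eta

mu = mobility_from_zeta(33.74)          # the suspension value quoted in the conclusions
print(f"mobility ~ {mu:.2e} m^2/(V*s)")
```

A positive zeta potential implies migration toward the negative electrode, which is consistent with using the Ti-6Al-4V cathode as the deposition substrate in this work.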
2020-04-02T09:38:14.349Z
2020-03-21T00:00:00.000
{ "year": 2020, "sha1": "8015be78d33527a10a5eeb361de46e650039d712", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/745/1/012082", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e076ad09fe9c92b3e1825851f0535b470ba53f1a", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
235355051
pes2o/s2orc
v3-fos-license
Efficacy of Tetravalent Dengue Vaccine: A Systematic Review and Meta-Analysis

Introduction
Dengue is a vector-borne disease transmitted by the Aedes mosquito. There are four serotypes of dengue virus, DENV 1, 2, 3 and 4. More than 1 lakh (100,000) dengue cases are reported in India annually, with higher prevalence in the states/union territories of Karnataka, Rajasthan, Maharashtra, Gujarat and Delhi. [1] Dengue is one of the neglected tropical diseases (NTD); target 3.3 under the Sustainable Development Goals is to end the epidemic of NTDs by 2030. [2] A tetravalent dengue vaccine (TDV) has been launched under the trade name "Dengvaxia." [3] Vaccines have an important role in the specific protection against disease. This meta-analysis was conducted to determine the pooled efficacy of the vaccine and its safety profile, which may help in developing future vaccination strategies.

Participants aged 2-45 years were included in these studies. Statistical analysis was done with "R" software. The pooled relative risk in the vaccinated versus control groups was calculated using a random-effects model, and pooled dengue vaccine efficacy was calculated from the relative risk. Heterogeneity and publication bias were assessed using Baujat and funnel plots, respectively. Adverse effects following immunization were also reviewed. The pooled vaccine efficacy is 58% (95% confidence interval 46%-67%), with an I² statistic of 81.4%.

Comparison
The control group in these studies received a placebo, except in three studies in which the control group received another vaccine instead of placebo: rabies vaccine, meningococcal polysaccharide vaccine, typhoid Vi polysaccharide vaccine, or pneumococcal vaccine. [4][5][6]

Outcome
The incidence of dengue in the vaccinated and control groups was established by the presence of virologically confirmed dengue (VCD). Virological confirmation was done either by nonstructural protein 1 antigen or by reverse transcription polymerase chain reaction. The incidence of VCD cases in the vaccinated and placebo groups was compared, and the pooled relative risk of VCD in the vaccinated versus placebo (non-vaccinated) groups was calculated.

Study design
Only randomized controlled trials (RCTs) conducted in phases two and three were included in the meta-analysis.

Timeframe
The follow-up period in these studies ranged from 12 months to 48 months, and the studies were published between 2012 and 2020.

Exclusion criteria
Studies in which dengue was confirmed by tests other than nonstructural protein 1 antigen and reverse transcription polymerase chain reaction were excluded, as were studies in which the outcome was measured by the seroprevalence of dengue.

Data collection and extraction
Eleven articles were found suitable for meta-analysis. Data from each study were entered into an Excel sheet. The variables included were study name, publication year, number of events in the vaccinated and control groups, and total sample size in each group.

Statistical analysis
The data were analyzed using "R" software version 3.6.1. A random-effects model was used to estimate the pooled effect across studies, and overall vaccine efficacy was calculated from the pooled risk ratio (vaccine efficacy = 1 − relative risk). The I² statistic was calculated to measure heterogeneity among studies.

Graphical presentation
A forest plot was constructed to show the pooled estimates graphically, a funnel plot was plotted to determine the presence of publication bias, and a Baujat plot was drawn to show the heterogeneity among the studies.
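A minimal sketch of the kind of pooled analysis described above (random-effects pooling of relative risks, vaccine efficacy as 1 − RR, and the I² heterogeneity statistic) is given below. It uses the DerSimonian-Laird estimator, a common default for random-effects models; the event counts are toy numbers for illustration only, not the data of the eleven included trials, and the original analysis was carried out in R rather than Python.

```python
import numpy as np

def pooled_rr_random_effects(events_v, n_v, events_c, n_c):
    """DerSimonian-Laird random-effects pooling of log relative risks.
    Returns pooled RR, its 95% CI, vaccine efficacy (1 - RR) and I^2 (%)."""
    events_v, n_v = np.asarray(events_v, float), np.asarray(n_v, float)
    events_c, n_c = np.asarray(events_c, float), np.asarray(n_c, float)
    rr = (events_v / n_v) / (events_c / n_c)
    log_rr = np.log(rr)
    var = 1/events_v - 1/n_v + 1/events_c - 1/n_c        # variance of log RR
    w = 1 / var                                           # fixed-effect weights
    q = np.sum(w * (log_rr - np.sum(w * log_rr) / np.sum(w))**2)   # Cochran's Q
    df = len(rr) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = 1 / (var + tau2)                               # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return np.exp(pooled), ci, 1 - np.exp(pooled), i2

# Toy numbers for three hypothetical trials, purely to show the calculation
rr, ci, ve, i2 = pooled_rr_random_effects([30, 45, 12], [5000, 8000, 2000],
                                          [60, 80, 30], [2500, 4000, 1000])
print(f"pooled RR = {rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}); VE = {ve:.0%}; I^2 = {i2:.1f}%")
```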
Discussion
Articles were searched systematically in PubMed, Medline and other databases, and PRISMA guidelines were followed for article selection. Eleven studies were found suitable for meta-analysis after applying the inclusion and exclusion criteria. These studies were phase two and phase three RCTs published from 2012 to 2020 [Table 1]. The age group included in the studies was 2-45 years. The vaccinated group received three doses of the TDV (at 0, 6 and 12 months). The vaccinated and control groups were followed up for a variable period ranging from 12 months to 48 months. While searching the literature, it was found that some trials are still ongoing. Most of these studies were conducted in Asia and Latin America; there is a need to conduct studies in other regions of the world where dengue is endemic.

The total sample size after pooling is 135,399 (90,805 in the vaccinated group and 44,594 in the control group) [Figure 2]. The study conducted by Arredondo-Garcia in 2018 carries the greatest weight (11.9%) in the random-effects model of the pooled analysis. The pooled risk ratio across all studies is 0.42 (95% confidence interval [CI] 0.33-0.54), so the efficacy of TDV after pooled analysis is 58%. In the study conducted by Hadinegoro in 2014, the risk ratio is almost equal to 1, suggesting no vaccine efficacy; this study contributed more heterogeneity than the other studies. Vaccine efficacy is highest in the studies conducted by Lanata in 2012 and Shibadas in 2019, at 83% and 80%, respectively. [11] These two studies reported the most favorable estimates of vaccine efficacy. There may be regional variation in vaccine efficacy according to the serotypes prevalent in a given area. [8]

The findings of the study are presented in various graphs. Pooled estimates of relative risk are shown in the forest plot [Figure 2]. A funnel plot is drawn to show publication bias; three studies show greater publication bias [Figure 3]. The Baujat plot is drawn with Cochran's Q statistic on the x-axis and its influence on the overall result on the y-axis; studies situated on the right side of the graph contribute more to heterogeneity [Figure 4].

Overall serious adverse events (SAEs) reported in the vaccinated group varied from 0.5% to a maximum of 11.8%. [4,7,8,11,12] However, vaccine-related SAEs were absent or negligible in the vaccinated group. [6,8,14] Deaths reported in most of the studies were nil or negligible (<0.1%). Some studies monitored systemic as well as local, solicited and unsolicited adverse events, with variable results. One study assessed the cost-effectiveness of the CYD-TDV vaccine and found it to be cost effective. [15] Some studies also reported a lower hospitalization rate in the vaccinated group than in the control group, [9,15] and the rate of hospitalization due to VCD was lower among persons aged >9 years. [10]

Conclusion
From the pooled analysis, it is concluded that TDV has good efficacy and negligible vaccine-associated SAEs. More post-marketing trials need to be conducted.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
2021-06-07T13:28:21.385Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "c7bdb4d5e020f446c8d90c8f3a908ef8d257d425", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "a3cdd17af6a25b5d5caad4875caa8d33bdc18558", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253429417
pes2o/s2orc
v3-fos-license
Influence of Low-Intensity Ultrasound on ε-Polylysine Production: Intracellular ATP and Key Biosynthesis Enzymes during Streptomyces albulus Fermentation

The effect of low-intensity sonication on cell growth and ε-polylysine (ε-PL) yield, and its biological mechanism, were investigated using a 3-L-jar fermenter coupled with an in situ ultrasonic slot and the Streptomyces albulus strain SAR 14-116. Under ultrasonic conditions (28 kHz, 0.37 W cm−2, 60 min), a high biomass of SAR 14-116 and a high concentration of ε-PL were achieved (they increased by 14.92% and 28.45%, respectively) when compared with a control. In addition, ultrasonication increased mycelial viability, intracellular ATP and the activities of key enzymes involved in the ε-PL biosynthesis pathway, resulting in improved ε-PL production. qRT-PCR data revealed that ultrasonication also affected the gene expression of key enzymes in the ε-PL biosynthesis pathway, including ε-PL synthetase (PLS). These outcomes provide a basis for understanding how ultrasound-assisted fermentation stimulates metabolite production and the fermentation process in a fermenter.

Introduction
As an effective non-thermal physical (i.e., green) processing technique, ultrasonication is extensively applied in the food industry [1,2]. Ultrasound has also been applied to food microbial fermentation to enhance productivity and process efficiency [3]. In general, ultrasound has dual effects on microorganisms. High-frequency ultrasound (2-10 MHz) applied at low power has typically been used as a nondestructive analytical technique for monitoring fermentation processes, whereas at high intensity ultrasound can denature enzymes or break cells [4]. At present, acoustic cavitation is considered the main mechanism of deactivation by high-intensity ultrasound. The formation, growth and collapse of cavitation bubbles in aqueous media cause physical effects (e.g., microstreaming, shear strain and shock waves) and sonochemical reactions (including unstable radicals and hydrogen peroxide, H2O2), resulting in damage to or disordering of bacterial cells [5][6][7]. On the other hand, low-frequency (20-100 kHz) treatment has a positive impact, enhancing the mass-transfer rate of gas and liquid nutrients in microbial fermentation processes without damaging the cells [8,9]. An aptly applied ultrasound treatment has the potential to enhance the bioprocesses and/or productivity of several bacteria, yeasts, actinomycetes and filamentous fungi [8]. Low-intensity ultrasound mechanically stimulates a physiological response in microorganisms: the treatment causes pressure changes, which increase cellular stress and induce an elevated metabolism and cell proliferation, producing more cells to resist adverse living conditions [10][11][12]. Additionally, ultrasonication is noted to improve fermentative processes [8] and thus has beneficial applications in fermentation science [13]. As a food-grade preservative, the cationic biopolymer ε-polylysine (ε-PL) is an antimicrobial agent that is very effective against spoilage organisms and food pathogens [14]. In the past few decades, ε-polylysine and its derivative, ε-polylysine hydrochloride, have been used in various food industrial applications in Asia because they are biodegradable, soluble in water, edible and non-toxic [15]. They are particularly effective against Salmonella, Escherichia coli and other Gram-negative bacteria, which are not easily controlled by other natural antibacterial agents.
S. albulus is noted to be an excellent source of ε-polylysine [16]. Several studies have been conducted to improve ε-polylysine production through fermentation-process regulation or the optimization of nutritional conditions [17][18][19][20][21], including fermentative production by optimization of the culture medium, immobilized-cell fermentation, double carbon-source fermentation and pH-regulated fermentation. However, the low yield of ε-PL produced in such fermentation processes remains a major challenge to the extensive use of this natural antimicrobial agent and highly functional material. The beneficial effect of low-intensity ultrasound has been demonstrated in multiple microorganisms, including Bacillus subtilis, Candida tropicalis and Saccharomyces cerevisiae (on the stimulation of microbial cell proliferation, see [1,5,10,11,13,14,22]). Additionally, based on its influence on membrane permeability and enzymatic activities, we developed in a previous study an efficient ultrasound-stimulation strategy for promoting microbial fermentation and metabolite production. The data suggested that the biomass of S. cerevisiae increased by 127.03% under optimal conditions of low-intensity ultrasound [11]. The cell-membrane permeability was affected by the changes in extracellular protein, nucleic acid (biopolymer) and fructose 1,6-diphosphate (FDP) contents. Likewise, Zhang et al. [13] found that ultrasound at a fixed frequency of 23 kHz significantly promoted the metabolic yield of ethanol and the content of β-phenylethanol and other metabolites, such as esters. To further explain the mechanism by which ultrasound treatment enhances ethanol output, a 7.5 L fermentation tank connected with six-frequency (6f) ultrasonic equipment was used by He et al. [22]. The results indicated that ultrasound increased the activities of three main enzymes (hexokinase, phosphofructokinase, pyruvate kinase), which catalyze three irreversible reactions in the glycolytic metabolism of ethanol biosynthesis, and that accelerated glucose consumption contributed to an increased rate of ethanol production. Using sweeping frequency pulsed ultrasound (SFPU) in the pretreatment of rapeseed protein (RP) prior to proteolysis, it was found that ultrasound improved the enzymolysis efficiency by enhancing the hydrolysis rate, owing to the unfolding of the molecular conformation and secondary structure of the proteins [23]. The physical mechanism of ultrasound-induced and -enhanced enzymatic hydrolysis is related not only to the changes in the molecular conformation and microstructure of proteins, but also to the increase in enzyme-substrate affinity and reaction velocity induced by the intense micro-convection generated during sonication [24]. Nevertheless, most previous studies on the mechanism of ultrasound-assisted fermentation have, to date, been carried out with the objective of promoting cell proliferation and increasing cell-membrane permeability, and studies on the influence of ultrasound on metabolic processes are still limited and insufficient. The present study therefore aimed at developing an efficient ultrasonic strategy to promote the synthesis and metabolism of ε-polylysine in a 3-L-jar fermenter coupled with an in situ ultrasonic slot. The biochemical indices and kinetic parameters during the fermentation of S. albulus were investigated to verify the ε-PL production characteristics.
Moreover, intracellular ATP levels and the activity and transcription levels of key enzymes, such as hexokinase (HK), glucose-6-phosphate dehydrogenase (G6PDH), pyruvate kinase (PK), aspartate kinase (ASK), aspartate aminotransferase (AAT) and ε-polylysine synthase (PLS), which are involved in the ε-PL synthesis pathway, were also studied to establish the mechanism by which low-intensity ultrasound promotes the fermentation of S. albulus.

Microorganisms and Culture Media
Streptomyces albulus SAR 14-116, obtained by ARTP mutagenesis [25] from the original S. albulus CICC 11022 strain (purchased from the China Center of Industrial Culture Collection, CICC), was used in this study. Agar slant (BTN) medium was used for solid culture and the pre-culture medium (M3G) as seed medium [26,27]. The media for the fermenter cultures were prepared as outlined by Zeng et al. [19]. A glucose-glycerol mixed carbon source was used to enhance ε-poly-L-lysine productivity, since it accelerates cell growth. The fermentation medium was composed of the mixed carbon source (30 g/L glucose + 30 g/L glycerol) together with, per liter, beef extract 10 g, (NH4)2SO4 10 g, MgSO4·7H2O 0.8 g, KH2PO4 4 g and FeSO4·7H2O 0.05 g, and the pH was adjusted to 6.8 using NH4OH (12.5%) before sterilization (121 °C, 20 min).

Ultrasonic Treatment of S. albulus and Fermentation of ε-Polylysine
A 3-L-jar fermenter (BioFlo/CelliGen 115, Eppendorf China Limited, Shanghai) with a 1.5 L working volume was used in the fermentation process (Figure 1). The multi-frequency scanning slot ultrasound unit (WKS300/6S) was designed by our research group (Jiangsu University, China) and manufactured by Jiangda Wukesong Biotechnology Co. (Zhenjiang, China). This equipment generates several frequencies (20, 23, 25, 28, 33, 40 kHz) at a maximum power output of about 300 W, and the rated power of each generator is 50 W [13]. For use as a sonobioreactor, the sonic chamber (160 mL) of the six-frequency slot ultrasound unit was connected to the 3-L-jar fermenter through sterile silicone tubing, as shown in Figure 1. The temperature of the samples was controlled during the treatment by a liquid circulating system and a water bath connected to the equipment. An external peristaltic pump was employed to control the flow rate of the fermentation broth (50 mL/min) during the ultrasound treatment. Broth taken from the bottom of the fermenter was recirculated continuously through the ultrasonic chamber for one hour using the pump in a sterile environment, and then flowed back into the upper part of the fermenter. All steps of ultrasound-assisted fermentation were carried out with recirculation of the broth through the sonic chamber. Pre-cultured seed (120 mL, 24-h seed culture) was inoculated into 1.5 L of sterilized fermentation medium with an initial pH of 6.8. Under the conditions of 0.05 MPa pressure, 5.0 SLPM aeration, 30 °C and 200 r/min agitation, the level of dissolved oxygen (DO) dropped as expected (from 100% to 10% or less), monitored with a DO electrode (Mettler Toledo ISM).
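A back-of-envelope view of the recirculating sonication arrangement described above can make the treatment intensity more concrete. The sketch below uses only the figures quoted in the text (160 mL sonic chamber, 50 mL/min flow, 1.5 L working volume, 60 min treatment) and assumes well-mixed broth, which is an idealization rather than a statement from the paper.

```python
# Rough turnover arithmetic for the recirculating sonication setup (well-mixed broth assumed)
chamber_mL, flow_mL_min = 160, 50
working_volume_mL, treatment_min = 1500, 60

residence_min = chamber_mL / flow_mL_min   # mean time a broth element spends per pass in the chamber
pumped_mL = flow_mL_min * treatment_min    # broth volume routed through the chamber during one treatment
turnovers = pumped_mL / working_volume_mL  # average number of passes per broth element

print(f"~{residence_min:.1f} min per pass in the sonic chamber, "
      f"{pumped_mL} mL pumped per treatment (~{turnovers:.1f} passes of the whole broth)")
```

Under these assumptions each element of broth spends roughly 3 min per pass in the chamber and passes through it about twice during a 60-min treatment, which is consistent with the intent of mild, repeated low-intensity exposure rather than continuous sonication of the whole vessel.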
A Mettler Toledo ISM pH electrode was used to follow changes in pH during cultivation. Thereafter, the pH was maintained at 4.0 by automatically adding NH3 solution (12.5%, v/v) to the culture broth until the end of cultivation. After 30 h of inoculation, the fermentation broth was treated ultrasonically at 28 kHz, 280 W/L and 0.37 W cm−2 for 1 h. These conditions were chosen on the basis of preliminary experiments, including screening of the ultrasound frequency, power density and treatment time. Samples were collected after each treatment for evaluation and analysis. Suspension (cells) flowing through the reaction chamber of the ultrasonic equipment without ultrasonic treatment was used as the control.

Establishment and Assessment of Fermentation Kinetic Models
The original and mutant strains were activated in culture and then transferred to the 3-L-jar fermenter. The fermentation culture and ultrasonic treatment were carried out under the fermentation conditions detailed in Section 2.2. For the control group, the sample was treated in the same manner but without sonication. Samples of mycelium and fermentation broth were collected every 12 h over 120 h for measuring biomass and ε-polylysine production, respectively. The fitted equations were compared to illustrate the variation in the fermentation kinetics of biomass and ε-PL production over time. The logistic model describes a typical S-shaped growth curve and was used as an empirical function in this study; the growth pattern of S. albulus is consistent with logistic growth [28]. A custom function (Equation (1)) was used for non-linear (least-squares) fitting of the biomass. Previous studies have shown that the formation of ε-PL is partially coupled with cell growth [28]. Our initial data indicated that ε-PL synthesis showed a certain synchronization with cell growth at the early stages of fermentation, but that ε-PL could still be synthesized after cell growth had reached its maximum. Therefore, a Luedeking-Piret kinetic model of product formation was used to describe ε-PL production (Equation (2)). In Equation (1), X is the dry cell weight (mg/mL), X0 is the initial dry cell weight (mg/mL) and µm is the maximum growth rate. In Equation (2), P is the ε-PL concentration (mg/mL), P0 is the initial ε-PL concentration (mg/mL), t is the fermentation time (h), α is a product formation constant linked to cell growth and β is a production constant related to the biomass of S. albulus. The parameters of the kinetic models of cell growth and product generation were calculated from the experimental data by nonlinear curve fitting with SPSS software (version 17.0 for Windows, SPSS, Inc., Chicago, IL, USA).

Assay of Mycelia-Viability Staining
The morphology of the mycelium and the distribution of living cells can reflect the survival status of the cells during the ε-PL fermentation process. To investigate the effect of sonication on cell viability under the optimal ultrasonic conditions during the fermentation of S. albulus, mycelial viability was assessed by fluorescent staining with CFSE and PI. CFSE is a membrane-penetrating, green fluorescent labeling dye [29]. Studies have confirmed that CFSE-labeled cells are characterized by stable binding of the label.
Propidium iodide (C27H34I2N4, PI) is a fluorescent dye that penetrates damaged or dead cells (but not live cells) to label DNA. Therefore, when both dyes are used, bacteria with intact, undamaged cell membranes appear fluorescent green, whereas those with damaged or stressed membranes appear red. Culture samples of the ultrasound and control groups were obtained at different times, as described in Section 2.2. The cells were centrifuged (10 min, 4500 r/min) and washed with sterile water, and the mycelium pellet was re-suspended in 100 µL phosphate-buffered saline (PBS). The two stains were prepared and mixed (1:1, v/v). Equal amounts (20 µL) of the stain mixture and of the culture sample were mixed on a clean slide and kept in the dark for 15 min. Images were captured with a fluorescence microscope (202-XD; COI Co., Ltd., Shanghai, China).

Assay of Respiration Activity
The respiration rate of sonicated and untreated S. albulus was determined as outlined by Bai [30], with some modifications. The change in dissolved oxygen in the ultrasonic treatment group, relative to the initial respiration rate, was measured once the system was stable. The respiration rate R (mmol/(g·h)) of the different groups was calculated from O, the reading of the dissolved oxygen meter (mg/L), V, the volume of the reaction liquid (mL), T, the reaction time (min), and M, the mycelial biomass in the reaction liquid (g). The promoting ratio of respiration activity, IR (%), was calculated from R0, the initial respiration rate, and Ri, the respiration rate of the ultrasound-treated group (both in mmol/(g·h)).

Assay of Key Enzyme Activities in the ε-PL Biosynthesis Pathway
Control samples and sonicated samples treated under the same ultrasound conditions (Section 2.2) were collected at 12 h intervals during ε-PL biosynthesis. All cell-extract preparation procedures were carried out at 4 °C. Mycelia were obtained by centrifugation (4000 r/min, 10 min) and then washed twice with PBS for the enzyme and cofactor assays. The crude enzymes from S. albulus were extracted, and the activities of hexokinase (HK), pyruvate kinase (PK), glucose-6-phosphate dehydrogenase (G6PDH) and aspartokinase (ASK) were determined with suitable assay kits (Product Nos. A077-3-1, A076-1-1, S0189, MAK095; Nanjing JB Institute, Nanjing, China; Beyotime Institute of Biotechnology, Shanghai, China; Sigma-Aldrich, Inc., St. Louis, MO, USA) according to the manufacturers' instructions. The protein content of the extracts was evaluated with a Super-Bradford Protein Assay (SBPA) Kit (P0060FT; Beyotime Institute of Biotechnology, Shanghai, China) following the manufacturer's specification.

Assay of Intracellular ATP in the ε-PL Biosynthesis Pathway
For the ATP assay, the samples were analyzed by HPLC (Waters 1525, Milford, MA, USA) using a Waters SunFire C18 (250 × 4.6 mm, 5.0 µm) column at 25 °C with detection at 254 nm. The mobile phase consisted of 10% methanol and 90% 0.02 M K2HPO4/KH2PO4 buffer (1:1, v/v), adjusted to pH 6.0 with H3PO4. The flow rate was kept at 0.8 mL min−1 and the injection volume was 10 µL.

qRT-PCR Assay for the Transcription Levels of Key Enzymes in the ε-PL Biosynthesis Pathway
RNA of the cultured samples was extracted using a MiniBEST Universal RNA Extraction Kit.
The yield and integrity of the RNA were examined using a spectrophotometer and gel electrophoresis. Reverse transcription assays were conducted with TransScript First-Strand cDNA Synthesis SuperMix for qPCR. The reverse transcription system was as follows: 0.5 µg RNA, 2 µL of 5× TransScript All-in-One SuperMix for qPCR and 0.5 µL of gDNA Remover, in a total volume of 10 µL. The reaction program was 15 min at 42 °C followed by 5 s at 85 °C. The 10 µL RT reaction was subsequently diluted (×10) in nuclease-free H2O and held at −20 °C. The transcription profiles of the genes pk, cs, pls, aat, SAZ_18790, SAZ_24700 and SAZ_28490 were obtained by qRT-PCR, mixing 2× PerfectStart Green qPCR SuperMix (5 µL), cDNA (1 µL), forward primer (0.2 µL), reverse primer (0.2 µL) and nuclease-free H2O (3.6 µL). Reactions were incubated (94 °C, 30 s) in a 384-well optical plate (Roche, Switzerland), followed by 45 cycles of 94 °C for 5 s and 60 °C for 30 s. The expression levels of the mRNAs were normalized and computed using the 2^−ΔΔCt method for relative quantification, with hrdB as the reference gene.

Biomass, NH4+-N and ε-PL Concentration Analyses
Samples were taken from the fermenter at 12-h intervals for analysis of the biomass (DCW, dried cell weight), and the concentration of ammonia nitrogen (NH4+-N) was assessed as described by Chen et al. [18]. The colorimetric change from the oxidation of residual reduced sugar (RRS) was determined by a DNS test [31]. The concentration of ε-PL was determined according to the protocol of Cheng et al. [32].

Statistical Analysis
All treatments were carried out in triplicate and the results are expressed as mean ± standard deviation (SD). Analysis of variance (ANOVA) was performed to compare the effects of the different treatments (processed and unprocessed samples) using SPSS software (version 17.0 for Windows, SPSS, Inc., Chicago, IL, USA). A p-value of less than 0.05 was defined as a significant difference.

Effect of Ultrasonication on the ε-Polylysine Yield during Fermentation
Following ultrasonic treatment (28 kHz, 0.37 W cm−2, 60 min) at a cultivation time of 30 h, the cell growth and ε-PL production profiles were measured. Figure 2 shows the profiles of sugar consumption, cell growth, pH, DO and ε-PL accumulation of SAR 14-116 following sonication. The cell growth rate was accelerated over 12-36 h, and the dissolved oxygen dropped sharply because of the larger oxygen demand for cell growth. By consuming glucose and NH4+-N, the maximal cell mass was reached at 36 h of culture. The biomass of S. albulus increased by 14.42% compared with the untreated sample. Relative to the control, the promotion of the ε-polylysine yield of S. albulus after ultrasound treatment reached its maximum (69.45%) at this stage. Although the biomass of S. albulus did not increase significantly after entering the ε-PL synthesis stage, the culture still consumed a large amount of glucose and dissolved oxygen from 30 h to 36 h of cultivation. These findings suggest that the glycolytic pathway, electron-transport chain and TCA cycle were still active [33]. The amount of ε-PL produced continued to increase at a constant rate following the 60 min sonication, which may be associated with the limited rupture of mycelial clusters induced by ultrasonication, improving the cells' utilization of nutrients and the cell biomass [34]. After that, the production rate of ε-polylysine gradually decreased.
The reason for this could be that the substrate had been consumed after 48 h of culture. Although the growth rate of cell viability and cell biomass decreased, the synthesis of ε-PL was not inhibited. Furthermore, the time for the pH to decline from 5.4 to 3.9 was shortened by 12 h, which shows that the ultrasound treatment promoted the rate of acid production. This phenomenon may be linked to cavitation induced by ultrasonication, which promotes gas-liquid mass transfer and glucose consumption and thereby accelerates the secretion of organic acids, including lactic acid, citric acid and acetic acid. Chen et al. [18] have demonstrated that the production of ε-PL is affected by the pH value, and a low-pH environment is more favorable for ε-PL synthesis. At 72 h of fermentation, the ε-polylysine content in the ultrasound group was 27.31% higher than in the non-sonicated group. After 124 h of fermentation, the biomass of SAR 14-116 and the yield of ε-polylysine after ultrasonic treatment were 4.66 g/L and 2.69 g/L, increases of 14.92% and 28.45%, respectively (Figure 2). These observations suggest that the ultrasonic treatment aided the fermentation process for producing ε-polylysine.

Fermentation Kinetics of ε-PL
The synthesis of ε-PL, as described in a previous study [25], shows some agreement with cell growth at the early stages of fermentation. To analyze the kinetics with respect to the influence of sonication on ε-PL production and cell growth, the synthesis parameters of cell growth (α) and biomass (β) were quantified from the results in Figure 3. Equations (5) and (6) represent the biomass of cells for SAR 14-116 with (A) and without (B) sonication treatment, respectively, while Equations (7) and (8) denote the ε-PL yield for SAR 14-116 with (C) and without (D) treatment, respectively. Fitting curves of the kinetics of cell growth and product generation are displayed in Figure 3. After the values of β and µm for the two groups were determined, the outcomes showed that ultrasonication improved mycelial growth compared with the untreated control group; the product constant associated with SAR 14-116 cell growth in the ultrasonic-group model increased 2.88-fold compared with the control. The model prediction (Figure 3) was in good agreement with the experimental results, suggesting that the fitted model represented well the time course of ε-PL synthesis in liquid fermentation of SAR 14-116 with (C) or without (D) sonication treatment.
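The two-step fitting just described (a logistic curve for biomass followed by a Luedeking-Piret relation for ε-PL) can be sketched as follows. Because Equations (1)-(2) and (5)-(8) are not reproduced in this excerpt, the sketch assumes the standard integrated logistic form and the usual Luedeking-Piret expression dP/dt = α·dX/dt + β·X; the data arrays are invented placeholders, and the original fitting was done in SPSS rather than Python.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import cumulative_trapezoid

# Placeholder time course (sampled every 12 h over 120 h, as in the Methods); values are invented
t = np.arange(0, 121, 12, dtype=float)                                        # h
X = np.array([0.3, 0.6, 1.2, 2.4, 3.6, 4.2, 4.5, 4.6, 4.65, 4.66, 4.66])      # dry cell weight, g/L
P = np.array([0.0, 0.05, 0.15, 0.45, 0.9, 1.3, 1.7, 2.0, 2.3, 2.55, 2.69])    # ε-PL, g/L

def logistic(t, X0, Xm, mu_m):
    """Integrated logistic growth curve (assumed form of Equation (1))."""
    return Xm / (1.0 + (Xm / X0 - 1.0) * np.exp(-mu_m * t))

(X0, Xm, mu_m), _ = curve_fit(logistic, t, X, p0=[0.3, 5.0, 0.1])

# Luedeking-Piret in integrated form: P(t) = P0 + alpha*(X(t) - X0) + beta * integral of X dt
X_fit = logistic(t, X0, Xm, mu_m)
intX = cumulative_trapezoid(X_fit, t, initial=0.0)

def luedeking_piret(_t, P0, alpha, beta):
    return P0 + alpha * (X_fit - X0) + beta * intX

(P0, alpha, beta), _ = curve_fit(luedeking_piret, t, P, p0=[0.0, 0.1, 0.001])
print(f"mu_m = {mu_m:.3f} 1/h, Xm = {Xm:.2f} g/L, alpha = {alpha:.3f}, beta = {beta:.4f} 1/h")
```

Comparing the fitted α and β (or µm) between sonicated and control runs is what allows statements such as the 2.88-fold increase of the growth-associated product constant reported above.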
The values of the model parameters for the test and control experiments (Figure 3) were appropriate for predicting the production of the secondary metabolite ε-PL. The variation in β suggested that the sonication treatment accelerates material metabolism during the fermentation process. The enhanced kinetics of the metabolic reactions may possibly be explained by an increase in the substrate affinity of enzymes as well as a greater resistance to substrate inhibition [35]. As a consequence, the metabolic rate of SAR 14-116 increased following sonication, which may be one of the reasons for the increased production of ε-PL.

Effect of Ultrasonication on Mycelia Viability and Metabolic Activity of S. albulus
To analyze the impact of the ultrasonic time (30 and 60 min), mycelial viability was observed under a fluorescence-inverted microscope after viability staining with PI and CFSE (Figure 4). The results showed that dead cells appeared in the central core of the mycelium and occupied a large proportion of the control group (Figure 4A,C). One can infer from this that a major fraction of the hyphae was dead, which may be due to an inhibited nutrient and oxygen supply to the internal hyphae. However, the number of dead hyphae in the cells stimulated by ultrasound treatment was reduced compared with the control (Figure 4B,D), and the external border of active hyphae grew at a reasonably fast rate. These results suggest that cells exposed to low-frequency sonication remained highly viable, with no negative influence on their physiology [36]. Moreover, the respiration activity of mycelia in the sonicated samples (30 min ultrasonic treatment) increased by 12.74% over the control, as shown in Table 1. The mechanical effect induced by low-intensity ultrasound is known to improve mass transfer, and the intense micro-turbulence generated during sonication enhances oxygen uptake into the fermentation mixture from the atmosphere above the liquid surface [37], which is in agreement with our results. The cells might thus accelerate the metabolic rate of nutrients, suggesting that more ATP would be supplied for ε-PL synthesis by respiration. Additionally, the respiration rate of SAR 14-116 after 60 min of ultrasound treatment did not change significantly, suggesting that ultrasound did not inhibit respiratory metabolism. These findings demonstrate that ultrasound treatment affects mycelial viability and thereby influences the metabolic activity of S. albulus SAR 14-116.
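The 12.74% figure above is the respiration "promoting ratio" defined in the Methods. The underlying rate equation is not reproduced in this excerpt, so the sketch below shows only one natural reading of the promotion ratio, with illustrative numbers chosen to reproduce the quoted increase; it should not be taken as the paper's exact formula.

```python
def respiration_promotion(R0: float, Ri: float) -> float:
    """Promotion of respiration activity (%), read as I_R = (Ri - R0) / R0 * 100."""
    return (Ri - R0) / R0 * 100.0

# Illustrative rates in mmol/(g*h): a control rate of 1.00 and a sonicated rate of 1.1274
print(f"{respiration_promotion(1.00, 1.1274):.2f} %")   # ~12.74 %, matching the 30-min treatment result
```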
Effect of Ultrasonication on Intracellular ATP Levels of S. albulus
Compared with the unsonicated sample, the concentration of intracellular ATP was significantly increased after ultrasonic treatment (Figure 5; p < 0.05). The optimal sonication time was 60 min; under this condition, sonication rapidly increased the intracellular ATP, with a significant rise of 81.21% and a maximum value of 10.48 µmol/g observed at 30 h of cultivation. At 36 h, the concentration of intracellular ATP was 90.63% higher than in the control, reaching 14.73 µmol/g. This may be explained by the fact that little intracellular ATP was being used for cell growth, so that most intracellular ATP accumulated as a co-factor for Pls in ε-PL biosynthesis [33]. Such observations indicate that the catalytic function of Pls is controlled by intracellular ATP, and that high ATP levels are essential for full enzymatic activity during the synthesis of ε-PL.

Effect of Ultrasonication on Key Enzyme Activities of S. albulus
To elucidate the stimulation of ε-PL production and the process intensification following sonication, the activities of key enzymes of SAR 14-116 were quantified at 24 h (prophase of the ε-PL synthesis stage) and 36 h (mid-late ε-PL synthesis stage). The effect of ultrasonic treatment (28 kHz, 0.37 W cm−2 and 280 W/L) on the key enzymes related to ε-PL synthesis in the fermentation broth is shown in Figure 6. Figure 6 shows that the activity of the HK enzyme did not increase immediately after ultrasonic treatment, but it increased 4.61-fold relative to the non-sonicated sample at 36 h, reaching 19.91 × 10−4 U/mg protein. This significant increase indicates that the ultrasonic treatment may have increased the intensity of the glycolytic pathway, resulting in increased glucose consumption (Figure 2). As demonstrated by Sinisterra [34], the intense microturbulence produced by ultrasonic treatment causes conformational alterations in the secondary structure of enzymes that improve the kinetics and activity of intracellular enzymes involved in metabolism. Notably, ultrasonication enhanced the activity of PK over the control, as displayed in Figure 6C. The enhanced PK activity may have provided more carbon skeletons for the TCA cycle from 30 h to 33 h, indicating that more carbon skeletons were supplied as synthetic precursors for other metabolites, notably for the production of oxaloacetate, the precursor of the aspartate consumed in L-Lys biosynthesis. The PK level in the ultrasound group decreased at 36 h of cultivation, but was still 2.49- and 2.21-fold higher than the control at 30 h and 36 h, respectively, whereas the PK activity of the control group declined from 30 h of cultivation. In contrast, the G6PDH activity of the ultrasound-treated group decreased relative to the untreated sample (Figure 6B). At 33 h, the G6PDH activity of the ultrasonic treatment group decreased to 0.99 × 10−4 U/mg protein, 1.17 times lower than the control, and at 36 h it was 42.55% lower than in the control group. This showed that ultrasound may reduce the enzyme activity of G6PDH.
The reduction in G6PDH activity in the ultrasonic treatment group implied that a reduced capacity to synthesize the reducing equivalent NADPH may have altered the carbon flux of the PPP pathway, hindering the synthesis of the erythrose used for cell growth, which agrees with the observations of cell growth (Section 3.2). More carbon skeletons were used for the L-lysine biosynthesis pathway rather than for cell growth (PPP). This metabolic shift is probably caused by a reduction in pH, as outlined by Zeng [19]. The decrease in the G6PDH activity of the ultrasound-treated sample is in strong agreement with the findings of other researchers [38]; the inactivation was not due to thermal and/or cavitation effects but rather to acoustic microstreaming. In addition, the reduction in PK activity at 36 h may reflect substrate inhibition caused by the accumulation of intracellular ATP (Figure 5), which exerted an inhibitory effect on PK and thereby decreased the supply of ATP used for cell growth [39]. Most of the ATP produced was not utilized for cell growth but instead accumulated and was used as a co-factor of Pls in ε-PL biosynthesis [33]; indeed, the accumulation of ATP coincided with the period when ε-PL was generated (Figure 5). AAT increased significantly when cell growth reached the stationary phase: the activity of the AAT enzyme increased by 81.21% and reached 28.80 × 10−4 U/mg at 36 h (Figure 6D). The increase in AAT activity increased the metabolic flux from the oxaloacetate anaplerotic pathway into the DAP pathway [40], which promoted the synthesis of L-Lys, the precursor of ε-PL; as a result, the intracellular L-Lys concentration increased. Presumably, the improvement in key enzyme activity is one reason for the increased ε-PL yield.

Effect of Ultrasonication on the Expression of Key Enzymes and Genes of S. albulus
Based on the qRT-PCR analysis, the transcription of genes related to ε-PL anabolism was analyzed and compared for the ultrasound-treated and control groups. The results confirmed that, following sonication, the transcription levels of ask and hrdD decreased while those of pk, aat, pls, SAZ_24700 and SAZ_28490 increased significantly (p < 0.05) relative to the untreated group (Figure 7). These results were consistent with the assays of key enzyme activity. Our results showed that sonication enhanced the enzyme activities of PK, CS and AAT by 1.34- to 1.69-fold compared with the control.
From the qRT-PCR analysis, this increase can be attributed to the upregulated expression of pk, cs and aat. It is speculated that the increased expression of pk, cs and aat raised the levels of the corresponding enzymes, which in turn may have enhanced the metabolic intensity of the glycolysis pathway, the TCA cycle and the DAP pathway. It is noteworthy that the expression level of pls, which encodes ε-PL synthetase (Pls), was upregulated 4.58-fold (Figure 7). The improvement in pls expression may have positively influenced ε-PL synthesis, in agreement with the observed increase in the concentration of extracellular ε-PL caused by sonication. The increased content of intracellular ATP and the elevated relative expression of pls may have jointly promoted the synthesis of ε-PL and consequently increased its production. However, the expression of ask and hrdD, which encode ASK and a protein that binds the promoter of the pls gene, respectively, decreased (Figure 7), indicating that the high concentrations of L-Lys produced inhibited ASK. The relative expression of SAZ_28490, encoding ribonucleoside diphosphate (RDP) reductase, increased 1.54-fold after ultrasonic treatment (Figure 7), which may lead to increased synthesis of RDP reductase. The synthesis of RDP reductase increases when the DNA synthesis rate in an E. coli cell is insufficient for the prevailing conditions of cell growth [41]. Since more of the ATP produced may be applied as a co-factor of Pls in ε-PL biosynthesis, a lack of ATP supply for DNA synthesis in the S. albulus cell may be one of the reasons for the upregulated expression of SAZ_28490; this upregulation could act as a signal to increase secondary metabolite biosynthesis at the cost of reduced cell growth. The ABC transporter ATP-binding protein encoded by SAZ_18790 affects the morphological differentiation of Streptomyces mycelium and the synthesis of antibiotics, and also alters the production of secondary metabolites [42]. RT-PCR further verified that SAZ_18790 was upregulated 1.47-fold in the ultrasound-treated group (Figure 7). The upregulation of SAZ_18790 expression indicated that the ultrasound treatment changed the expression of membrane-transport proteins involved in the efflux of metabolites during the export of secondary metabolites. This was possibly associated with the efflux of ε-PL and its metabolic intermediates, contributing to the improvement in ε-PL production.
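The fold-changes quoted in this section (for example, the 4.58-fold upregulation of pls and the 1.47-fold upregulation of SAZ_18790) are relative expression values obtained with the 2^−ΔΔCt method described in the Methods, with hrdB as the reference gene. The sketch below shows the arithmetic with invented Ct values; these are not measured data.

```python
# Relative expression by the 2^-ΔΔCt method (reference gene hrdB); Ct values below are placeholders.
def rel_expression(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Fold change of a target gene (treated vs. control), normalized to the reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # ΔCt in the ultrasound-treated sample
    d_ct_control = ct_target_control - ct_ref_control    # ΔCt in the control sample
    dd_ct = d_ct_treated - d_ct_control                  # ΔΔCt
    return 2.0 ** (-dd_ct)

# Example: a pls-like target whose Ct drops by ~2.2 cycles relative to hrdB after sonication
fold = rel_expression(ct_target_treated=22.8, ct_ref_treated=18.0,
                      ct_target_control=25.0, ct_ref_control=18.0)
print(f"relative expression ~ {fold:.2f}-fold")   # ~4.6-fold up-regulation
```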
Moreover, the expression of the gene SAZ_24700, which encodes a serine/threonine protein kinase in the two-component system (TCS) regulatory pathway [25], increased 2.99-fold (Figure 7), indicating that SAZ_24700 may positively regulate the secondary metabolism of S. albulus [43]. From our experimental data, we have inferred a possible mechanism for the increase in ε-PL formation following sonication; a schematic summary of our hypotheses and results is displayed in Figure 8. The improved level of ε-PL was induced by the increased cell growth rate and product formation rate of the S. albulus mycelium. In addition, the transcription level of pls and the intracellular ATP content were responsible for the enhancement of ε-PL biosynthesis, leading to an improved content of extracellular ε-PL. Furthermore, the changes in mycelial viability, the activities of key enzymes, and the transcription levels of genes related to ε-PL biosynthesis and membrane-transport proteins may have contributed to the enhanced ε-PL yield.

Figure 8. The proposed pathway of ultrasound-enhanced actinomycete mycelial metabolism and ε-PL production by S. albulus in a 3-L-jar fermenter. (Lowercase letters a, b indicate a significant difference between the different ultrasound-treated culture times of fermentation (p < 0.05).)

Conclusions
This study has given mechanistic insight into the ultrasonic enhancement of ε-PL production during ultrasound-assisted fermentation of S. albulus. Our results indicated that the biomass of SAR 14-116 and the concentration of ε-PL increased by 14.92% and 28.45%, respectively, following ultrasonication (28 kHz, 0.37 W cm−2 and 280 W/L) relative to the control.
An analysis of ε-PL synthesis using kinetic models revealed that the intense micro-convection stimulated by ultrasound and cavitation enhances the reaction velocity. In addition, ultrasound-assisted fermentation of SAR 14-116 may promote the synthesis of L-Lys, the precursor of ε-PL, by increasing the intracellular ATP and the metabolic intensity of the glycolysis pathway, the TCA cycle and the DAP pathway, leading to an increase in the intracellular L-Lys concentration. Furthermore, the expression level of pls increased after ultrasonic treatment, which directly improved the ability of SAR 14-116 to synthesize ε-PL. These results demonstrate that ultrasound treatment is a profitable technology for ε-PL production and may also be useful in the fermentation of other antimicrobial metabolites.
2022-11-10T16:33:56.457Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "b591cab0df582a5d32f1b567a9d5cbe04c227593", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/11/21/3525/pdf?version=1667641399", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ed91455ba0aa911a9a57bcef2a677954a628f063", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
218882099
pes2o/s2orc
v3-fos-license
Overview of strong winds on the coasts of the Russian Arctic seas Joint analysis of ground-based standard observations, spaceborne Synthetic Aperture Radar observations and the Arctic System Reanalysis (ASR) v.2 allows us to identify areas with storm and hurricane winds in the Russian Arctic in detail. We analyzed the statistics and genesis of strong winds in each region, with special emphasis on orographic winds. For those regions where wind amplification occurs due to downslope windstorms (Novaya Zemlya, Svalbard, Tiksi, Pevek, Wrangel Island), a statistical analysis of the intensity and frequency of windstorms was carried out according to observations. The ASR v.2 reanalysis demonstrates a significantly better strong-wind climatology in comparison with another high-resolution product, the Climate Forecast System Reanalysis. ASR v.2 still underestimates the speed of strong winds; however, it reproduces rather well most of the mesoscale local winds, including the Novaya Zemlya bora, the Spitsbergen foehn, the bora on Wrangel Island and some others. Introduction According to the hazardous weather criteria of the Federal Service for Hydrometeorology and Environmental Monitoring (Roshydromet) (Federal ... 2009), a very strong (dangerous) wind on the sea coasts is a wind with a 10-min averaged speed exceeding 30 m/s. In addition, the Beaufort scale is often used to determine criteria for dangerous winds: severe and hurricane storms are identified when the wind speed exceeds 25 m/s (e.g. Radinovic and Curic 2013). However, a weaker wind can also be accompanied by dangerous weather phenomena, such as ship icing (Samuelsen and Graversen 2019), fast sea ice drift (Buzin and Glazovskiy 2005, Bychkova and Platonova 2014) or high turbulence and difficult aircraft landing and take-off due to a large vertical wind shear (Korablev 2018). Such winds are primarily of orographic nature (downslope windstorms, tip jets and gap flows). For instance, the bora on Novaya Zemlya leads to a rapid drift of ice from the coast and to reduced visibility due to severe blizzards in the wintertime and spray in the summer (Pastusiak 2016). Orographic winds in the area of Cape Zhelaniya lead to severe swell, which makes navigation difficult (Pastusiak 2016). The study of dangerous coastal winds is especially important in view of the growing interest in the Northern Sea Route. At present, the climatology of the mean wind in the Arctic is well studied, owing not only to sufficiently long series of surface observations, but also to satellite observations and reanalyses (e.g. Wan et al. 2010, Kostianoy et al. 2014, Hughes and Cassano 2015, Liu et al. 2016). However, most reanalyses have a rather coarse resolution; therefore, various orographic effects are not taken into account, and even large-scale wind amplification does not reach the observed values (e.g. Dery and Yau 1999, Hughes and Cassano 2015). Satellite wind data also have significant limitations associated with the presence of sea ice, insufficient spatial resolution (especially for microwave radiometers) and a relatively short period of observations. Satellite climatology of mean and extreme winds is usually calculated for individual summer months (Liu et al. 2016). Spaceborne Synthetic Aperture Radar (SAR) data with a typical resolution of about ten meters are most suitable for studying strong local winds (Gavrikov and Ivanov, 2015, Ivanov, 2016). However, due to the short period of observations, we can only study such winds through individual cases, analyzing their spatial structure.
Nowadays, a full-fledged statistical analysis of strong winds is possible only from station observations and, in areas where such observations are not available, from high-resolution reanalysis data. Therefore, in this study, we decided to use a joint analysis of ground-based observations, SAR data, and the rather new Arctic System Reanalysis (ASR) version 2 with the best resolution in the Arctic region to obtain a more complete view of extreme winds in the coastal regions of the Russian Arctic. Data and methods The climatology of extreme winds and their genesis are considered on the basis of station observation data from the Roshydromet network (standard 3-hour observations), as well as the ASR v.2 reanalysis (https://doi.org/10.5065/D6X9291B) with a spatial resolution of 15 km, a time span of 2000-2016 and a time resolution of 3 hours. Calculation of the climatic characteristics of the wind according to station data was carried out for the period 1979-2017, when the largest number of observations is available. For downslope windstorms, a threshold of 8 m/s (which approximately corresponds to gusts of up to 15 m/s) for wind speed and the corresponding wind direction were used as a formal criterion for identifying episodes. For a qualitative analysis of the spatial distribution of orographic winds, Level 2 wind speed from Synthetic Aperture Radar (SAR) from Radarsat-2 and Sentinel-1 was used from the free-access archives of the U.S. National Ice Center (https://www.natice.noaa.gov/products/kml_radarsat_wind.html), the NOAA National Centers for Environmental Information (https://www.nodc.noaa.gov/sog/sar_wind/) and the Sentinels Scientific Data Hub (https://scihub.copernicus.eu/) for the period 2014-2018. Extreme winds climatology General features of the wind speed climatology in the Arctic (high wind speed in the Atlantic sector of the Arctic and the Chukchi Sea, mainly far from the coasts, and relatively low wind speed in the inland areas of the Arctic Ocean and the East Siberian and Laptev Seas) may be disturbed by local effects (for example, orographic). It can be seen from the map of average daily maximums of wind speed according to ASR v.2 (Fig. 1) that strong winds are rather frequently observed at the northern coast of the Kola Peninsula, to the north and south of Svalbard, as well as in the central mountain ranges of the archipelago, at the coast of Novaya Zemlya, on Cape Zhelaniya, inside the Kara Gate Strait and Ugorskyi Shar Strait, in the north-west of Severnaya Zemlya, and at the coast of the Bering and partially the Chukchi Seas. Analysis of observational data at weather stations in the Russian Arctic showed that high wind speeds are observed quite often (the 99th percentile of wind speed exceeds 17 m/s, a threshold for an adverse weather phenomenon (Radinovic and Curic 2013)) at stations in the north of the Kola Peninsula, the Kanin Nos Peninsula, Novaya Zemlya, Vaigach Island, Franz-Josef Land, in the north-west of Taimyr (Dikson, Cape Sterlegova), in the bays of Tiksi and Ambarchik, in Pevek, on Wrangel Island, in the Bering Strait (Uelen station), and on the coast of the Bering Sea (from Anadyr to Gavriila Bay) (Fig. 2). At the same time, at some stations, strong wind not only blows often, but its speed exceeds 30 m/s; these are Malye Karmakuly, Tiksi, Pevek, Uelen, stations on the northern coast of the Kola Peninsula and the White Sea, and stations in the Anadyr Bay (Anadyr, Beringovsky, Gavriila Bay) (Table 1).
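To make the episode criterion above concrete, the sketch below shows how windstorm terms and basic extreme-wind statistics could be extracted from a table of 3-hourly station observations. The column names, the illustrative direction sector and the synthetic data are assumptions for demonstration only and are not taken from the Roshydromet archive.

```python
# Illustrative sketch only: extracting downslope-windstorm terms and basic
# extreme-wind statistics from 3-hourly station records (column names assumed).
import numpy as np
import pandas as pd

def wind_statistics(obs: pd.DataFrame, sector=(45.0, 135.0), speed_threshold=8.0):
    """obs must contain 'speed' (m/s) and 'direction' (degrees) columns."""
    lo, hi = sector
    in_sector = obs["direction"].between(lo, hi)        # e.g. an easterly sector for a lee-slope wind
    windstorm = in_sector & (obs["speed"] >= speed_threshold)
    return {
        "p99_speed_ms": float(np.percentile(obs["speed"].dropna(), 99)),   # adverse-weather threshold check
        "terms_with_windstorm_pct": float(100.0 * windstorm.mean()),
        "terms_above_30_ms": int((obs["speed"] > 30.0).sum()),             # very strong (dangerous) wind terms
    }

# toy example with synthetic observations
rng = np.random.default_rng(0)
demo = pd.DataFrame({"speed": rng.gamma(2.0, 4.0, size=1000),
                     "direction": rng.uniform(0, 360, size=1000)})
print(wind_statistics(demo))
```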
At the Egvekinot station on the coast of the Bering Sea, very strong winds arise relatively often, on average every 2.5 years, although in 99% of cases the wind speed is less than 17 m/s (Fig. 2). Comparison with the observational data showed that ASR v.2 significantly underestimates the magnitude (Fig. 3) and frequency (Fig. 4) of strong winds. For example, the frequency of very strong winds (>30 m/s) in Teriberka, Egvekinot and Malye Karmakuly is underestimated by 2.5, 4 and 14 times, respectively. However, the localization of strong winds on Novaya Zemlya, associated with bora and other orographic flows, on Svalbard, on the northern coast of the Kola Peninsula and in many other regions is generally reproduced correctly (Fig. 4). Some areas with strong winds are not reproduced in the reanalysis (for example, in Pevek, in the southern part of the Anadyr Bay coast, and in Egvekinot), but new coastal areas with high wind appear in the reanalysis - for example, near Kolguyev Island (0.3 times per year), in Olenyoksky Bay, to the east of Big Lyakhovsky Island (Novosibirsk Islands) and near the mouth of the Indigirka River, and a part of the coast of the Bering Sea in the north-east of the Anadyr Bay (between Kresta Bay and Providence Bay). The resolution of the reanalysis is sufficient to reproduce large mesoscale effects (e.g., the Novaya Zemlya bora), but insufficient for micro-scale and meso-gamma effects (Orlanski 1975). Nevertheless, ASR v.2 is much better at reproducing extreme wind speeds than the Climate Forecast System Reanalysis (CFSR) with a resolution of 0.3°. Among all areas with extreme winds in the Russian Arctic identified by ASR (Fig. 4), CFSR reproduces a very strong wind only in the north of Novaya Zemlya, while underestimating its frequency by more than 2 times. Genesis of extreme winds a) Synoptically-forced winds In the area of Teriberka and at other stations on the northern coast of the Kola Peninsula, a very strong north-west wind is confirmed by both observational and reanalysis data. According to the Sailing Directions of the Barents Sea (GUNiO 2006), in winter a strong north-west wind in this area can blow continuously for a long time. An analysis of the large-scale situation (according to ASR) for all cases of extreme wind in Teriberka showed that wind amplification is most often associated with a large pressure gradient in the southern or rear parts of cyclones moving over the Barents Sea from the north-west to the south-east. Moreover, in all cases, cyclones formed in the Greenland Sea and quickly deepened over the Barents Sea. Storm wind (≥ 20 m/s) of the north-west direction was in most cases associated with large-scale wind amplification in cyclones or with cold-air outbreaks from the north and north-east, and only several times was it recorded during the passage of polar lows. According to the reanalysis, a very strong wind in the region of Oleneksky Bay and the mouth of the Indigirka River (Fig. 4) was noted only a few times, and all these times were related to the same case. The cause of the wind intensification was two deep cyclones moving one after another across the Laptev Sea and the East Siberian Sea. However, in general, the Laptev Sea and the East Siberian Sea are characterized by a calm wind climate. Synoptically-forced wind amplification in the north-west of Taimyr, observed at the stations Cape Sterlegov and Dikson, is mentioned in (Pastusiak 2016). Most of the storm wind episodes are observed during southerly and south-westerly winds when deep cyclones pass through the Kara Sea.
Some additional increase in wind in Dikson can also occur due to the tip jet effect; however, reanalysis data indicate a small difference between the background wind and the Dikson local wind. Franz-Josef Land is known as a windy place, where storm winds occur 5-7 days per month due to the passage of deep cyclones and can attain 44 m/s (Dementiev and Bryazgin 1996). These storm winds must be strongly modulated by the complex topography of the archipelago, but the orography representation in ASR is insufficient to study these possible effects, and the use of satellite data is limited by the almost permanent sea ice in this region. b) Tip jets and strait winds Tip jets (Proch 1983) occur due to streamline convergence during flow around obstacles. Strong winds in narrow straits and gap winds have the same nature. In the Arctic, the most famous and strong tip jets are the southern tip jet in Greenland (e.g. Moore et al. 2003), the Svalbard tip jets (Reeve and Kolstad 2011, Sandvik and Furevik 2002), and the tip jet at Cape Zhelaniya (Novaya Zemlya) (Pastusiak 2016). Tip jets are rarely very strong (and therefore cannot formally be classified as a hazardous weather phenomenon), but they are observed quite often, so they can rather be classified as adverse weather phenomena (Federal ... 2009). Due to the complex orography and configuration of the coastline, Svalbard is the cause of numerous local winds. Analysis of SAR data and ASR v.2 showed that tip jets are often observed at the northern tip of Spitsbergen (Fig. 5b) in both easterly and westerly directions of the background flow. Due to the proximity of the sea ice edge north of Svalbard, this jet can be enhanced by baroclinicity (thermal wind). At the northern exit from the Hinlopen Strait between Spitsbergen and Nordaustlandet, wind amplification (up to 30 m/s according to ASR) is very often observed during south-easterly and southerly flow. There, flow convergence in the strait overlaps with the tip jet effect, which was demonstrated in (Sandvik and Furevik 2002). The wind also intensifies in the strait between Spitsbergen and the Barents and Edgeøya Islands. Analysis of satellite data and reanalysis in the region of the southern cape of Spitsbergen (Fig. 5c) has shown that it is rather difficult to separate the tip jet effect from the synoptic wind amplification (the latter is characteristic of the entire Atlantic sector of the Arctic). For example, the maximum wind speed near the southern tip of Svalbard according to ASR reached 38 m/s, but the wind direction was north to north-west (and not west or east, as could be expected with a tip jet), and the wind amplified due to the passage of a very deep cyclone. According to observations at the Cape Zhelaniya weather station (data for some years (2005, 2010-2014, 2018-2019) are available on the website www.rp5.ru), storm winds are observed during north-westerly and westerly (most often) or easterly and south-easterly flow. Several times, the wind speed at the station reached 30 m/s (in the reanalysis, the maximum wind speed is underestimated), and storm winds are observed quite often, on average 23 times per year, during all seasons except August-October. Most episodes of wind amplification are associated precisely with the tip jet (for example, as in Fig. 5a), although during westerly flow the tip jet effect is often accompanied by bora. According to the reanalysis, it is the bora on the north-eastern slope of the ridge that leads to the maximum observed wind velocities at Cape Zhelaniya.
In Anadyr, the prevailing direction of strong wind is east, which is associated with the convergence of flows in the mouth of the Anadyr River, stretched from west to east. Additional convergence of flows may occur due to the flow around the Golden Range, located north-east of the city. The reanalysis as a whole reproduces the storm and hurricane winds in the Anadyr Bay and their small (compared with the background wind) additional amplification in Anadyr itself; however, the wind magnitude is significantly underestimated (on average by 2-3 m/s). In Egvekinot, located in a narrow bay stretched from north to south and bounded by steep ridges to the west and east, the observed hurricanes are a manifestation of canyon (gap) winds that reach such a force during high-velocity northerly background flow. Due to insufficient resolution, the orography around Egvekinot is not reproduced properly in ASR v.2, and consequently the canyon wind does not occur in the reanalysis. In the area of Cape Dezhnev, wind amplification is associated both with the tip jet effect and with the flow convergence in the Bering Strait (Fig. 5e). At the same time, at the Uelen station, additional wind amplification can be associated with a downslope windstorm during south-easterly and southerly flow (the maximum wind speed reaches 35 m/s); much less often, storm and hurricane winds were noted with north-westerly and north-easterly directions. The frequency of storm and hurricane winds is about 12 synoptic terms per year. According to SAR data and ASR v.2, the tip jet effect is even more pronounced on the opposite side of the Bering Strait, at Cape Prince of Wales. Analyzing observations on the Severnaya Zemlya Archipelago, at the weather station of the ice base "Cape Baranov" (north of the Shokalsky Strait) during 2013-2018 (http://www.aari.ru/main.php?lg=0&id=405), one can note that all cases of storm wind (up to 31 m/s) were observed mainly in winter and during south-westerly wind directions coinciding with the axis of the Shokalsky Strait. The frequency of such winds is on average 29 times per year. SAR data show that additional amplification of the wind also occurs due to the tip jet effect of Cape Baranov (Fig. 5d). During north-easterly flow, the maximum wind speed is observed in the south-western part of the strait and reaches (according to the few satellite data available) 22-24 m/s. c) Downslope windstorms Downslope windstorms are observed in various regions of the Russian Arctic; some of them are well known, but others are little described in the literature. Unfortunately, the use of satellite data for the study of downslope windstorms is of little help, because the areas of strong wind are tied to the lee mountain slopes and almost do not propagate far downstream to the open sea. This is true for all downslope windstorms discussed in this section, as shown by the satellite data analysis. SAR images can only help to determine the potential presence of windstorms by the presence of gap flows downstream from the passes and fjords, which usually accompany bora-type winds and are perfectly visible from space (Gavrikov and Ivanov 2015; Ivanov 2016), at least on Novaya Zemlya (Fig. 6b) and Svalbard and in the north of the Anadyr Bay, where these jets are the most obvious. Svalbard is one of the areas with well-known downslope windstorms, usually called foehns (e.g. Skeie and Gronas, 2000, Migała et al. 2008).
However, the weather stations (including Barentsburg) are located in such a way that the amplification of the wind during windstorms is not represented adequately (as they are in the wake of the mountains most of the time). High-resolution simulation results using the mesoscale atmospheric model WRF-ARW (not shown) and also ASR (Figs. 1 and 4) show that the high-velocity regions are localized directly on the lee slopes. In Ny-Alesund, the most representative station among others (due to its proximity to the leeward slope), the downslope windstorm never exceeds 25 m/s; a wind speed of 20-25 m/s is observed in 1.2% of cases, and in more than half of the cases it does not reach 12 m/s. On average, there are 40 episodes of downslope windstorm per year. Relatively strong windstorms (wind speed > 20 m/s) are observed only in the cold season, with a peak in February-March. The Novaya Zemlya bora is also a well-known downslope windstorm (Moore, 2013, Ivanov, 2016, Efimov and Komarovskaya, 2018, Shestakova and Moiseenko, 2018). The weather station Malye Karmakuly is situated on the western coast of the South Island, where the ridge height is about 500 m. On average, 60 episodes of eastern bora per year are observed at the station. The percentage of terms with bora from the total number of synoptic terms is on average 23%, and in January it reaches 40%. The maximum 10-min wind speed observed during bora is 48 m/s. Strong bora is most often observed in the cold season (with a peak in January), although weak and moderate bora is observed quite often even in summer. Foehn winds in the Tiksi region are mentioned in the Laptev Sea Sailing Directions (GUNiO 2009), but there are no other references to this wind in the literature. The direction of a very strong wind in Tiksi is strictly south-west, that is, the strengthening of the wind should be connected with the flow over the Verkhoyansk ridge, with a height of up to 500 m in the Tiksi region. On average, there are 36 episodes of downslope windstorm per year in Tiksi. A strong windstorm (> 30 m/s) occurs every 5 years (only in the cold season). However, most of the strong episodes were observed during the period 1966-1982, and in the subsequent period the frequency of storm and hurricane winds decreased (from 12% to 5%). The greatest frequency of windstorms is observed in December-January. Extreme winds in Pevek are associated with the so-called "Yuzhak", a south-east windstorm on the northern slope of a small ridge (Shapaev 1951, Zimich 1991). For the period 1985-2018, the maximum recorded speed of the "Yuzhak" is 38 m/s. In approximately 20% of cases, the wind speed exceeds 20 m/s, and the proportion of hurricane velocities is 1-2%. According to the climate study of the Yuzhak in (Zimich 1991), this phenomenon is observed 62 days a year; windstorms occur more often in the warm season, but hurricane speeds are most often observed in the cold period (November-December). November is characterized by the maximum number of episodes of strong windstorm. On Wrangel Island, a very strong wind (more than 30 m/s) is practically not observed, but there is a high frequency of northern storm wind (with a maximum speed of 25-30 m/s), a bora wind, which is mentioned in (Pastusiak 2016). The weather station is located on the southern coast of Wrangel Island, at the foot of a ridge with heights of 300-1000 m. On average, there are 44 episodes of bora per year observed at the weather station. As in Tiksi, stormy winds were more often observed in 1966-1982 and became less common in subsequent years.
In June-July, the number of synoptic terms with bora is about 4%; bora is most frequent in the October-November period (20-25% of terms). The strongest bora is observed in November-March. Satellite SAR data can also detect downslope windstorms in areas where there are no in-situ observations. In subarctic regions, an analysis of satellite data showed the presence of a downslope windstorm in the north of the Anadyr Bay, which is confirmed by reanalysis data (Fig. 7 shows wind amplification according to ASR v.2 and a SAR image, but for different dates, because the time spans of these two data sources overlap little). Between Kresta Bay (where the Egvekinot station is situated) and Providence Bay, ASR v.2 demonstrates a high frequency of very strong northerly winds (Fig. 4). Judging by the type of orography and the structures in the satellite images, gap flows are combined there with downslope windstorms. However, due to the coarse orography in ASR v.2, the multiple jets observed in nature (Fig. 7a) are represented by only one jet (Fig. 7b). In addition, downslope windstorms were discovered on the islands of Severnaya Zemlya (Fig. 6a shows an example of a windstorm in the north of Komsomolets Island during southerly flow). However, the statistics of windstorms in the Anadyr Bay and on Severnaya Zemlya are unknown. The reproducibility of downslope windstorms in the reanalysis is determined primarily by the orography scales. On Svalbard, on Novaya Zemlya, in Tiksi and on Wrangel Island the mountain ranges have a large scale; therefore, the descent of the air along the lee slopes associated with windstorms is reproduced with good accuracy: the percentage of terms with windstorms reproduced by the reanalysis is 70-80%, the average wind speed bias is 1-2 m/s, and the correlation coefficient is 0.8-0.9 (Table 2). In Pevek, the ridge has the smallest scale (about 6 km wide and 12 km long) and is subgrid in ASR v.2 and especially in other reanalyses. Windstorms in Pevek are reproduced only in 50% of cases, and these cases are coincidences of the windstorm with large-scale wind amplification in the reanalysis. Thus, the average wind speed bias in Pevek is 5 m/s, and the correlation coefficient is only 0.3 (Table 2). Conclusion Combining various sources of wind data (satellite data, ground-based observations, high-resolution reanalysis), we were able to draw a more detailed picture of extreme wind in the Russian Arctic. Extremely strong winds on the coasts of the Russian Arctic can have a synoptic nature (including the coast of the Kola Peninsula) or occur due to orographic effects (Fig. 8 shows only those where the amount of data is sufficient to study the wind statistics): tip-jet effects, flow convergence in straits and mountain passes (for example, on Spitsbergen, Novaya Zemlya, Severnaya Zemlya, and in the Anadyr Bay). But undoubtedly the most dangerous winds are downslope windstorms, especially in the area of Novaya Zemlya, Tiksi and Pevek, where they often reach hurricane force. The ASR v.2 reanalysis underestimates the maximum wind speeds at all coastal stations, but reproduces most downslope windstorms and other orographic winds in general.
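The station-versus-reanalysis verification summarized in Table 2 rests on simple paired statistics: the share of observed windstorm terms that the reanalysis reproduces, the mean wind-speed bias during those terms, and the correlation coefficient. The sketch below is one possible way to compute such metrics from co-located time series; the matching procedure and the synthetic data are illustrative assumptions, not the authors' exact workflow.

```python
# Illustrative verification sketch: comparing station wind speeds with
# co-located reanalysis values over the same 3-hourly terms.
import numpy as np

def verify(station: np.ndarray, reanalysis: np.ndarray, threshold=8.0):
    """Both arrays hold wind speed (m/s) at identical synoptic terms."""
    storm_obs = station >= threshold
    storm_rea = reanalysis >= threshold
    reproduced = np.logical_and(storm_obs, storm_rea).sum() / max(storm_obs.sum(), 1)
    bias = float(np.mean(reanalysis[storm_obs] - station[storm_obs]))   # mean bias during observed windstorms
    corr = float(np.corrcoef(station, reanalysis)[0, 1])                # Pearson correlation coefficient
    return {"reproduced_share": float(reproduced), "bias_ms": bias, "corr": corr}

# toy example with synthetic series standing in for observations and reanalysis
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 4.0, 500)
rea = 0.8 * obs + rng.normal(0.0, 2.0, 500)
print(verify(obs, rea))
```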
2020-04-23T09:08:04.066Z
2019-11-08T00:00:00.000
{ "year": 2019, "sha1": "09fdf40b4e5575511568dd67c05ff1d857a1e0a9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.37828/em.2019.25.2", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "eff3c7a6188e275da852310af64ce0961119f667", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
10180799
pes2o/s2orc
v3-fos-license
Expression levels of the JAK/STAT pathway in the transition from hormone-sensitive to hormone-refractory prostate cancer The main cause of prostate cancer-related mortality is the development of hormone-refractory disease. Circulating serum levels of IL-6 are raised in hormone-refractory prostate cancer patients and evidence from cell line studies suggests that the IL-6R/JAK/STAT3 pathway may be involved in development of this disease. In the current study we investigate if expression levels of these family members are implicated in the development of hormone-refractory prostate cancer. Immunohistochemistry using IL-6R, JAK1, STAT3, pSTAT3Tyr705 and pSTAT3Ser727 antibodies was performed on 50 matched hormone-sensitive and hormone-refractory tumour pairs. An increase in expression of cytoplasmic IL-6 receptor with the development of hormone-refractory prostate cancer was associated with reduced time to relapse (P=0.0074), while an increase in expression of cytoplasmic pSTAT3Tyr705 was associated with reduced patient survival (P=0.0003). In addition, those patients with high expression of cytoplasmic pSTAT3Tyr705 in their hormone-refractory tumours had significantly shorter time to death from biochemical relapse and overall survival in comparison to those patients with low expression of cytoplasmic pSTAT3Tyr705 (P=0.002 and P=0.0027, respectively). Activation of STAT3 via phosphorylation is associated with reduced patient survival, suggesting that activation of the IL-6R/JAK/STAT3 pathway is involved in the development of hormone-refractory prostate cancer. Carcinoma of the prostate (CaP) is an increasing healthcare problem. In the UK prostate cancer is the most common male malignancy and is the second main cause of cancer-related deaths among men. The majority of prostate cancer patients present with locally advanced or metastatic disease, which may be treated using androgen ablation therapy. Response rates to androgen ablation therapy are initially high (70-80%); however, most patients relapse with resistance to androgen ablation within 18-24 months, termed as developing hormone-refractory prostate cancer (Beemsterboer et al, 1999). The lack of effective therapies directed against hormone-refractory prostate cancer is related to the poor understanding of the molecular mechanisms that drive progression to this refractory state (McEwan, 2004). One possible mechanism underlying the development of hormone-refractory prostate cancer is upregulation of the IL-6 receptor/JAK/STAT3 cascade. As prostate cancer progresses from hormone-sensitive to hormone-refractory, the circulating concentrations of IL-6 in the serum of patients increase (Fearon et al, 1991; Drachenberg et al, 1999; Heinrich et al, 2003). It is postulated that this results in activation of the IL-6 receptor/JAK/STAT3 cascade (Hobisch et al, 1998), which has previously been reported to increase androgen receptor (AR) activity (Culig, 2003). Indeed, the increase in proliferation rate observed in prostate cancer cell line models in response to IL-6 has been demonstrated to be via activation of STAT3 (Dhir et al, 2002; Godoy-Tundidor et al, 2005; Sanford and Dewille, 2005). In addition, in vitro studies have demonstrated that IL-6-dependent activation of the JAK/STAT3 pathway is accompanied by transition from hormone-sensitive to hormone-insensitive prostate cancer cell growth (Lee et al, 2003).
LNCaP cells normally undergo apoptosis when androgens are withdrawn; however, treatment with IL-6 or transfection with constitutively active STAT3 results in protection of the cells from apoptosis and therefore resistance to androgen deprivation (Lee et al, 2004). Inhibition of STAT3 activation results in the induction of apoptosis in cells even in the presence of IL-6 (Barton et al, 2001, 2004). The hypothesis that STAT3 is involved in the development of hormone-refractory prostate cancer is further supported by the observation that levels of activated STAT3 are significantly higher in AR-negative cells (DU145 and PC3) than in AR-positive cells (LNCaP) (Mora et al, 2002). STAT3 activation may therefore act to promote cell growth and survival in hormone-refractory prostate cancer independent of the AR. Conversely, STAT3 has been implicated in the development of hormone-refractory disease via interaction with the AR (Culig, 2003, 2004). In LNCaP cells the activated dimer of STAT3 binds ligand-free AR in the cytoplasm before entering the nucleus, facilitating the translocation of the AR into the nucleus in the absence of androgens (Matsuda et al, 2001; Chen et al, 2002). Functional cell line studies demonstrate that the AR/STAT-3 complex can promote androgen-regulated gene transcription even in the absence of androgens (Chen et al, 2002; Trachtenberg and Blackledge, 2002; Yamamoto et al, 2003). This mechanism is supported by data that demonstrate IL-6 can activate the AR in a ligand-independent manner (Ueda et al, 2002; Corcoran and Costello, 2003). Evidence in clinical tissue to support these in vitro observations is sparse, although it is reported that IL-6 receptor expression is eightfold higher in prostate cancer tissue compared to normal tissue (Giri et al, 2001) and that phosphorylated STAT3 is observed in 82% of human prostate tumours, with expression levels correlating with Gleason score (Barton et al, 2004). In summary, despite the large number of in vitro functional reports implying a role of the IL-6R/JAK/STAT3 pathway in prostate cancer progression, there appears to be little data confirming the role of this pathway in the development of clinical hormone-refractory prostate cancer. This study investigates both the expression levels and activation of the IL-6R/JAK/STAT3 pathway in matched hormone-sensitive and hormone-refractory tumours from the same patient. This will enable us to assess if changes in expression and activation of pathway members are associated with development of hormone-refractory prostate cancer. Therefore, we aim to identify whether inhibition of this pathway would lead to improved patient outcome after progression to hormone-refractory prostate cancer. Patients Fifty patients were retrospectively selected for this study. Ethical approval was obtained from the Multi-centre Research Ethics Committee (MREC Scotland) and local research ethics committees. Inclusion criteria for this study were that each patient was required to have both hormone-sensitive and hormone-refractory tumours available for analysis. Tumours were defined as hormone sensitive if PSA fell by at least 50% during hormone treatment and subsequently hormone refractory if two consecutive rises in serum PSA of >10% were observed during hormone therapy. Immunohistochemistry All antibodies used in this study had specificity confirmed by western blotting and on paraffin-embedded cell pellets known to express the proteins of interest (LNCaP and MCF-7 cells).
IHC was performed on 5 μm, archival formalin-fixed paraffin-embedded prostate tumour sections. Two methods of antigen retrieval were used: sections were microwaved under pressure (15 psi) in TE solution (5 mM Tris base, pH 8.0 and 1 mM sodium EDTA) (JAK1, 3332; pSTAT3 Tyr705, 9131; and pSTAT3 Ser727, 9134, Cell Signaling Technology) or incubated in 10 mM citrate buffer (epitope retrieval solution ×10, DakoCytomation, Glostrup, Denmark) in a calibrated water bath at 96°C for 20 min (IL-6R, C20, SC-661, Santa Cruz and STAT3, 9132, Cell Signaling Technology). Non-specific background staining was blocked using 1.5% (v v−1) normal horse serum in tri-phosphate buffered saline and incubated for 20 min at room temperature. All antibodies were incubated overnight at 4°C. The concentrations used for each antibody were as follows: IL-6R 1:500, JAK1 1:100, STAT3 1:100, pSTAT3 Tyr705 1:50 and pSTAT3 Ser727 1:50. Staining was developed using the LSAB plus kit (DakoCytomation) and the chromogen was detected using 3,3'-diaminobenzidine (Vector Labs, UK). A positive and negative control slide was included in each IHC run; negative controls were incubated in an isotype-matched control antibody at a concentration of 1 mg ml−1. Scoring criteria Tissue staining was scored blind by two independent observers using a weighted histoscore method (Kirkegaard et al, 2006), also known as the H score system (McCarty et al, 1986). The full tissue section was examined and the expression score was calculated as follows: (1 × % cells staining weakly positive) + (2 × % cells staining moderately positive) + (3 × % cells staining strongly positive). The maximum score was 300. An interclass correlation coefficient (ICCC) for each protein was calculated by SPSS for Windows to confirm consistency between observers, and the mean of the two observers' scores was used for analysis. An ICCC of greater than 0.7 is considered excellent (Kirkegaard et al, 2006). Statistical analysis Statistical analysis was performed using SPSS for Windows. Descriptive analysis was used on variables such as age at diagnosis, serum PSA (pre- and post-relapse), Gleason sum, time to biochemical relapse, time to death from biochemical relapse and overall survival. Median and inter-quartile (IQR) ranges were calculated from these analyses. To determine if there was a change in expression in progression from hormone-sensitive to hormone-refractory disease, a Wilcoxon signed-rank test was used to compare the hormone-sensitive expression score to the hormone-refractory expression score for each protein and each location. Survival analysis was performed using Kaplan-Meier curves and the log-rank test; low expression was defined as an expression score less than or equal to the median and high expression as an expression score greater than the median. A change in expression level in matched hormone-sensitive and hormone-refractory tumours was defined as the mean difference between the expression scores that each observer assigned for protein expression plus two standard deviations. The number of histoscore units defined as a change in expression for each individual protein is shown in Table 1.
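To illustrate the weighted histoscore and the change-in-expression threshold defined above, the short sketch below reproduces the arithmetic; the staining percentages and inter-observer differences are invented for demonstration only and do not correspond to any patient in the cohort.

```python
# Illustrative sketch of the weighted histoscore (H-score) and the
# change-in-expression threshold described above; all numbers are invented.
import statistics

def histoscore(weak_pct, moderate_pct, strong_pct):
    """Weighted histoscore: (1 x %weak) + (2 x %moderate) + (3 x %strong); maximum 300."""
    return 1 * weak_pct + 2 * moderate_pct + 3 * strong_pct

def change_threshold(observer_diffs):
    """Mean inter-observer difference plus two standard deviations (cf. Table 1)."""
    return statistics.mean(observer_diffs) + 2 * statistics.stdev(observer_diffs)

# matched hormone-sensitive and hormone-refractory tumour from the same patient
sensitive = histoscore(weak_pct=30, moderate_pct=10, strong_pct=5)      # 65
refractory = histoscore(weak_pct=20, moderate_pct=30, strong_pct=25)    # 155
threshold = change_threshold([4, 9, 6, 12, 7, 10])                      # invented observer differences
print(f"increase of {refractory - sensitive} units; counted as a change if > {threshold:.1f}")
```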
Patient characteristics Fifty pairs of hormone-sensitive and hormone-refractory prostate cancer tumours were analysed. The median age at diagnosis was 70 (IQR 64-73) years and the median PSA at diagnosis was 24.6 (IQR 6.4-79.8) ng ml−1. The median time to biochemical relapse was 2.55 (IQR 1.55-5.24) years, the median time to death from hormone relapse was 1.49 (IQR 0.98-2.14) years, and the median overall survival was 5.82 (IQR 3.03-6.78) years. The median Gleason sum was 8 (IQR 6-9) for hormone-sensitive tumours and 9 (IQR 8-9) for hormone-refractory tumours. The range of Gleason sum for both tumour types was 2-10. All patients received chemical or surgical castration and 39 also received antiandrogens. (Table 1 note: these values were calculated from the mean observer difference plus 2 standard deviations; the inter-class correlation coefficients are also shown, demonstrating consistent scoring.) Protein expression The variation in observer scoring was calculated by ICCC. An ICCC of 0.7 is classed as excellent and an ICCC of 1 indicates identical scores. All scorer variations assessed in this study by ICCC consistently achieved an ICCC of 0.7 or above; ICCC values for each protein at each cellular location are given in Table 1. Membrane and cytoplasmic expression was observed for the IL-6 receptor, while only cytoplasmic expression was observed for JAK1 and STAT3. Cytoplasmic and nuclear expression was seen for both phosphorylated STAT3 proteins. An example of an increase in expression for the IL-6 receptor and pSTAT3 Tyr705 in the transition from hormone-sensitive to hormone-refractory disease is shown in Figure 1. Protein expression levels in hormone-sensitive and hormone-refractory tumours To assess if protein expression levels were associated with clinical endpoints (relapse and survival), Kaplan-Meier graphs were plotted for tumours expressing low levels of specific proteins compared to high levels. Those patients that expressed high levels of cytoplasmic pSTAT3 Tyr705 in their hormone-refractory tumour had significantly shorter time to death from biochemical relapse than those patients with low cytoplasmic pSTAT3 Tyr705 expression (P = 0.002, hazard ratio 4.25 (95% CI 1.59-11.34)) (Figure 2A). This also translated into significantly shorter overall survival (P = 0.0027, hazard ratio 2.87 (95% CI 1.39-5.92)) (Figure 2B). The median overall survival in those patients whose tumours expressed high levels was 3.77 (IQR 1.11-6.43) years compared to 7.55 (IQR 6.69-8.41) years for those whose tumours expressed low levels. A trend with overall survival was also noted for nuclear pSTAT3 Tyr705 in hormone-refractory prostate cancer; however, due to the lines crossing after 8 years this was not significant (P = 0.250) (Figure 2C). The median overall survival for those patients whose tumours had low expression was 7.49 (IQR 6.52-8.57) years compared to 5.82 (IQR 3.49-8.15) years for those patients whose tumours expressed high levels of nuclear pSTAT3 Tyr705. Expression levels of all other proteins in hormone-sensitive or hormone-refractory tumours were not associated with time to relapse, time to death from relapse, or overall survival. When pSTAT3 Tyr705 and pSTAT3 Ser727 expression levels were divided by Gleason (<7, 7 or >7), no change in expression was observed in the cytoplasm or the nucleus. In addition, the ratio of pSTAT3 Tyr705 and pSTAT3 Ser727 did not correlate with Gleason. Changes in protein expression with the development of hormone-refractory prostate cancer An increase in cytoplasmic IL-6 receptor expression from hormone sensitive to hormone refractory was associated with reduced time to biochemical relapse (P = 0.0074) (Figure 3).
The median time to relapse for patients whose tumours had a decrease or no change in expression with the development of hormone-refractory prostate cancer was 2.97 (IQR 1.89-4.07) years compared to 1.18 (IQR 0.45-1.92) years for those patients whose tumours exhibited a rise in expression. An increase in expression of cytoplasmic pSTAT3 Tyr705 with progression to hormone-refractory prostate cancer was also associated with a reduction in overall survival (P = 0.0003, hazard ratio 4.52 (95% CI 1.85-11.52)) (Figure 4A). The median overall survival for those patients whose tumours exhibited a decrease or no change in expression was 7.54 (IQR 6.52-8.57) years compared to 5.51 (IQR 2.77-8.26) years for those patients whose tumours exhibited a rise in expression. Changes in the expression levels of all the other proteins investigated in the transition from hormone-sensitive to hormone-refractory tumours were not significantly associated with time to relapse, time to death from relapse, or overall survival. Figure 1 Images of matched hormone-sensitive and hormone-refractory prostate tumours whose expression increased in the transition from hormone-sensitive to hormone-refractory disease (upper panel: IL-6 receptor; lower panel: pSTAT3 Tyr705; all panels ×400). Positive staining is brown in colour and is indicated by arrows according to their location: M, membrane; C, cytoplasm; and N, nucleus. Counterstaining is blue and is represented in the stroma (S). Figure 2 (A) Kaplan-Meier plot comparing time to death from biochemical relapse for those patients with hormone-refractory tumours with high cytoplasmic pSTAT3 Tyr705 expression (solid line) (27 patients) compared to those patients with hormone-refractory tumours with low cytoplasmic pSTAT3 Tyr705 expression (broken line) (23 patients) (P = 0.002, hazard ratio 4.2 (95% CI 1.59-11.34)). (B) Kaplan-Meier plot comparing overall survival for those patients with hormone-refractory tumours with high cytoplasmic pSTAT3 Tyr705 expression (solid line) (19 patients) compared to those patients with hormone-refractory tumours with low cytoplasmic pSTAT3 Tyr705 expression (broken line) (31 patients) (P = 0.0027, hazard ratio 2.87 (95% CI 1.39-5.92)). (C) Kaplan-Meier plot for high pSTAT3 Tyr705 protein expression in the nucleus vs low nuclear pSTAT3 Tyr705 protein expression in hormone-refractory prostate tumours. Those patients with high pSTAT3 Tyr705 expression (solid line) (22 patients) had shorter overall survival compared to those with low pSTAT3 Tyr705 expression (broken line) (28 patients); however, this was not a significant change (P = 0.25). Figure 3 Kaplan-Meier plot for an increase in IL-6R protein expression vs no change or a decrease in IL-6R protein expression in the transition from hormone-sensitive to hormone-refractory disease. Those patients with an increase in IL-6R expression (solid line) (six patients) had significantly shorter time to biochemical relapse compared to those with no change or a decrease in IL-6R expression (broken line) (45 patients) (P = 0.0076, hazard ratio 3.45 (95% CI 1.31-9.07)). DISCUSSION Approximately 50% of patients with advanced prostate cancer have elevated levels of serum IL-6 in comparison with men with normal prostates, benign prostatic hyperplasia, prostatitis and localised disease (Twillie et al, 1995; Drachenberg et al, 1999).
Figure 4 Kaplan-Meier plot for an increase in pSTAT3 Tyr705 protein expression in the cytoplasm vs no change or a decrease in pSTAT3 Tyr705 protein expression in the transition from hormone-sensitive to hormone-refractory disease. Those patients with an increase in pSTAT3 Tyr705 expression (solid line) (nine patients) had significantly shorter overall survival compared to those with no change or a decrease in pSTAT3 Tyr705 expression (broken line) (50 patients) (P = 0.0003, hazard ratio 4.52 (95% CI 1.85-11.52)). In addition, IL-6 has been associated with progression from hormone-sensitive to hormone-insensitive disease in animal models via interaction with AR cofactors (Wallner et al, 2006). One possible mode by which IL-6 may influence progression of prostate cancer to the hormone-refractory state is by activating the IL-6 receptor/JAK1/STAT3 pathway, resulting in differentiation and inhibition of apoptosis (Spiotto and Chung, 2000a; Smith et al, 2001). Overexpression of the IL-6 receptor in androgen-sensitive human LNCaP cells results in the conversion to androgen-independent growth both in vitro and in vivo (Lee et al, 2003), and depletion of IL-6 results in decreased proliferation of hormone-insensitive cells but not hormone-sensitive cells (Lou et al, 2000). The IL-6 receptor is predominantly located in the cell membrane (Lou et al, 2000); however, the IL-6/IL-6 receptor complex may be taken into the cell by endocytosis as part of the protein-recycling process (Nesbitt and Fuller, 1992). This is the cell's method of downregulating receptors once the ligand has produced the appropriate signal (Nesbitt and Fuller, 1992). Therefore, as the IL-6 receptor, unlike the other signalling proteins in the pathway, does not have a phosphorylated form, the localisation of the IL-6 receptor to the cytoplasm may be used as a surrogate marker for activation. This study demonstrates an association between an increase in cytoplasmic IL-6 receptor expression with the development of hormone-refractory prostate cancer and time to biochemical relapse. However, these results should be treated with caution as only 12% (six patients) of the patients in our cohort exhibited this increase. We would therefore aim to increase our cohort size to confirm these results. JAK1 and STAT3 are proteins that become activated in sequence upon the binding of IL-6 to the IL-6 receptor (Schindler and Darnell, 1995). STAT3 is phosphorylated at two different sites, tyrosine 705 and serine 727. The tyrosine kinase JAK1 phosphorylates STAT3 at tyrosine 705, while the kinase(s) that mediate serine phosphorylation remain to be determined; evidence suggests that mitogen-activated protein kinase may be responsible (Briscoe et al, 1996a, b). Immunohistochemical staining in our cohort of patients has demonstrated that both phosphorylated forms of STAT3, pSTAT3 Tyr705 and pSTAT3 Ser727, are found in the cytoplasmic and nuclear compartments of the cell. STAT3 dimerisation occurs in the cytoplasm before it enters the nucleus (Corcoran and Costello, 2003); the presence of activated STAT3 in the cytoplasm therefore provides a 'snapshot' of activated STAT3 before it enters the nucleus. Although it has been previously reported that it is serine phosphorylation at 727 that modulates the DNA binding and/or transcriptional activity of STAT3 dimers (Dhir et al, 2002), our data show no evidence of pSTAT3 Ser727 being associated with development of hormone-refractory prostate cancer.
However, it has been reported that phosphorylation of STAT3 at serine 727 via ERK1 and/or 2 negatively regulates STAT3 activity (Kim et al, 2002; Tian et al, 2004); this supports our data that pSTAT3 Ser727 is not associated with the development of hormone-refractory prostate cancer or prostate cancer patient survival. There does, however, appear to be a significant role for pSTAT3 Tyr705 in the progression to hormone-refractory prostate cancer. High expression of cytoplasmic pSTAT3 Tyr705 in hormone-refractory prostate cancer tissue is associated with shorter time to death from hormone relapse and shorter overall survival. In addition, an increase in pSTAT3 Tyr705 with the development of hormone-refractory prostate cancer is associated with shorter overall survival. These results fit with the hypothesis that activation of the IL-6 receptor/JAK1/STAT3 pathway is involved in the development of hormone-refractory prostate cancer. However, nuclear expression of pSTAT3 Tyr705 was not associated with clinical parameters in this study. This may be due to the fact that, although phosphorylation of tyrosine 705 increases STAT3 activity, it is not sufficient to induce its relocation from the cytoplasm to the nucleus. Dimerisation of STAT3 is required for its translocation to the nucleus and this is regulated by reversible acetylation of a single lysine (Lys) residue (position 685) (Yuan et al, 2005). More research is therefore required to understand the exact mechanisms for activation and translocation of STAT3 to the nucleus and also the consequence of STAT3 phosphorylation in clinical prostate cancer tissue. Cell line studies demonstrate that STAT3 activation results in an increase in proliferation and induces neuroendocrine differentiation; although outwith the scope of the current study, these parameters warrant further investigation in clinical tissue to establish the route by which STAT3 influences prostate cancer patient survival (Spiotto and Chung, 2000b). In summary, these data support the hypothesis that the IL-6 receptor/JAK1/STAT3 pathway is activated in the progression of hormone-refractory prostate cancer. Cytoplasmic expression of the IL-6 receptor and pSTAT3 Tyr705 is associated with reduced time to biochemical relapse and reduced time to death from hormone relapse, respectively, therefore supporting the strategy of targeting this pathway in hormone-refractory prostate cancer treatments. A recent report demonstrates that this pathway can be targeted and successfully inhibited in other diseases using a humanised monoclonal antibody that targets the IL-6 receptor, in a similar manner to which herceptin targets HER2 (Jia et al, 2004; Nakahara and Nishimoto, 2006; Nishimoto and Kishimoto, 2006), and this approach has proved successful in prostate cancer animal models (Wallner et al, 2006). Use of this drug (tocilizumab) in phase II clinical trials for rheumatoid arthritis has proved the clinical benefit of IL-6 blockade and we suggest that such a strategy should be applied to hormone-refractory prostate cancer (Nishimoto and Kishimoto, 2006).
2014-10-01T00:00:00.000Z
2007-06-26T00:00:00.000
{ "year": 2007, "sha1": "825d667e84fc5c3a40b3ff7c64d27216a271e895", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/6603871.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5b967a546eca0412232d70d02c95dac528d29938", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
262036228
pes2o/s2orc
v3-fos-license
What went right? A collaborative process to prepare a city forest management strategy ABSTRACT We analyze a multi-stakeholder process that succeeded in creating a joint forest management strategy for the city of Jyväskylä, Finland. The analysis draws on the participants’ own account of the process, elicited via interviews and questionnaires. We attend to critical context and process factors to account for the success of the collaborative process and evaluate the effectiveness of the agreement in terms of ecological and social outcomes. The process created a practical agreement, which increased the share of protected forests and introduced new biodiversity protection measures for commercial and recreational forests. It also created innovative solutions, like the new concept of a nature value forest, which helped the parties to negotiate around their differences. However, disagreement over the impacts of forest management practices, especially continuous cover forestry, remained. The crucial contextual conditions contributing to the agreement were strong initiating leadership and political mandate, which motivated the participants to engage in collaborative dialogue and stick with the process. The key process factors were a third-party facilitation and joint fact-finding. Most importantly, the process was not constrained by a pre-defined agenda or assumptions related to the status quo, but the participants were granted considerable influence over decisions and outputs. Introduction An ongoing trend in environmental management is a shift from primarily state-initiated, regulatory strategies toward collaborative governance, which relies on dialogue and cooperative relations between governmental bodies, non-governmental organizations, and private interests (Margerum & Robinson, 2016).Ansell and Gash (2008, p. 544) have defined collaborative governance as 'a governing arrangement where one or more public agencies directly engage non-state stakeholders in a collective decision-making process that is formal, consensusoriented, and deliberative and that aims to make or implement public policy or manage public programs or assets.'It is an umbrella term that covers diverse inclusive approaches, including deliberative governance (Healey et al., 2003), consensus-building (Innes, 2004), and communicative planning practices (Forester, 1999). 
The proponents of collaborative governance maintain that it can minimize destructive conflicts by providing an orderly forum for interest articulation and create innovative and efficient solutions for public policy problems (Ansell & Gash, 2008;Innes & Booher, 2003).It is anticipated to build trust and social capital and allow the participants to pool their knowledge and resources in an effective manner (Emerson & Nabatchi, 2015;Healey et al., 2003).However, collaboration is not a panacea, and cooperative processes can go wrong in several ways.While token and half-hearted participatory processes are doomed to fail, serious efforts can also come to nothing (see, e.g.Booth & Halseth, 2011;Walker & Hurley, 2004).As Forester (2006) points out, participatory processes can be messy, unpredictable, and uncertain, and they are threatened by inequalities of power, income, and information.Some critics are concerned about the ability of collaborative processes to include weaker groups and insurgent voices (Dryzek, 2000;Hillier, 2003), while others caution that collaboration can produce watered-down compromises that jeopardize ecological sustainability goals (Koontz & Thomas, 2006;Singleton, 2000).Given the precariousness of collaborative processes, one of the most intractable questions that concerns researchers is to determine whether and how collaborative processes produce improved outcomes across ecological, economic, and social domains (Lindgren et al., 2021). In this paper, we analyze a multi-stakeholder process to prepare a forest strategy for the city of Jyväskylä, Finland.The cooperation group succeeded in creating a joint forest program that all participants approved.However, the agreement among a group of contentious stakeholders, representing forest sectors actors, environmental organizations, businesses, and residential as well as recreational interest groups, was at no point guaranteed; the process had several critical moments when it was about to fall apart.We zoom in to these critical instances and explore the forest program process from the inside, through the participants' voices (see Forester, 1999), to understand why and how the collaborative effort was sustained.We also attend to critical context and process factors to account for the success of the process and evaluate the effectiveness of the agreement in terms of ecological and social outcomes, drawing on recent advances in collaborative governance evaluation studies (Ansell & Gash, 2008;Emerson & Nabatchi, 2015;Innes, 2004;Reed et al., 2018).Our narrative in-depth account of the forest program process aims to add nuance to our understanding of how collaborative processes can be carried out for the better or the worse.We also provide new empirical evidence of the outcomes of collaborative environmental governance, suggesting that well-conducted collaborative processes can indeed create innovative solutions and lead to improved environmental outcomes.Furthermore, we illustrate the challenges of engaging ordinary citizens in multi-stakeholder dialogue processes and propose ways to include diverse citizen interests in collaborative environmental governance practices. 
Outcomes and conditions for successful collaboration Participatory multi-stakeholder processes have become increasingly common in addressing complex and contentious environmental management problems (Lindgren et al., 2021;Margerum & Robinson, 2016).They can include collaborative governance regimes, in which multi-party interaction represents the prevailing pattern of activity (Emerson & Nabatchi, 2015), but they can also be one-time collaborative efforts set up to prepare a joint strategy and/or solve a public policy dispute (Innes & Booher, 2003). The success of the collaborative processes can be evaluated in terms of their outcome: Did they create organizational or other benefits for the participants; and did they meet the convenors' expectations of the process?And did they contribute to socially just and environmentally sustainable outcomes that were regarded as beneficial and legitimate by political decision-makers and the public at large (Conley & Moote, 2003;Emerson & Nabatchi, 2015)?According to Innes and Booher (2003), a good collaborative process should meet the following outcome criteria: The process ended a stalemate and produced a high-quality agreement in terms of meeting the main interests of all stakeholders.It built new relationships and social capital, which assist the participants' interactions in the future.Perhaps most importantly, the process produced novel and innovative ideas for action and contributed to learning and reflection on the participants' initial beliefs, positions, and interests. Several authors have presented contextual conditions and process factors that affect the outcomes of collaborative processes and determine their success (Ansell & Gash, 2008;Innes, 2004;Reed et al., 2018).Contextual factors, such as interdependencies and power dynamics between the participants and their background organizations, generate the energy and impetus to begin collaboration (Ansell & Gash, 2008).Incentives to participate are low when stakeholders can achieve their goals unilaterally.Effective processes also need initiating leadership as well as a mandate from policymakers and executives who are committed to respect and honor the outcomes of collaborative processes (ibid).Collaboration is particularly suited for situations that require ongoing cooperation, because they create a motivation to develop rapport and social capital among the participants (Ansell & Gash, 2008).Overall, collaboration is more likely to succeed in open and transparent political systems, in which decision-makers are accountable to their constituencies and need to justify their decisions publicly (Reed et al., 2018). 
The process criteria for successful collaboration include balanced engagement of all relevant actors who are affected by or care about the issue; clear ground rules; a self-organizing process, which permits the status quo and all assumptions to be questioned; information that is accessible and fully shared among participants; a face-to-face dialogue, where all are heard and respected; equal opportunities to influence process outcomes; and consensus-oriented decision-making (Ansell & Gash, 2008;Innes, 2004;Reed et al., 2018).Some authors emphasize consensus-based decision-making (Innes, 2004;Susskind et al., 1999), while others grant that consensus is usually unattainable and suggest consensus or compromise-oriented decision-making (Ansell & Gash, 2008).In the presence of significant power and/or resource imbalances between stakeholders, effective collaborative governance requires the empowerment of weaker or disadvantaged stakeholders.Collaboration does not alter the power balance outside the table, but a collaborative process can level the playing field at the table (Innes, 2004).Furthermore, a prehistory of antagonism among stakeholders necessitates positive steps and sufficient time for remedial trust building (Ansell & Gash, 2008).It usually requires the contribution of a trusted third party, such as a neutral facilitator who can address the relationship issues and help the parties to respectfully speak and listen to one another (Ansell & Gash, 2008;Innes, 2004).Finally, effective collaboration requires that the participants are committed to the process and feel ownership of it (Ansell & Gash, 2008). The outcome criteria, as well as the factors influencing the success of collaborative processes, are presented in Figure 1, which depicts the theoretical framework of our study. Methodology Our research approach is a single case study analysis (Eisenhardt & Graebner, 2007) using a combination of neutral, third-party evaluation and participatory evaluation (Conley & Moote, 2003).One of the authors observed the forest program process as part of her MSc thesis while assisting the facilitators.The participant observation approach allowed us to detect group dynamics that the interviews alone could not have captured.The case is a critical case in a sense that it is, in many ways, a best-practice example of collaborative governance.As Innes (2004) points out, the real potential of collaboration can only be evaluated through cases in which the conditions for authentic dialogue are satisfied. 
The data consists of a situation assessment report prepared by the facilitators in the beginning of the process, facilitators' memos (n = 18) of the cooperation group meetings, responses to an online questionnaire by the participants (n = 8) right after the process, and in-depth interviews (n = 9) with the most active or otherwise key participants, who were willing to be interviewed.Three of the 12-person cooperation group either declined the interview or could not be reached.The interview guide and questionnaire are included in the Appendix.The interviews were carried out in August 2018, except one additional interview in April 2020.We also interviewed the senior facilitator of the process in March 2019.The interviews lasted from one to two hours, and they were taped and transcribed.The quotes from the interviews in the text are translated from Finnish and lightly edited for the purposes of readability and anonymity of the interviewees.The interviewees who could be identified were asked to read the manuscript and give their consent to use of the quotes. The data is analyzed in an interpretative and narrative fashion (Miles & Huberman, 1994) to describe the process as the participants experienced it, paying specific attention to the critical moments of the process that either jeopardized or saved collaborative dialogue.The analysis is informed by the analytical categories derived from the theoretical framework (Figure 1). The starting point The city of Jyväskylä is a medium-sized city in central Finland with around 140,000 inhabitants.As a major forest owner, with around 9000 hectares of forest land (Metsäohjelma, 2018), the city faces the challenge of balancing a traditional orientation toward timber production with the newly arising demands for multifunctional forestry, including biodiversity and recreational values.Ecologists have called attention to the role of intensive forestry in decreasing the habitats of endangered species, especially old-growth forest, and pointed out the importance of decayed wood, which is vital for one-quarter of all forest species in Finland (Mönkkönen, 1999).The forest sector has responded to these demands by developing sustainable forestry guidelines and forest certification standards, which ensure protection of the most important habitats and a certain number of retention trees in logging sites.Furthermore, continuous cover forestry was included as an appropriate forest management practice in the amended Forest Act in 2014.Yet, the forestry sector still largely rejects the idea of continuous cover forestry and maintains that it is less productive than periodic forestry with clear cuts (Mäntyranta, 2018). 
These arguments were at play also in Jyväskylä, where the environmental actors advocated for continuous cover forestry and demanded that the city should follow the United Nations Biodiversity Agreement and protect 17% of its forest land by 2020. According to the situation assessment preceding the collaborative process, some environmental non-governmental organizations (ENGOs) even proposed that the city should set an example and completely renounce targets for return of assets from publicly owned forest. The calls for giving up clear cuts were echoed by residential organizations, which have frequently protested individual logging sites near residential areas or popular outing areas. The forest sector emphasized the role of good forest management in ensuring safe and pleasant forests for recreational users, and in ensuring economically viable forestry in commercial forests. In their view, the citizens are best served if old hazardous trees are removed and the forests are kept relatively open and easy to access. They also maintained that new, untested forest management practices are risky and should not be adopted uncritically on a large scale. To address the conflicting demands, the Jyväskylä Urban Planning and City Infrastructure Committee (UPCIC) commissioned a situation assessment to see whether there was scope for, and interest in, participation in a collaborative process and preparation of a joint forest strategy for the city among the key stakeholder groups. The initiative came from the city land use planning agency, which was keen to try out new approaches to engage citizens and stakeholder groups, and also to facilitate city interagency collaboration between forest management and environmental authorities. The timeline of the process is presented in Figure 2.
Mapping the baseline situation and designing the process
In the situation assessment stage, a facilitation company interviewed 24 stakeholder representatives to map out the key interests and concerns related to city forest management as well as their willingness to engage in a collaborative process. The assessment showed that there was indeed scope for cooperation, and the UPCIC decided to set up a cooperation group with the task of preparing a multifunctional forest strategy for the city-owned forest. Given the history of antagonism between the stakeholders, the UPCIC hired the facilitation company to coordinate and run the forest program process. All interviewees were invited to a kick-off meeting in January 2017. The main task of this meeting was to select 15 members for the cooperation group. A 15-member group was considered small enough to allow in-depth dialogue but large enough to be representative of the different perspectives, covering forest management and timber production, biodiversity protection, urban planning, recreational use of forests, and residential interests. The selection of the group members was consensual, and all interested parties could join the group. The group size was later reduced to 12, because some people changed jobs or withdrew for other reasons. During the design phase, the group formulated a working plan, including the schedule, ground rules, and the goal for the process. At this point, the UPCIC promised to ratify the forest program proposed by the cooperation group, assuming that it was unanimous.
Joint fact-finding phase
The joint fact-finding phase included information sharing and presentations between the group members, an expert seminar, field visits, and a map-based questionnaire to the city residents on their forest management preferences. A one-day open seminar was organized in August 2017 to address the open questions identified by the cooperation group. The presentations covered landscape ecological planning, insect damage, continuous-cover silviculture, health and well-being benefits from forests, as well as timber provisioning. Most interviewees regarded the seminar as helpful, but some forestry actors complained that conservation biology questions received too much attention compared to themes that they regarded as 'actual' forestry considerations. The field visits were organized jointly by the forestry and environmental actors. The sites selected by the former included examples of good forest management practices or failed experiments, like one area where small-scale loggings had a negative impact on timber growth, while the sites selected by the latter were those in which forestry operations had negative impacts on nature. The field visits were mentioned by several interviewees as a critical juncture, as they provided the participants with a concrete idea of the impacts of forest management practices on the ground. As one environmental actor said, 'The field visit was probably one of the most important events in this work, and it served as a reference point in the later stages.' The view was echoed by a forestry actor: 'It was a real eye-opener, as many participants had not seen what the different forest management practices proposed during the process actually meant in practice.' The aim of the map-based questionnaire was to find out public preferences for forest use. According to the results, citizens regarded landscape values and recreation as the most important goals of city forest management, and they viewed biodiversity protection as more important than revenue from timber sales. Furthermore, protected areas were the most popular outdoor activity areas. The interviewed group members found that the results provided important input to the process. However, some forestry actors felt that the economic benefits from forestry might have been underestimated because the questionnaire contained a somewhat technical term, 'revenue from timber sales.' They maintained that the results might have been different if the income from timber sales had been formulated in more concrete terms, like more money for the city to organize health services. However, the forestry actors also acknowledged that the key message was that citizens value forests.
Negotiation stage At the negotiation stage, the cooperation group agreed on numerical targets for protected forests and decided on the designation of forest areas for different purposes.This stage was described by several participants as the toughest one.The negotiations were aided with consensus-based decision-making, in which the participants could signal with 'traffic lights' if they approved a proposal, were not entirely comfortable with it, or rejected it outright.The approach was viewed as a relatively quick and simple way to measure the level of agreement in the room.It also helped the participants to understand better how their ideas were received by the group: It was really informative, the more yellows other people raised, the more people had to think that there must be something wrong with this.And it also made you think that ok, they have given me yellow a few times, so at some point, I have to [propose something that also they can approve]. However, by this point, some participants were tired with the lengthy process, while others were frustrated that there was still no concrete output in sight: 'There was a period, around half a year before the finishing line, when everybody was quite strained and we wondered if anything will come out of this.' Some participants were also increasingly uncomfortable with the confrontational tone of the discussions: 'This was very quarrelsome and harrowing, not at all constructive, but people were deep in their trenches and kept shooting at their opponents.'One participant admitted that they used 'quite strong' language to make their case.However, they would have also liked the other participants to say things directly: 'It would be very important for this kind of process that people would speak their minds and tell what they really think instead of mincing words and trying not to offend anyone.' A critical juncture during this phase was a meeting which was facilitated by a junior member of the facilitation team and organized as a plenary without break-out groups due to the attendance of fewer participants than usual.This meeting was described by several participants as chaotic or terrible, and it eroded relationships built in the earlier stages of the process.A further source of trouble was that the composition of the facilitation team varied during the long process, and at one point, some participants felt that a facilitator team member had an environmental bias.One participant recounted the experience of another group member: 'They felt that the facilitator opposed their views, "You cannot think that way", and they were really insulted and said that they were not coming to the meetings anymore.' 
To get the process back on track, the composition of the facilitation team was altered. The team also met all participants in person to give them a proper hearing and elicit suggestions to improve the atmosphere. Furthermore, they organized 'an appreciation round' in which each participant said something positive about the others and their contribution to the process. This was described by one participant as a complete success: 'The comments were really good, and some people were almost moved to tears.' This was a particularly important step for the city forest sector representatives, who had constantly felt that their work was questioned and criticized during the process. These interventions helped to improve the relationships, as indicated by the comment below: 'The consultant [facilitator] was working hard in the background so that we reached a situation in which we could talk to each other like decent human beings, respecting and appreciating one another. It was a huge effort.'
Writing up the program
After the outlines were agreed on, the last step was to put the strategy in writing. This stage also had some critical moments, as the main writing task fell to the city authorities, who were used to working with consultants who deliver reports, not facilitators who only run the process. The authorities found the writing task burdensome and were particularly offended when their work was unduly criticized for purposefully omitting some issues: 'Suddenly we had to pull everything together and start almost from scratch. And then we were blamed for not presenting correctly what had been agreed on in the meetings. It did not feel very nice.' In a similar way, a group member who also contributed to writing the program, which was available for the whole group in a shared drive, was upset, as their contribution was accused of distorting the strategy and adding details that were not agreed on. In their words, 'I was very busy at work […] so I wrote the text during the night, and [then some people claimed] that I had tried to smuggle in my views in the darkness of the night. […] It was at that point when [I thought to give up and] say that ok, keep your program.' The process could have shipwrecked at the finishing line if the facilitators had not intervened and put an end to the destructive email exchange. They reminded the participants about the ground rules and urged them to appreciate the input of those members who provided a significant contribution to the joint document.
The agreement
The cooperation group met altogether 19 times over two years and eventually reached an agreement on the forest program and its implementation plan. The UPCIC approved the program with no changes in June 2018. The feedback from the public hearing of the strategy in May 2018 was very positive.
The new program increased the percentage of protected forest surface area from 13.5 to 17 and the percentage of recreational forests from 36 to 44.Furthermore, the implementation plan specifies several measures to maintain biodiversity in all forests, including 20 retention trees per hectare, which is twice the amount required by the Forest Stewardship Council certification system.Other planned means to increase the amount of decayed wood in the forest are, for example, to retain all dead fallen or standing trunks, and making artificial rotten standing trees in all logging sites.Furthermore, the diversity of tree species will be boosted by increasing the share of deciduous trees in all forests, in the long term, by 5%.In economic terms, the program amounts to around 300,000-euro losses from timber sales for the city. The implementation plan details the forest management goals and measures in different types of forests.The timber production forests will be managed for economic profit and even-aged management and clear cuts are the main forest management methods.The recreational forest, including forests nearby residential areas, will be managed in a way that gives priority to maintaining landscapes and easy access to forests, as well as ensuring the safety of people in the forests.The aim is to create forests with a diverse structure using continuous cover forestry as well as small-scale clear cuts.Clear cuts can be carried out in old spruce forests to create forests with a layered canopy.Decayed trees will be cleared from the vicinity of hiking trails.The protected forests will be managed for biodiversity protection and left to age without any forest management activities.Only hazardous trees will be taken down, but they are left in the forest. All participants approved the forest program as indicated by the following comments: It was a heavy process, but the outcome is good, we are all pleased with it. Well, it did not entail anything that is not impossible to implement […], one just must look at things from a little different perspective now.I fully stand by it. I think it is quite good.It has several elements that an ambitious city forest management plan should have […], although the level of ambition was lower than I would have wanted. According to an independent audit, released in February 2023, the implementation of the forest program was proceeding according to the plans in an agreed upon schedule.The only critical comment was that there is a need to better document the amount and location of the retention tree groups. 
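To put the agreed shares into rough absolute terms, the short calculation below converts the protection and recreation percentages into hectares, using the approximately 9000 hectares of city-owned forest mentioned earlier. The exact forest area and any rounding in the reported percentages are not given here, so the figures are indicative only.

```python
# Approximate translation of the agreed shares into hectares,
# based on the ~9000 ha of city-owned forest cited in the text.
total_ha = 9000

protected_before = 0.135 * total_ha    # ~1215 ha at 13.5 %
protected_after = 0.17 * total_ha      # ~1530 ha at 17 %
recreational_before = 0.36 * total_ha  # ~3240 ha at 36 %
recreational_after = 0.44 * total_ha   # ~3960 ha at 44 %

print(f"protected forest:    {protected_before:.0f} ha -> {protected_after:.0f} ha")
print(f"recreational forest: {recreational_before:.0f} ha -> {recreational_after:.0f} ha")
```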
The outcome of the process The forest program process was successful with respect to the key outcome criteria: reciprocal agreement, relationships, creativity, and learning (Figure 1).In terms of agreement, the process ended a long-standing stalemate and created a jointly approved strategy which accommodated the main interests and concerns of the disputing parties.The environmental actors were particularly pleased with the 17% protection level, which is in line with the United Nations biodiversity targets.One clear advancement from their perspective is that continuous cover forestry is included as a main forest management method in recreational forests.Furthermore, being involved in selecting the nature value areas is a far more proactive strategy than protesting individual logging sites one by one.The city forestry actors were content with the fact that the agreement allows them to carry out standard forest management activities in commercial forests and light safety-improving operations also in recreational forests.Importantly, the agreement provides them an undisturbed operating space with fewer protests and complaints, causing costs and delays.The city land use planning authorities, who initiated the process, felt that the process succeeded well in its aim to balance the different interests and concerns related to the city forests management.Also, the UPCIC approved the forest program, indicating external legitimacy of the process outcome.To be sure, the participants were not entirely happy with the outcome.The environmental actors had to relinquish the ambitious goal of protecting all city-owned forests, while the forestry actors pointed out that the losses from timber revenue for the city were considerable. The participants nevertheless felt that the cost and benefits associated with the program were distributed equitably enough across the beneficiaries and that it accommodated ecological, social, and economic goals in an even-handed manner. The process also performed well in terms of building new relationships and social capital among the parties.The antagonisms ran high, especially at the beginning of the process, but eventually the parties succeeded in building sufficient trust and rapport which helped them to come to an agreement.The participants also learned what the issues meant to the others (see Innes & Booher, 2003).For example, some forestry actors noted that they now better understand the importance of species diversity for some stakeholders, 'it really matters to them.'One forestry actor even joined a local birdwatching organization 'to better understand how they see issues.'The ENGO representatives, too, felt that the process developed working relationships that will assist their interactions with the forestry actors in the future.The city land use agency observed contently that the process improved communication between the city agencies and provided a model for collaborative dialogue to be used in future land use planning processes. 
Importantly, the process created novel and innovative solutions to break the impasse.To meet the 17% protection target, the participants developed a new forest category 'nature value forest' under an existing class 'value forest,' which previously covered forests with significant landscape value.This solution prevented the parties from getting locked in a debate over statutory protection, a typical 'hot button' in forest management conflicts.The nature value forests are effectively protected, as no loggings are permitted in them, yet they are not defined as protection areas.Another example of innovative solutions was the decision to use map-based questionnaires to scan out the citizens' preferences for forest management practices in the future, in regular intervals. Finally, the process instigated learning and reflection among the parties.The parties did not come to an agreement on the most fundamental question concerning the feasibility of different forest management strategies (clear-cuttings vs. continuous-cover forestry), but they reconsidered their views on citizens' preferences for management of city-owned forests.Forest sector actors acknowledged that the map-based questionnaire showed quite unequivocally that people value forests and do not like clear cuts, while some environmental actors admitted that it was a surprise for them that some people actually support landscape loggings.As one ENGO representative put it, 'I understand now that people also want trees to be cut down, to see a lake or something.It has been a difficult thing for me to understand, because I find forests as such a positive thing.'The participants also learned how to work with each other effectively.As one participant noted, 'The process taught us [in the city administration] to be more attentive to diverse views.' Factors contributing to the agreement Contextual factors, like interdependencies and power dynamics, played an important role in motivating the participants to engage with and stick to the process.Initially, the power distribution was asymmetric, as there were no alternative means for ENGOs and displeased citizens to influence city forest management, other than demonstrations and public objection of planned forestry operations.However, the city land use planning and forest management authorities were keen to reduce the conflicts and increase the legitimacy of city forest management.An important background factor was a heightened political interest in environmental values as a result of the 2017 elected City Council with a Green Party majority.The process also benefitted from strong initiating leadership from the city land use planning agency.Furthermore, the process had a clear mandate from the UPCIC.The need for ongoing cooperation did not motivate the ad-hoc cooperation group to build relationships, but it did influence the city agencies' willingness to engage in constructive interagency cooperation. 
The contextual factors created favorable circumstances for the collaborative process, but the forest program process also performed well in terms of the key process criteria for collaborative governance. The process was self-organizing in the sense that the participants could decide the ground rules and the working program. Importantly, the group was not restricted by any predefined logging targets, unlike some other Finnish participatory forest strategy processes (see Saarikoski et al., 2010). As one ENGO representative put it, 'I felt that this process provided a real opportunity to make a difference. Often these [participatory processes] don't do that, the outcome is pretty much predetermined.' Another ENGO representative put it more bluntly: 'This approach was from a different planet in fairness and a real dialogue.' All information generated and collected during the process was fully shared among participants, and the process was based on collaborative dialogue. Not all participants thought that they were heard and respected during all stages of the process, but the corrective moves by the facilitators reinstated a more respectful dialogue. The joint fact-finding stage was instrumental in sharing information and developing an improved understanding of the basic ecological and forest management issues among the participants. Also, the map-based questionnaire, which was jointly designed by the group, provided important information for the process. These intermediate outcomes provided 'small wins' (see Ansell & Gash, 2008) that sustained and energized the process. The participants basically had equal opportunities to influence the process outcomes. In practice, it was the partners with the most scientific and professional expertise and the skills to write the forest program text who had the biggest influence on the outcome. However, these actors represented both sides of the biodiversity vs. forestry debate, and they were trusted by the others to formulate the jointly approved principles and actions in an accurate manner. Furthermore, consensus-oriented decision-making ensured that compromises were reached in situations in which different views could not be reconciled.
Perhaps the single most important issue accounting for the agreement was the use of neutral third-party facilitators, who helped to design the process, structured the discussions, and strove to build trust and maintain a good working atmosphere. As one participant pointed out, 'We would never have gotten this far on our own.' The importance of good facilitation was most clearly demonstrated by the instances in which it was missing. The process was about to fall apart at the point when some participants felt that a member of the facilitation team was biased, and it took some time and effort from the senior facilitator to re-establish trust in the process. Also, the stages in which the participants communicated directly with each other via emails were critical, as it was easy to misunderstand each other, and the tone of the conversation quickly turned acrimonious. These critical moments also illustrate the ways in which resentment and hurt feelings, in some instances only a few misplaced words, can get in the way of collaboration and block settlements that are in the participants' best interests. This insight was aptly captured by one interviewee: 'I learned a lot about forestry. But I also learned how much it depends on people and on each person, how well they get along with each other. That it is not always the facts that count, but also the people and how they talk to each other.' The most important shortcoming of the process was the underrepresentation of residential interests. A residential organization representative dutifully attended the meetings but was largely silent during the discussions. In addition to fortuitous reasons, the place-specific and ad-hoc nature of citizens' protests can make it difficult to find sufficiently engaged representatives for a general residential interest, if such a thing exists in the first place. The map-based questionnaire partly filled in this gap by bringing in the diversity of residential and recreational user perspectives.
Discussion and conclusions
The Jyväskylä forest agreement demonstrates the potential of collaborative multi-stakeholder processes in settling persistent environmental disputes. The process did not result in a consensus on the best ways to manage city forests, but it created a practical agreement (Laws et al., 2014), which balanced ecological and recreational goals with traditional forest management goals and practices. Retaining the negotiation element in collaborative processes is important, as it makes space for a plurality of views and allows cooperation and conflict to coexist in a fruitful way (van den Hove, 2006). The productive role of conflict as a catalyst of social transformation is emphasized by several scholars. Some view societal struggles as openings for new possibilities for collective action (Chambers et al., 2022), while others draw on the notion of agonistic pluralism (Hillier, 2003) and maintain that nonconfrontational practices engender a retreat from radical thinking and innovative environmental solutions (Poncelet, 2001). In the Jyväskylä case, conflict was a driving force in motivating the parties to collaborate, but sustaining it simply for the sake of maintaining critical environmental debate would not have served the environmental activists' interests. For them, challenge and resistance are not an end in themselves but ways to achieve concrete improvements in environmental protection on the ground, measured in terms of hectares and numbers of species and habitats. In these terms, the Jyväskylä agreement was quite successful. The results indicating positive environmental outcomes from collaboration align with previous studies that have found a generally positive effect of participation on the environmental standard of governance outputs (Jager et al., 2020; Newig & Fritsch, 2009). The effects were particularly positive in cases, like the Jyväskylä case, where participants were granted considerable influence over decisions and outputs. The reluctance of vested interests to devolve power to new partners and to allow the questioning of the dominant forest management paradigm (see Ollonqvist, 2002) has been the primary reason why previous participatory natural resource management efforts have failed to deliver ecologically and socially sustainable outcomes in Finland (Kangas et al., 2010; Raitio, 2012; Saarikoski et al., 2010). The findings from this case study also illustrate the capacity of collaborative processes to develop creative ideas and innovative solutions, and they provide some support for the joint learning hypothesis (see Innes & Booher, 2003). The concept of nature value forest created during the process served as a boundary object (Baggio et al., 2015), which was sufficiently fluid with respect to the protection status of nature value forests but concrete enough to specify acceptable management practices in such forests. Another innovative solution was the decision to adopt the map-based questionnaire tool as part of the city forest management practices. Adaptive forest management, in which policies and actions are adjusted based on the combination of new scientific and socioeconomic information, is considered essential in supporting reflexive and resilient natural resource management (e.g. McIntyre & Schultz, 2020).
The study confirms the importance of several process-related criteria for the success of collaborative processes.The most crucial factor was the participants' commitment and collaborative capacity to work through their differences, even in the critical moments when the process was about to unravel.Our analysis shows how collaboration can easily get derailed by ineffectual communication even if it is in the parties' best interests to reach an agreement.Another key factor was good facilitation, which enabled genuine dialogue and encouraged the parties to stick with the process despite the difficult moments.The importance of facilitation is found in several empirical studies on collaborative natural resource management (e.g.Lindgren et al., 2021;McIntyre & Schultz, 2020).Conversely, the lack of neutral facilitation has been a major reason for several unsuccessful attempts at collaborative forest management (Raitio, 2012;Vihma & Torikka, 2021).Developing a shared knowledge base on forest management and ecology, and especially citizens' preferences, was also critical.The positive impact of information sharing and joint fact finding is observed also in other studies on collaborative natural resource management (Innes & Booher, 2003;Lindgren et al., 2021). The contextual factors are equally important in accounting for the process outcome.The forest agency was unlikely to alter its existing practices, ingrained in the dominant Finnish forest management paradigm without the political pressure from the Green-Party-dominated City Council to revisit city forest management strategies and priorities.Additionally, the economic stakes were relatively low compared to conflicts concerning large areas of commercially managed forests (see Zachrisson & Lindahl, 2013).The city forests are a source of revenue, but they are mainly managed for the benefit of citizens.Hence, the most important debates revolved around public interests as well as the epistemic authority concerning good forest management practices.The prospects of finding collaborative outcomes are far more challenging in situations with high economic interests of individual forest owners and/or forest industry (Saarikoski et al., 2010) and national-level pressure to generate forestry revenue to the state (Raitio, 2012).Furthermore, municipal government is likely to be more responsive to citizen concerns than more distant and opaquer national-level policy-making processes.Therefore, the Jyväskylä forest cooperation group was, in many ways, ideally positioned to result in positive environmental outcomes. 
Our study affirmed the bearing of process-related criteria (see Figure 1) for reaching a practical agreement among conflicting parties. However, it also showed that collaborative processes designed for stakeholder interaction do not necessarily succeed in accommodating the views of the public at large. The cooperation group members made incorrect assumptions about the public's forest management preferences, reflecting their own views instead of those of the average Jyväskylä residents. Furthermore, our experiences show that it is very difficult, if not impossible, to ensure adequate representation of diverse citizen interests in collaborative groups which are small enough to allow meaningful interaction between the participants. The solution to capture the city residents' perspectives with a map-based survey tool was very effective because it unequivocally showed what people really think and what kind of forest management activities they prefer. Map-based surveys are particularly helpful in providing spatially specific information, often highly relevant in natural resource management situations. The collaborative governance literature tends to shun one-directional participatory methods which do not allow dialogue and interaction. However, a skillful combination of deliberative and survey-based methods can provide a multifaceted understanding of diverse views, and also address the concerns that stakeholder processes may fail on the grounds of inclusiveness and democratic representativeness (Leach, 2006). Implementing the forest program and introducing continuous cover forestry to recreational forests can serve as a niche experiment (Kivimaa et al., 2017) that can instigate changes in forest management practices in other local governments, and possibly in commercial forests as well. Also, the social and intellectual capital developed in the process can contribute to novel thinking about city forest management priorities and methods in the future. Single case studies like ours tend to focus on the immediate outcomes and cannot capture the long-standing impacts. Therefore, further research is needed on the long-term effects of collaboration in instigating social learning and promoting sustainability transitions.
Emma Luoma is a doctoral student at the University of Eastern Finland. Her research interests are in environmental conflict resolution and collaborative capacity building. Dr. Sanne Bor is a postdoctoral researcher at LUT Business School. Her research focuses on inter-organisational collaboration in the face of big societal challenges. Dr. Pia Polsa is Associate Professor of marketing at Hanken School of Economics. Her current research interests are consumer vulnerability, poverty, and multi-dimensional value in cross-sector settings.
Figure 1. The outcome criteria, as well as the factors influencing the success of collaborative processes.
Figure 2. Timeline of the forest program process.
2023-09-18T15:10:22.196Z
2023-09-16T00:00:00.000
{ "year": 2023, "sha1": "d10f9305b9dfa99a7ff4d7a1a81413742e75aeb0", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/1523908X.2023.2258524?needAccess=true", "oa_status": "HYBRID", "pdf_src": "TaylorAndFrancis", "pdf_hash": "faeb258bfa47a99eee27e75cc679a2339762f6d3", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
18957263
pes2o/s2orc
v3-fos-license
Explicit and Implicit Emotional Expression in Bulimia Nervosa in the Acute State and after Recovery Expression of emotional state is considered to be a core facet of an individual's emotional competence. Emotional processing in BN has not been often studied and has not been considered from a broad perspective. This study aimed at examining the implicit and explicit emotional expression in BN patients, in the acute state and after recovery. Sixty-three female participants were included: 22 BN, 22 recovered BN (R-BN), and 19 healthy controls (HC). The clinical cases were drawn from consecutive admissions and diagnosed according to DSM-IV-TR diagnostic criteria. Self reported (explicit) emotional expression was measured with State-Trait Anger Expression Inventory-2, State-Trait Anxiety Inventory, and Symptom Check List-90 items-Revised. Emotional facial expression (implicit) was recorded by means of an integrated camera (by detecting Facial Feature Tracking), during a 20 minutes therapeutic video game. In the acute illness explicit emotional expression [anxiety (p<0.001) and anger (p<0.05)] was increased. In the recovered group this was decreased to an intermediate level between the acute illness and healthy controls [anxiety (p<0.001) and anger (p<0.05)]. In the implicit measurement of emotional expression patients with acute BN expressed more joy (p<0.001) and less anger (p<0.001) than both healthy controls and those in the recovered group. These findings suggest that there are differences in the implicit and explicit emotional processing in BN, which is significantly reduced after recovery, suggesting an improvement in emotional regulation. Introduction Concerns about weight and shape were central to early maintenance models of BN [1] but over time these models have been extended to include problems in social emotional functioning [2]. A model using the SPAARS framework (schematic, propositional, analogical and associative representation systems) of emotional processing posits that eating disorder symptoms develop as a means of managing negative emotions such as anger by redirecting the emotion onto the self and body, in the form of selfdisgust/shame [3]. Systematic reviews of the domains of social-emotional processing in people with eating disorders found substantial difficulties in different domains, including emotion recognition and emotion regulation [4]. A meta analysis showed that people with bulimia nervosa had particular problems with social evaluative aspects of functioning (negative self evaluation, higher sensitivity to rank related issues, etc.) [5]. Experimental implicit tasks in people with eating disorders find negative self esteem and vigilance towards critical and dominant faces [6]. This may explain why anger is suppressed as suggested by SPAARS model as anger expression is less tolerated in those with subservient positions [7]. However, emotional expression, both implicit and explicit, has been less often studied in people with BN. Thereby, explicit emotion regulation processes require a conscious effort to be initiated and completed, some monitoring during implementation, and a certain degree of awareness (e.g. answering questions on a questionnaire). Implicit processes are automatic, can happen without insight or awareness, and do not need monitoring in order to be completed (e.g. facial expressions, attentional bias…) [8]. 
In a preliminary pilot study we found that patients with BN showed significantly less facial expression of anger/frustration than controls while playing a therapeutic video game [9]. On the other hand, expressions of joy were higher than healthy controls in people with BN which contrasted with the lower level of expression of positive facial affect in patients with anorexia nervosa. However the implicit vigilance towards facial signals of criticism and dominance was associated with early adversity [6]. In order to examine for possible state or trait effects, the aim of the study was to examine the implicit emotional expression (by measuring facial expression in response to a therapeutic video game) and explicit emotional expression (measured by self report of anxiety and anger) in BN patients, in both acute and recovered state compared with healthy controls. We hypothesised that patients with BN would show lower emotional regulation functioning, expressed by higher levels of positive emotion and reduced anger than healthy controls, which might improve after remission. Subjects The study was carried out according to the latest version of the Declaration of Helsinki and was approved by the Ethics Committee of the Bellvitge University Hospital (Spain). Written informed consent was obtained from all participants. The study was conducted between May 2011 and June 2013. Sixty-three women participants were included, distributed in three independent groups: 22 BN patients in acute state, 22 BN patients in remission state and 19 healthy controls. Clinical cases were diagnosed according to DSM-IV-TR diagnostic criteria [10], by means of structured clinical interview for DSM-IV Axis I disorders (SCID-I) [11], conducted by experienced psychologists and psychiatrists. The criteria to be included as recovered patients were having a minimum of 12 (consecutive) weeks being abstinent from bingeing and purging (laxatives and/or vomiting). As described in previous studies [12,13], the inclusion criteria was an age between 18 and 45 years. The exclusion criteria were: primary psychiatric or neurological disorders (e.g. psychotic disorders, bipolar disorders, major depressive disorders, substance abuse-dependence disorders, etc.) or active pharmacological therapy that can interfere with the game performance; current or lifetime diagnosis of behavioural technological addictions. Patients were consecutive referrals for assessment and outpatient treatment at our Hospital. Participants' characteristics are shown in Table 1, No statistical differences between groups were found in neither age nor marital status, but a larger percentage of the healthy control group had a high education level. Both clinical groups (BN in acute state and BN in remission state) reported: a non-significantly different mean duration of eating disorders illness, and non-significantly different mean Body mass index. BN patients reported mean weekly frequency of bingeing of 4.1 (SD = 3.8) and a mean weekly frequency of vomiting of 7.2 (SD = 15.5). None of the BN patients had a lifetime diagnosis of AN. The healthy control cohort included 19 volunteers from the same catchment area. They did not receive any financial compensation for their time. Measures Explicit Emotional expression measures. 
State-Trait Anxiety Inventory (STAI) [14,15]: is a self-report questionnaire that includes 40 items on the basis of a 1-4 response scale, to evaluate the temporary condition of ''state anxiety''(S) (20 items) and the more long-standing quality of ''trait anxiety''(T) (20 items). The set of questions value feelings of anxiety and depression in the areas of worry, tension and apprehension. The STAI was validated in Spanish population with Cronbach's alpha coefficients ranging between 0.90 and 0.94 [16]. Symptom Check List-90 items-Revised (SCL-90-R) [17]: is a multidimensional self-report assessment measure for a broad range of psychological problems/symptoms. It contains 90 items structured in nine primary symptom dimensions: Somatization, Obsession-Compulsion, Interpersonal Sensitivity, Depression, Anxiety, Hostility, Phobic Anxiety, Paranoid Ideation, and Psychoticism. In addition, the questionnaire can produce three global scores: a global severity index (GSI, which measures global psychological distress), a positive symptom distress index (PSDI, a measure of the intensity of symptoms) and a positive symptom total (PST, which reports the total self-reported symptoms). Only the Anxiety subscale was used in this study. This questionnaire has been extensively validated in a Spanish population, obtaining adequate psychometrical values [18]. This scale has been validated in a Spanish population, obtaining a mean internal consistency of 0.75 (Coefficient alpha) [19]. State-Trait Anger Expression Inventory 2 (STAXI-2) [20] is a 44-item self-report instrument that examines the experience and expression of anger. This instrument was validated in a Spanish population with Cronbach's alpha coefficients ranging between 0.63 and 0.95 [21]. The Spanish version of the STAXI-2 comprises 49 items [21]. It entails six scales: (1) State Anger; (2) Trait Anger; (3) Anger Control (including two subscales: a) Anger Control-Out and b) Anger Control-In); (4) Anger Expression-In; (5) Anger Expression-Out and (6) Anger Expression Index, which provides a general index of the expression of anger (derived from the Anger Expression-In, the Anger Expression-Out and the Anger Control scales). Items are rated on four-point Likert scales assessing either the intensity of the angry feelings or the frequency with which anger is experienced, expressed, suppressed, or controlled. Implicit Emotional expression measure. Facial recognition software: as described in previous studies [9,12], this facial affect recognition software was designed and developed for this specific PlayMancer Platform. The facial expression of the patient during the video game performance is detected by an integrated camera and processed by the facial tracking component. For this experiment, we used anger and joy emotions as outcome measures. To calibrate the facial emotion recognition software, several previous experiments were conducted, for a more detailed description of the method see this study [9]. The measure provided by this tool is the total time (in seconds) that the patient is facially expressing a particular emotion throughout the duration of the entire video game session. Procedure For both clinical groups and healthy controls, experienced psychologists/psychiatrists conducted face-to-face structured interviews. Participants completed the self-report questionnaires (SCL-90-R, STAI and STAXI-2). For the BN patients in acute state, the video game session took place before starting cognitive behavioural therapy (CBT). 
For the BN patients in remission state, the session was recorded in a follow-up session after finishing the standard CBT program [13].
The video game intervention. A detailed description of the Island video game is available in [12]. It has been used as an add-on therapeutic tool for ED with promising results [13]. The overall goal is to improve self-control and also to learn how to regulate arousal and reactivity to negative situations, such as frustration, anxiety and time pressure. Biofeedback and a focus on breathing to produce relaxation are used to train this form of emotional regulation [9]. The level of game difficulty is adjusted in a closed feedback loop; higher levels of undesired emotional and/or physiological reactions are coupled with greater difficulty in attaining the end goals. The performance in each VG session was collected during 20 min. Three minutes of relaxing music were played before and after the VG session. The VG consists of three mini-games: (1) The Face of Cronos: the player has to climb up a cliff on which obstacles appear, depending on the arousal of the player (based on biofeedback). This mini-game trains planning and decision making; (2) Treasures of the Sea: a virtual swimming game in which the player has to collect different objects and fishes while conserving their oxygen supply. This trains visuospatial abilities, visual working memory and decision making. High arousal makes the task more difficult; (3) Sign of the Magupta: a relaxation game in which the player connects a constellation of stars through breathing control. Slow, deep breathing allows the connections between stars to form [13].
Statistical analysis
Analyses were carried out with SPSS 20 for Windows. The comparison of emotion measures between diagnostic groups was carried out with Poisson regression, a log-linear model that uses the logarithm as the link function and the Poisson distribution, which is useful for count data. Analysis of variance (ANOVA) tested the mean differences in psychological measures and the performance measure among the groups. The effect size was evaluated with the 95% confidence interval for the contrasts and with Cohen's d coefficient (good effect sizes were considered for |d| > 0.50). Analyses were adjusted by the SCL-Depression score.
Measures of implicit emotional expression: facial expression
Table 2 shows the descriptive statistics for the facial expression of joy and anger (in seconds) and the comparison between groups. Mean scores showed that BN patients expressed joy for the longest time and anger for the shortest time, whereas healthy controls expressed joy for the shortest time and anger for the longest. All the pairwise comparisons achieved significant results, except for the post-hoc comparison of joy between recovered BN and healthy controls. In order to control for effects of playing success on the expression of emotions, the outcome of the diving performance on the mini-game ''treasures of the sea'' was calculated as the number of errors (number of times out of breath) divided by the minutes playing the diving mini-game. No statistical differences were found between groups (p = 0.…). As shown in Figure 1, the average difference between expression of positive and negative implicit emotion was lower in recovered BN and healthy controls than in acute BN (p < 0.001). Table 3 shows the ANOVA models comparing the mean scores registered in the SCL-90-R anxiety scale, the STAI state and trait scales, and the STAXI state and trait scales.
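The group comparisons just described were run in SPSS; purely as an illustration, the sketch below shows how analogous models could be fitted in Python: a Poisson (log-linear) regression for the expression durations, adjusted for the SCL-90-R depression score, and a one-way ANOVA with Cohen's d for a questionnaire scale. The data file, column names, and exact model specification are hypothetical and are not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format table: one row per participant, with a group
# label ("BN", "R-BN" or "HC"), facial-expression durations in seconds,
# questionnaire scores and the SCL-90-R depression score.
df = pd.read_csv("emotion_data.csv")

# Poisson regression for the duration data (treated as counts of seconds),
# adjusted for depressive symptomatology.
joy_model = smf.glm(
    "joy_seconds ~ C(group) + scl_depression",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(joy_model.summary())

# One-way ANOVA across the three groups for an explicit measure.
groups = [g["stai_trait"].dropna() for _, g in df.groupby("group")]
f_stat, p_val = stats.f_oneway(*groups)


def cohens_d(x, y):
    """Pooled-standard-deviation effect size for two independent samples."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled


bn = df.loc[df["group"] == "BN", "stai_trait"]
hc = df.loc[df["group"] == "HC", "stai_trait"]
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}; d(BN vs HC) = {cohens_d(bn, hc):.2f}")
```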
As a rule for all measures, BN patients achieved the highest mean scores and HC the lowest. All pairwise comparisons between these two diagnostic conditions achieved significant differences with high effect sizes (|d| > 0.80). The BN-recovered group showed intermediate scores compared with BN and healthy controls. The pairwise comparisons between BN-recovered and BN achieved statistical differences for SCL-90-R anxiety and STAI. The comparison of BN-recovered with healthy controls also achieved significant and relevant differences for SCL-90-R anxiety, STAI trait and STAXI state.
Discussion
The aims of the study were to examine implicit aspects of emotional regulation, by measuring facial expression in response to a therapeutic video game (Islands), and explicit aspects of emotional reactivity (i.e. anger and anxiety), measured by self-reported questionnaires, in both acute and recovered states of BN patients. We were able to confirm our first hypothesis in that patients with BN did show higher levels of implicit positive emotion and reduced implicit anger than healthy controls, and recovered BN patients had an intermediate response. In contrast to the reduced implicit expression of anger through facial expression, in the questionnaires asking explicitly for emotional expression, patients with BN reported higher levels of state and trait anger and anxiety. These findings suggest an opposite profile in the explicit self-reported emotions and implicit emotional expression in BN patients, which is visibly corrected in the recovered group. The results point to emotional dysregulation in BN patients, with a selective reduction in the facial emotional expression of anger and elevated levels of explicit anger. This suggests a specific strategy of suppressing anger. These findings fit with the SPAARS model, as if anger were appraised as ego-dystonic. It is possible that expressing anger has been linked to rejection from others, a form of emotional invalidation which may have been learned through classical conditioning. Therefore, it appears that people with BN have learned to suppress anger, which may have an outlet via restriction/bingeing-vomiting. These results are in line with those suggesting that ED patients have inadequate anger expression and deficits in dealing with anger and frustration [22,23]. There is evidence suggesting that ED patients experience negative emotions as threatening, dangerous and unacceptable, and their expression as a sign of weakness and a reason for social rejection [24][25][26]. This lack of concordance between implicit and explicit negative emotional expression has been noted to occur when emotional suppression is used as a form of emotional regulation [27,28]. The recurrent use of emotional suppression in the long term produces reduced control of emotion, problems in interpersonal functioning, and higher depressive symptomatology [29]. The mental effort required to suppress facial reactivity may make the individual less responsive to the emotional cues of the person they are interacting with, and this may disrupt social functioning. This may contribute to the lack of trust and problems with conflict noted in bulimia nervosa [30].
Table 2. Association between group and implicit emotional expression (facial expression) measures.
Table 3. Association between group and explicit emotional expression (anxiety, anger and impulsivity) measures.
Neuroimaging studies have also demonstrated that suppression is associated with higher activation in the amygdala and insula [31,32], and in regions implicated in cognitive control, namely the prefrontal and anterior cingulate cortices [33,34]. These findings demonstrate that emotional suppression is attained at a higher physiological, psychological and interpersonal cost [32,35]. The BN patients also showed greater levels of facial expression of positive emotion and yet described themselves as more anxious than the control group. However, in the recovered group there is a decrease in the level of discordance between these implicit and explicit emotions, which might indicate that after remission BN patients exhibit a more authentic emotional response [24,36], as high levels of positive emotional expression in BN patients might be used as a means of gaining acceptance and avoiding rejection [37]. Finally, the convergence between anxiety and emotional expression in BN patients is in agreement with studies showing that anxiety is associated with the inability to control emotional responses [38,39]. Neuroimaging studies also support this hypothesis, and it has been suggested that larger decreases in anxiety are related to higher ventromedial prefrontal cortex activity, a cerebral region involved in emotional regulation [40][41][42].
Comparison between diagnostic conditions
The study adds to earlier work on emotional expression in ED, which had found less facial expression in AN during negative and positive film clips and attenuated verbal emotional expression in AN during an emotion talk task, but not in BN [43,44]. Since the outcome variable in that earlier work was verbal expression, the results of the study at hand add to the existing research by suggesting that facial emotional expression, which may be less susceptible to cognitive control, does show disturbances in patients with acute BN. Since the video game can be seen as emotionally neutral, it would be interesting to assess the facial emotional expression of BN patients during positive and negative film clips. This study has several important strengths, primarily the inclusion of both acutely ill BN patients and BN patients after remission. Although emotional regulation has been previously studied in BN [13,22], this is, to the best of our knowledge, the first time that implicit and explicit emotional expression has been described in BN patients in the acute and recovered state. However, the results of this study should be interpreted in the context of some limitations. First, the sample was relatively small, so the results should be interpreted with caution. Second, only women were included, considering that it has been estimated that only 10-15% of people with ED are male [45]. Future studies focusing on the emotional regulation of male BN patients are desirable. As a third point, it has to be mentioned that the explicit measure only included negative emotions (anger and anxiety), while implicit facial expression was assessed in both the positive (joy) and the negative (anger) valence. Thus, no conclusion on the coherence between implicit and explicit expression of the positive emotion joy could be drawn. The possibility that BN patients actually felt more positive emotions cannot be ruled out.
In addition, the assessment of implicit and explicit emotional expression took place during different activities: explicit emotions were assessed during the completion of questionnaires, whereas implicit emotions were assessed as reactions to a video game. Game-playing could have influenced the emotional state of participants, resulting in an increase of positive emotions and a decrease of negative emotions in the patient group. Still, there is no conclusive evidence to support this assumption, since game success was controlled for. Finally, the question of the relationship between emotion regulation and emotional expression needs to be addressed. From the results of this study it cannot be confirmed whether the differences in emotional expressivity and the discordance between implicit and explicit emotional expression are due to difficulties in emotion regulation and emotional awareness, because other explanatory factors or intervening variables (such as previous emotional experiences and specific personality traits) cannot be excluded. It is conceivable that other factors related to the psychopathology of eating disorders account for differences in facial emotional expression. Since sleep disturbances and their interaction with altered neurotransmitter function (e.g. leptin and orexin) [46][47][48] were not assessed in this study, an influence of these variables is possible. Thus, it is not clear whether the obtained results are due to improvements in emotional abilities or to improvements in the aforementioned factors. Although it is probable that the patients' emotion regulation capacities have improved with treatment and that the more authentic expression of emotions can be explained by this, future studies should address these issues. In summary, this study emphasizes the importance of developing treatments targeting more effective strategies for enhancing the regulation of emotions as well as their adequate recognition and authentic expression in BN patients. Future neuropsychological and neuroimaging studies should focus on the emotional profile of these patients, in order to shed more light on these multifaceted constructs.
2017-06-18T02:08:11.652Z
2014-07-02T00:00:00.000
{ "year": 2014, "sha1": "5001d6e60f1ab695301c939ed790206dc822e3fa", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0101639&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "18304b16fe531d80df40bb5f550bff6deb34bdbe", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
245827798
pes2o/s2orc
v3-fos-license
Laser-Driven, Ion-Scale Magnetospheres in Laboratory Plasmas. II. Particle-in-cell Simulations Ion-scale magnetospheres have been observed around comets, weakly-magnetized asteroids, and localized regions on the Moon, and provide a unique environment to study kinetic-scale plasma physics, in particular in the collisionless regime. In this work, we present the results of particle-in-cell simulations that replicate recent experiments on the Large Plasma Device at the University of California, Los Angeles. Using high-repetition-rate lasers, ion-scale magnetospheres were created by driving a plasma flow into a dipolar magnetic field embedded in a uniform background magnetic field. The simulations are employed to evolve idealized 2D configurations of the experiments, study highly-resolved, volumetric datasets and determine the magnetospheric structure, magnetopause location and kinetic-scale structures of the plasma current distribution. We show the formation of a magnetic cavity and a magnetic compression in the magnetospheric region, and two main current structures in the dayside of the magnetic obstacle: the diamagnetic current, supported by the driver plasma flow, and the current associated with the magnetopause, supported by both the background and driver plasmas with some time-dependence. From multiple parameter scans, we show a reflection of the magnetic compression, bounded by the length of the driver plasma, and a larger separation of the main current structures for lower dipolar magnetic moments. I. INTRODUCTION A vast range of space and astrophysical scenarios are driven by the rapid expansion of plasmas through space. Such examples include fast interplanetary coronal ejecta 1 , the expansion of the stellar material from supernova remnants 2 , and artificial magnetospheric releases of tracer ions 3 . When these expanding plasmas encounter obstacles of magnetic nature, the resultant interaction leads to highly nonlinear and complex dynamics. In the solar system, the interaction between the plasma flow (i.e. the solar wind) and planetary-sized magnetic obstacles leads to the formation of magnetospheres 4 . The effective size of the magnetic obstacles is determined by the equilibrium position between the kinetic pressure of the solar wind and the magnetic pressure exerted by the planetary magnetic fields 5 . The region of equilibrium, called the magnetopause, can be described using the pressure balance derived from magnetohydrodynamics (MHD), n_d m_i,d v_0^2 = B^2 / 8π, (1) where n_d is the density of the solar wind, v_0 is its flow velocity, m_i,d is the mass of its ions, and B is the total magnetic field at the magnetopause. The total magnetic field can be written as B = B_0 + B_dip, where B_0 is the collective magnetic field and B_dip = M/L_0^3 is the magnetic field of the obstacle, often well described by a dipolar profile of magnetic moment M. The distance L_0 between the center of the dipole and the magnetopause, often referred to as the plasma standoff distance, measures the effective size of the magnetic obstacle. For planetary-sized magnetospheres, the obstacle size is typically tens of thousands of kilometers. However, magnetospheres of a few hundred kilometers are also observed in space environments such as the lunar surface. When the magnetic obstacle size is smaller than or of the order of the ion kinetic scales of the plasma, i.e.
the ion skin depth or the ion gyroradius, the interaction with the solar wind results in ion-scale magnetospheres, or mini-magnetospheres. The study of mini-magnetospheres in past years was mainly motivated by the observation of crustal magnetic anomalies on the lunar surface [6][7][8][9][10] . Although the Moon does not have a global magnetic field like Earth, it does have small localized regions of crustal magnetic field, of 10-100 nT over distances of 100-1000 km 6 , which are of the same order as the gyroradius of solar wind ions near the Moon's surface. As a result, when these regions of the lunar surface are exposed to the solar wind, mini-magnetospheres are formed. The deflection of charged particles off of lunar mini-magnetospheres commonly leads to the formation of "lunar swirl" structures 11 . Similar interactions between the solar wind and small-sized patches of magnetic field also occur in other planets and natural satellites without a planetary magnetosphere, such as Mars 12 , Mercury 13 , Ganymede 14 , and comets and asteroids 15 . Multiple experiments have been performed in laboratory environments that replicate the interaction be-tween plasma flows and magnetic obstacles. With a proper re-scaling of parameters 16 , these experiments represent highly controlled configurations where a large variety of diagnostics can be used to obtain more accurate measurements than those obtained from the direct probing of astrophysical events. In experimental studies, fast-moving plasma flows are usually driven resorting to high-intensity lasers focused onto solid targets of plastic or metal composition 17,18 . These laser-ablated plasmas can be mildly collisional or collisionless, replicating astrophysical conditions 19,20 . By adding dipole field sources against the plasma flow, previous experiments of mini magnetospheres studied possible applications for spacecrafts [21][22][23] , the formation of lunar swirls 11 , and the conditions for the formation of magnetosphere features [24][25][26][27] . Although these experiments achieved important breakthroughs in the study of ion-scale magnetospheric physics, they were limited to i) 1D measurements of the magnetic field and plasma density profiles and ii) fixed properties of the obstacle and plasma flow. Numerical simulations play a key role in interpreting and designing experiments. Early MHD simulations attempted to explain the formation and characteristics of lunar mini-magnetospheres and validate experimental and analytical models 25,28,29 . Hybrid simulations were used to study the role of ion kinetic effects, and obtain conditions for the formation of magnetospheres 30 and replicate previous experimental results 31 . However, these simulations do not resolve the electron scales and do not capture important kinetic effects on the magnetosphere's boundary, e.g. charge separation effects and nonthermal particle distributions. Particle-in-cell (PIC) simulations were used to fully resolve the micro-physics of these systems and study its role in the formation of lunar minimagnetospheres [32][33][34][35][36] , the scaling of their properties with solar wind speed and magnetic field orientation 37 and the conditions for the formation of collisionless shocks 38 . In this work, we use PIC simulations of ion-scale magnetospheres to interpret the results of recent experiments 39 performed at the LArge Plasma Device (LAPD), University of California, Los Angeles. 
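As a concrete illustration of the standoff estimate implied by the pressure balance of Eq. (1), the following minimal Python sketch evaluates L_0 in CGS units under the simplifying assumption that the background field is negligible at the magnetopause, so that B ≈ M/L_0^3; the numerical values are illustrative assumptions of this sketch and are not parameters of the experiments or simulations discussed here.

import math

def standoff_distance(M, n, m_i, v0):
    # Solve n * m_i * v0**2 = (M / L0**3)**2 / (8*pi) for L0 (all quantities in CGS).
    return (M**2 / (8.0 * math.pi * n * m_i * v0**2)) ** (1.0 / 6.0)

# Illustrative, assumed solar-wind-like parameters (CGS):
n = 5.0          # ion density [cm^-3]
m_i = 1.67e-24   # proton mass [g]
v0 = 4.0e7       # 400 km/s flow speed [cm/s]
M = 8.0e25       # dipolar magnetic moment [G cm^3], assumed Earth-like value
print(standoff_distance(M, n, m_i, v0))  # of order 10^9-10^10 cm, i.e. several planetary radii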
In these experiments, fast collisionless plasma flows generated by highrepetition-rate lasers were collided with the magnetized ambient plasma provided by the LAPD and with a dipolar magnetic field obstacle, leading to the formation of ion-scale magnetospheres. Using motorized probes, high spatial and temporal resolution measurements of the magnetic field allowed characterization of 2D magnetic field and current density structures. Apart from validating the experimental results, the simulations presented in this work explore a set of upstream and magnetic parameter scans and configurations not accessible in the laboratory to determine the importance of each system parameter on the magnetospheric properties. The simulations show that the background ions, and then the driver ions, are responsible for the formation of the magnetopause observed in the experiments. They also show that a reflection of the downstream magnetic compression is observed for certain parameters of the driver plasma, and that the distance between the main current features is dependent on the dipolar and driver plasma parameters. This paper is organized as follows. In Sec. II, we briefly review the LAPD experiments and their main results. In Sec. III, we present PIC simulations of ion-scale magnetospheres. In Sec. III A, we outline the standard configuration and parameters used for the simulations. In Sec. III B, we provide an overview of the temporal evolution of these systems and show that the simulations agree with the results of the LAPD experiments. We discuss the origin of the structures observed in current density and magnetic field synthetic diagnostics and use particle phase spaces to interpret them. In Sec. III C, we present the results for different lengths of the plasma flow and define the conditions required to reproduce the features observed experimentally. The coupling between the laserablated driver and background plasmas is characterized in Sec. III D with simulations with different driver densities. In Sec. III E, different magnetic moments are considered, and we show that the main current density features are highlighted and more easily visible for weaker magnetic obstacles. In Secs. III F and III G, we discuss and illustrate the validity of the key simplifications and approximations used for the parameter scans presented in Secs. III B-III E. Finally, we outline the conclusions of this work in Sec. IV. This paper is the second part of a two part series. Detailed experimental results are presented in Part I 39 . II. LAPD EXPERIMENT A new experimental platform has been developed on the LAPD to study mini-magnetospheres. The platform combines the large-scale magnetized ambient plasma generated by the LAPD, a fast laser-driven plasma, and a pulsed dipole magnet, all operating at high-repetitionrate (∼ 1 Hz). In the experiments, a supersonic plasma is ablated from a plastic target and then expands into the dipole magnetic field embedded in the ambient magnetized plasma. By measuring 2D planes of the magnetic field over thousands of shots, detailed maps of the magnetic field evolution are constructed. Additional details on the platform and results can be found in Part I. Example results are shown in Fig. 1 for the measured change in magnetic field ∆B z = B z − B z,initial and the current density J x = ∂∆B z /∂y. Here, B z is the total magnetic field, B z,initial = B 0 + B dip is the total initial magnetic field, B 0 is the background LAPD field, and B dip is the dipole magnetic field. 
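A map like the current density in Fig. 1(b) follows from the measured ∆B_z by a derivative along y. The sketch below shows one way to do this with centered finite differences; the array contents are placeholder data standing in for the measured field, and, as in the expression above, any constant physical prefactor is omitted.

import numpy as np

# Placeholder field map delta_Bz[i, j] on a uniform (y, x) grid [cm]; in practice
# this array would hold the measured change in magnetic field.
y = np.linspace(-30.0, 5.0, 141)
x = np.linspace(-10.0, 10.0, 81)
Y, X = np.meshgrid(y, x, indexing="ij")
delta_Bz = np.exp(-((Y + 14.0) / 2.0) ** 2) * np.exp(-(X / 5.0) ** 2)

# J_x ~ d(delta_Bz)/dy, centered finite differences along the y axis.
Jx = np.gradient(delta_Bz, y, axis=0)

# Lineout along y at x = 0, analogous to the traces discussed for Fig. 1(b).
j0 = int(np.argmin(np.abs(x)))
Jx_lineout = Jx[:, j0]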
These results are taken along y at x = 0 from the z = 0 plane probed experimentally. In the experiments, the dipole is centered at (x, y, z) = (0, 0, 0) and has a magnetic moment M = 475 Am 2 . As seen in Fig. 1(a), the expanding laser-driven plasma creates a leading magnetic field compression followed by a magnetic cavity. The cavity reaches a peak position of y ≈ −13 cm, while the compression propagates closer to the dipole before being reflected back towards the target. The current density in Fig. 1(b) shows two prominent structures. Following the expansion of the magnetic cavity is a diamagnetic current, which reaches a peak position of y ≈ −15 cm before stagnating for approximately 1 µs and then dissipating. Ahead of the diamagnetic current is the magnetopause current near y ≈ −13.5 cm, which lasts for about 0.5 µs. Here, we aim to qualitatively model these experiments in order to address key questions that aid in the interpretation of the experimental results. In particular, simulations can explain the role of each system component in the features observed, address which plasma component (ambient or laser-driven) is responsible for the features observed and which pressure balances are most relevant. For convenience, the notations used in this paper are different for the ones used in Part I. Here, we use the CGS system, and the axis system is rotated from the used in Part I. A. Configuration of the simulations Motivated by the results of experiments described in Sec. II, we performed 2D simulations with OSIRIS, a massively parallel and fully relativistic PIC code 40,41 . With PIC simulations, we can accurately resolve the plasma kinetic scales characteristic of minimagnetospheres dynamics. The numerical simulations presented in this work stem from a simplified description of the LAPD experimental setup, represented in Fig. 2. In these simulations, a driver plasma moves against a background plasma permeated by a uniform magnetic field B 0 and a dipolar magnetic field B dip . B 0 and B dip are oriented along the z direction and are transverse to the driver plasma flow. Since the most relevant dynamics of the simulations occurs at the ion kinetic scales, all the spatial scales are normalized to the ion skin depth of the background plasma d i = c/ω pi = m i,0 c 2 /4πn 0 e 2 , where c is the speed of light in vacuum, ω pi is the ion plasma frequency, m i,0 is the mass of the background plasma ions, n 0 is the background density, and e is the electron charge. In turn, the temporal scales are normalized to 1/ω ci , where ω ci = eB 0 /m i,0 c is the ion cyclotron frequency of the background. The simulation box is a 12 d i The driver plasma, shown in region I in Fig. 2, represents ideally the plasma ablated from the plastic target in the experiments. We assume that this driver has a length L x that is typically 2 d i , and a width L y that is typically infinite. It has a constant density n d , and it is initialized moving to the right side with initial flow velocity v 0 . The driver is composed of an electron species and a single ion species, with ion mass m i,d . Because the driver plasma is reflected during the interaction with the background, an empty region at the left of the driver was added to accommodate the reflecting particles. The background plasma is represented in region II. It is an 8 d i length and infinite width plasma and it has uniform density n 0 . The initial interface between the driver and background plasma is located at x B = −4 d i . 
Like the driver plasma, it has an electron species and a sin- Schematic illustration of the initial setup of the 2D PIC simulations performed. The system considers a vacuum region at the left, a driver plasma (I) of density n d and length Lx, travelling to the right with flow velocity v0, and a background plasma (II) with constant density n0 and with an internal magnetic field B0. A dipole is included at the center of the background region. Both the uniform and the dipolar magnetic fields are oriented in the z direction. An illustration of the effective magnetic obstacle created by the dipole and of the magnetic field profile at y = 0 are also shown in a dashed circumference and in a solid black line, respectively. gle ion species, of mass m i,0 . The background plasma is magnetized with an internal uniform magnetic field B 0 = B 0ẑ , and its magnitude is defined such that the Alfvénic Mach number of the flow, A dipolar magnetic field is externally imposed in our simulations (i.e., it is added to the plasma self-consistent electromagnetic fields to advance particle momenta but is not included in Maxwell's equations to advance the fields). The dipole is centered at (x, y) = (0, 0) and its associated magnetic field is B dip = B dipẑ , with B dip = M/r 3 , where M is the dipolar magnetic moment, r = x 2 + y 2 + δ 2 is the distance to the origin of the dipole and δ = 0.25 d i is a regularization parameter. For most simulations, the magnetic moment M was chosen such that the expected standoff, obtained from Eq. (1), is similar to the experimental value L 0 = 1.8 d i . For this particular magnetic moment, the total initial magnetic field B 0 + B dip is ≈ 3.0 B 0 at the standoff distance. Near the interface between the driver and background plasmas, the magnetic field of the dipole is relatively small and the initial magnetic field is ≈ 1.2 B 0 . In this work, we present simulations with different drivers and magnetic dipole moments. All the simulations presented here, and their respective parameter sets, are listed in Table I. Simulations B-G are discussed through Sec. III on equally labeled subsections. Simulation B is used to discuss the overall dynamics of the system, while simulations C, D, and E illustrate the role of the driver length, the density ratio, and the magnetic moment, respectively. Simulations F show the results for more realistic choices of parameters and simulation G for a more realistic driver shape. The physical parameters of the simulations (e.g. M A , L 0 /d i ) were adjusted to be similar to the LAPD experiments, whereas other parameters (e.g. m i /m e , v 0 , v the ) were chosen to make simulations computationally feasible. The experimental and numerical parameters are presented in Table II and compared with lunar mini-magnetospheres. In most simulations, we considered a reduced mass ratio m i /m e = 100, a flow velocity v 0 /c = 0.1, and cold plasmas to reduce the required computational resources, allow extended scans over the different parameters of the system, and simplify our analysis. The thermal effects are negligible for the main results, and the chosen ionto-electron mass ratio is high enough to ensure sufficient separation between electron and ion spatial and temporal scales. We confirm the validity of our assumptions in Sec.III F. 
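As an illustration of this setup, the sketch below evaluates the regularized dipolar field and picks a magnetic moment that places the standoff near L_0 = 1.8 d_i, using the value B_0 + B_dip ≈ 3 B_0 at the standoff quoted above. It works in normalized units (lengths in d_i, fields in B_0); the bisection root-finder and the specific ram-pressure value are conveniences of this sketch, not the procedure used in the paper.

import numpy as np

delta = 0.25  # regularization parameter [d_i]

def b_dip(x, y, M):
    # Regularized dipole magnitude, B_dip = M / r^3 with r = sqrt(x^2 + y^2 + delta^2).
    return M / (x**2 + y**2 + delta**2) ** 1.5

def standoff(M, ram_pressure, B0=1.0):
    # Distance on the upstream axis where (B0 + B_dip)^2 / (8*pi) balances the ram pressure.
    f = lambda L: (B0 + b_dip(-L, 0.0, M)) ** 2 / (8.0 * np.pi) - ram_pressure
    lo, hi = delta, 20.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)  # magnetic pressure falls with distance
    return 0.5 * (lo + hi)

# Ram pressure corresponding to a total field of 3 B_0 at the standoff, and a first
# guess for M such that B_dip ~ 2 B_0 at 1.8 d_i.
ram = 9.0 / (8.0 * np.pi)
M0 = 2.0 * 1.8**3
print(standoff(M0, ram))  # close to 1.8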
In most of the simulations presented in this work, we have assumed that ions and electrons are initially in thermal equilibrium, and thus used the electron thermal velocities v the shown in Table I, to compute the ion thermal velocities v thi . Because we aim to study the role of the hydrogen ions of the experimental driver in the interaction with the background plasma, these simulations considered equal ion masses for the driver and background plasmas, i.e. m i,d = m i,0 . B. Evolution and main features of the system To identify the main magnetospheric and kinetic-scale structures that arise from the initial configuration, simulation B was performed. It considered a driver with length L x = 2 d i and density n d = 2 n 0 (twice the background density). Figs. 3 a1-3) represent the total ion density n i = n i,d + n i,0 , for three different times, and Figs. 3 b1-3) show the variation of the z component of the magnetic field, from its initial value, ∆B z = B z − B z,initial . In Fig. 3 a1), we see the total ion density for an early time (tω ci = 1.5). Given the small distance propagated by the driver plasma at this time, the dipolar magnetic field does not significantly affect the interaction between the plasmas. For this reason, we can express the early system as a driver flowing against a uniform magnetized background plasma. In Fig. 3 b1), we observe that this interaction creates a region of compressed magnetic field in the downstream region, where the background plasma is located, and expels the magnetic field in the region of the driver, leading to a magnetic cavity in the upstream region with approximately null magnetic field 20 . In Figs. 3 a2) and b2), we start to observe the effects of the dipolar magnetic field for a later time (tω ci = 3.0). As the magnetic pressure exerted against the plasmas increases, a region of compressed background plasma forms in front of the dipole, as Fig. 3 a2) shows. After the interaction between the background and the dipole, the magnetic field pressure becomes large enough to counterbalance the kinetic pressure of the driver, reflecting it upstream. This can be seen seen in Fig. 3 a3) for a subsequent time (tω ci = 4.5). After the reflection, there is no longer a plasma flow pushing the magnetic compression forward or holding the decompression by the left side of the background region, and as a result, the region near the dipole quickly decompresses -see Fig. 3 b3). To compare the numerical results with the experimental data shown in Fig. 1, synthetic diagnostics were ob-tained from the simulations. In Fig. 4, the variation of the magnetic field ∆B z and the density current J y measured at the axis of symmetry y = 0 and as a function of time are plotted for simulation B. These diagnostics are important to comprehend the system dynamics, due to the importance of the z direction of the magnetic field in the motion of the particles. The main features of Fig. 4 are consistent with the experimental results. In the magnetic field plot of Fig. 4 a), both the upstream magnetic cavity and the downstream magnetic compression are present. Between tω ci = 0 and tω ci ≈ 1.5, the system behaves approximately as a driver piston moving against a uniform magnetized plasma. As the driver pushes the background plasma and magnetic Table I for a list of parameters). Columns 1-3 correspond to three different times in the simulation. 
The vertical and circular dashed lines mark the initial border between the driver and background plasma and the dipolar magnetic obstacle with radius L0, respectively. field, the discontinuity that separates these two media travels at constant coupling velocity v c < v 0 , measured as v c ≈ 0.49 v 0 for this simulation. The leading edge of the compression of the magnetic field travels with a velocity close to v 0 for the runs considered. The driver experiences increasingly higher magnetic fields until the magnetic pressure is enough to reflect the driver near the expected standoff x 0 = −L 0 , at tω ci ≈ 3. The magnetic cavity and magnetic compression are also reflected, and the boundary between these two regions travels with a velocity v r after reflection. The background magnetic decompression is seen after tω ci = 5. In the current density plot of Fig. 4 b), we can observe the diamagnetic current that supports the magnetic field gradient between the driver and background plasmas and that identifies the leading edge of the magnetic cavity. During the driver reflection, this current branches into multiple components due to the multi-stream velocity distributions developed in the driver and background plasmas. We can also verify that this structure is reflected near the expected standoff x 0 = −L 0 . Between tω ci ≈ 2 and tω ci ≈ 3, a second current structure is present in the background region. It is associated with the magnetopause of the system and the small decom-pressed field region that we see in Fig. 4 a), and it arises from the interaction of the accelerated background ions with the dipole, as we show in Sec. III E. The presence of these two current structures is consistent with the experimental results. In Fig. 4 b) we can also see the formation of waves in the background plasma, near the dipole region. These waves are excited in regions of highly non-uniform density and magnetic field, and have periods and wavelengths between the ion and electron kinetic scales. We have verified that their properties change significantly for different ion thermal velocities. In particular, we have found these waves to be more clearly excited for lower ion temperatures, which may explain why these waves have not been observed in the experiments performed at the LAPD. A detailed characterization of these waves and the conditions for their formation is out of the scope of this paper, and shall be addressed in a future work. To better understand the particle motion during the events described, we show in Fig. 5 the phase spaces of ions and electrons located near y = 0. For the ions, the x component of the velocity of the particles is presented, to illustrate their reflection and accumulation, while for the electrons, the y component is shown instead, to show the Diamagnetic current Magnetopause Background waves formation of the currents. The magnetic field B z and the current density J y profiles for y = 0 are also represented. Once again, we used the parameter set B of Table I. Fig. 5 a1) shows the v x velocity of the ions when the dipole field is still negligible. The ions initially move upstream with velocity v 0 until they interact with the background field. After reaching the background, they are mostly decelerated and reflected by electric field in the interface between the plasmas 20 , and end up with a flow velocity that is close to zero for the simulation considered. 
The reflection occurs near the boundary of the magnetic cavity, which moves with velocity v c through the background, as mentioned above. During this stage, the background ions accelerate from rest to velocities of average close to v c . The driver and the accelerated background ions continue to approach the dipole until they are reflected. This can be seen in Fig. 5 a2). During this interaction, two main current structures are visible in the J y profile. The first one (from the left) corresponds to the typical diamagnetic current, while the second one corresponds to the magnetopause. To the right of these two main current structures, we can see the background waves observed in Fig. 4 b). In Fig. 5 a3), the driver ions are totally reflected. The ions reflected by the dipole obtain a velocity close to −v 0 , while the magnetic cavity moves back with velocity v r . Because the simulation considers a cold plasma approximation, the ion thermal velocities remain small most of the time, except for the boundary between the two plasmas, where the velocity of the ions changes abruptly. The same does not occur for the electrons. We can see in the v y velocity of the electrons, represented in Figs. 5 b1) to b3) that, although the electron thermal velocities are initially small, they rapidly increase considerably. At the boundary, the electrons can reach thermal velocities of 6 v 0 , much higher than the ion velocities. Because the electron and ion density profiles are very similar during the entire evolution of the system, the current density J y = e(n i v iy − n e v ey ) is then mainly transported by the electrons, where n j is the density and v jy the y component of the velocity of the ions and electrons (j = i, e, respectively). This is also consistent with the observed spatial distribution of electrons during the reflection, which shows an excess of fast electrons around the standoff position. C. Driver length To choose a driver length that best reproduces the experimental results shown in Fig. 1 and to understand its role on the magnetic field and current density structures, we performed simulations C1 to C3 (see Table I) with varying driver length L x . In Fig. 6, we show ∆B z and J y at y = 0 for L x = 1 d i (C1), L x = 4 d i (C2) and for an infinite driver (C3). For these simulations, the properties of the background plasma and the width of the driver L y were kept unchanged. The density of the driver was n d = 2 n 0 . In Figs. 6 a1) and b1), we see the magnetic field and current density plots for the short driver length L x = 1 d i . We observe most of the features of Fig. 4, namely the reflection of the compressed magnetic field in a1) and the diamagnetic and magnetopause currents in b1). For this length, however, the driver never fully interacts with the dipole. The closest that the diamagnetic current structure gets to the dipole is x r ≈ −3.0 d i , i.e., much farther than the expected standoff x 0 = −L 0 = −1.8 d i . To replicate the experimental results and ensure that the driver can reach the dipole, we should thus use a sufficiently long driver such that x r > x 0 . Additionally, short drivers risk entering in a decoupling regime between the two plasmas 42 , which can compromise the observation of a magnetopause. The coupling effects on the results are discussed in detail in Sec. III D. The position where the driver is fully reflected by the background can be estimated as where x B is the initial boundary position between the two plasmas. 
This estimate is obtained by computing the volume of the background plasma required for the driver plasma to deposit its kinetic energy, i.e. x r − x B corresponds to the magnetic stopping radius of the system 43 . In the simulation with L x = 4 d i , represented in Figs. 6 a2) and b2), we observe once more the main features identified in Fig. 4, but unlike the L x = 1 d i case, the driver is long enough and ends up reflected by the dipole. We observe that the diamagnetic current reaches the expected standoff and has enough plasma to maintain it near the dipole for a time period (tω ci ≈ 3 to tω ci ≈ 5) longer than the 2 d i case shown in Fig. 4. As a result, the magnetic decompression in the background region is delayed for longer drivers. However, because the full driver reflection also occurs later, longer drivers will result in short-lived reflections of the compression of the magnetic field. In Figs. 6 a3) and b3), we show the results for a driver with infinite length (L x = +∞). In this simulation, the driver plasma is only partially initialized inside the simulation domain, and a flow is continuously injected from the lower x boundary. An infinite driver configuration allows us to understand the dynamics of the system in an asymptotic regime in which the driver plasma stays close to the dipole. As expected, until tω ci = 3, the features observed are very similar to L x = 2 d i and L x = 4 d i . After this time, the magnetic and the driver kinetic pressures balance each other near x 0 , so the diamagnetic current remains stationary. Because the driver can hold for longer near the dipole, the decompression in the background region is much slower and is not visible for the time range of the plot. We can also observe that the background waves are only visible during a transient. In all the three simulations, the coupling velocity measured was always v c ≈ 0.49 v 0 . Given the results shown in Fig. 6, we chose a driver length of 2 d i to reproduce the experimental results. This driven length is large enough to ensure that the driver arrives at the dipole and small enough to observe a significant reflection of the compression of the magnetic field as we see in the experiments. D. Plasma coupling with density ratio As expected from previous works, increasing the ratio between the driver and background plasma densi- Table I for a full list of the parameters). The dashed lines represent the slopes of the flow velocity v0, the coupling velocity vc, and the reflection velocity vr. ties should improve the coupling between the two plasmas 20,42 , meaning that, for denser drivers, the transfer of momentum and energy from the driver to the background plasma is more efficient. To better understand the role of the coupling mechanism, we performed simulations with different values of the driver density, namely n d = n 0 (D1), n d = 2 n 0 (D2) and n d = 4 n 0 (D3), while keeping a constant background density n 0 and a driver length L x = 2 d i . For each run, the magnetic moment was chosen such that the expected standoff obtained from Eq. (1) was always L 0 = 1.8 d i . The synthetic magnetic field and current density diagnostics were obtained for these simulations and are shown in Fig. 7. In Figs. 7 a1) and b1) we can see ∆B z and J y for the lowest driver density considered, n d = n 0 (i.e., background and driver with the same initial density). 
In this regime, the coupling is less efficient and, as a result, the coupling velocity v c ≈ 0.38 v 0 is lower than obtained in the higher densities cases represented in Figs. 7 b) and c). Due to the low coupling velocity, the driver plasma is reflected more quickly by the background than for denser drivers, and the expected position x r for the total reflection on the background is farther from the dipole than the expected standoff x 0 , meaning x r < x 0 . As a result, Fig. 7 a) shows similarities with the short driver length represented in Fig. 6 a), because, in both simulations, the driver parameters do not ensure that the driver arrives near the dipole. In Figs. 7 a2) and b2), we show the results for n d = 2 n 0 , which is the same run represented in Fig. 4. For this density, the coupling velocity, measured as v c ≈ 0.49 v 0 , is high enough to secure a reflection of the driver by the dipole, as we observe at tω ci ≈ 3. In Figs. 7 a3) and b3) we show the case with the highest driver density n d = 4 n 0 , which is similar to the n d = 2 n 0 case, because the measured coupling velocity for Fig. 7 c) was v c ≈ 0.56 v 0 , i.e., only slightly larger than the v c measured for Fig. 7 b). In the high density case, we also see that the current density structures during the plasma reflection are filamented, due to analogous multi-stream velocity distributions discussed for Fig. 6 b). To guarantee that the driver reaches the expected standoff, we thus require that x r > x 0 . In fact, the position where the driver is reflected x r , for no dipole cases, increases with the driver length L x and the velocity ratio v c /v 0 , and thus, both quantities must be large enough to guarantee that x r > x 0 . In turn, the ratio v c /v 0 increases with increasing driver density ratio n d /n 0 , and so, the driver should be sufficiently long and dense to effectively couple to the background plasma. Our results (in particular Sec. III B) show that a driver with L x = 2 d i and n d = 2 n 0 qualitatively reproduces the experimental results. A separate study was also performed to analytically de- termine the properties of the driver-background plasma coupling. The results of this study will be presented in a future paper. E. Dependency of the magnetopause position with the magnetic moment To confirm that the features previously associated with the magnetopause location change according with its expected position, we performed simulations with a 2 d i long driver with density n d = 2 n 0 for three different magnetic moments. Considering the magnetic moment that results in the expected standoff L 0 = 1.8 d i as M 0 (simulation B/E2 on Table I), simulations with the magnetic moments 2 M 0 (E1) and M 0 /2 (E3) were also performed, corresponding respectively to the expected standoffs L 0 ≈ 2.3 d i and L 0 ≈ 1.4 d i . Fig. 8 shows the ∆B z and J y synthetic diagnostics at y = 0 for the three simulations. Figs. 8 a1) and b1) show the results for the highest magnetic moment M = 2 M 0 . We see that the current structures associated with the magnetopause and the background waves are less evident than for the lower magnetic moments, as they are formed farther from the dipole. Figs. 8 a2) and b2) correspond to the magnetic moment M 0 that leads to L 0 = 1.8 d i and are the same results shown in Fig. 4. As previously mentioned, there are two main observable current structure standoffs. The first one is associated to the diamagnetic current, which is reflected around tω ci ≈ 3 near the expected value x 0 = −L 0 = −1.8 d i . 
This standoff is related to the interaction between the driver ions and the dipole. The second standoff occurs between tω ci ≈ 2 and tω ci ≈ 3 and it is located in the background plasma region. This standoff also occurs near x = −1.8 d i . In Figs. 8 a3) and b3), we show the results obtained for the half magnetic moment M = M 0 /2. In this case, the magnetic pressure exerted by the dipole is lower, leading to a smaller L 0 , and consequently, the diamagnetic current feature visible in b3) is closer to the dipole than in Figs. 8 b1) and b2). The main changes, however, occur in the magnetopause current. Unlike what we observe for the other magnetic moments, the magnetopause current, pinpointed in the current density plot, lasts for a longer time (until tω ci ≈ 4). This current is also more separated from the diamagnetic current standoff and is easier to identify. This is consistent with the experimental observations. To identify the pressure balances associated with the two observed standoffs, and because the magnetic and kinetic pressures vary over time, we studied the tem- poral evolution of the different plasma and magnetic pressure components of the system. In particular, we calculated the spatial profiles of the magnetic pressure B 2 /8π, the ram pressure n j m j v 2 f lj and the thermal pressure n j m j v 2 thj as a function of time for y = 0. In these expressions, n j , m j , v f lj and v thj refer to the density, mass and flow and thermal velocities, respectively, of the ions (j = i) and electrons (j = e). The magnetic pressure was calculated from the magnetic field measured in each PIC grid cell located at y = 0. The flow and thermal pressures, were calculated from averaged particle data. To ensure that the calculation of each kinetic pressure considered a sufficiently large number of particles, all the particles between −0.1 d i < y < 0.1 d i were binned into equal-sized bins of width 0.05 d i over the x direction. For each bin we computed: i) the average density of each species, ii) the flow velocity, corresponding to the average of the velocity of the particles, and iii) the thermal velocity, corresponding to the standard deviation of the velocity of the particles 44 . With these averaged quantities, the ram and thermal pressures were calculated in each bin for each species of ions and electrons and each component of the velocity x, y, and z. The y and z components of the pressures, however, are negligible. These pressure profiles were obtained for simulation E3 with magnetic moment M = M 0 /2 and are plotted in Fig. 9 for times where a) the magnetopause and b) the diamagnetic current standoff can be observed. The kinetic pressures represented were calculated by adding all the components of the ram and thermal pressures of the ions and electrons for the background (P 0 ) and the driver (P d ) plasmas. The magnetic pressures represented were calculated by considering the total and the relative magnetic field pressures (P mag = B 2 z /8π and P rel = P mag − B 2 0 /8π, respectively). The pressures were normalized to the initial ram pressure of the driver ions. Fig. 9 a) shows the magnetic and kinetic pressures at time tω ci ≈ 2.33 where we observed the magnetopause in Fig. 8 b3). When the driver starts pushing the background, the pressure of the driver at the interface between the plasmas increases because the driver density and thermal velocities also increase. 
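The binning procedure described above can be written compactly; the sketch below treats a single species and a single velocity component, with hypothetical particle arrays standing in for the PIC output (lengths in d_i).

import numpy as np

def pressure_profiles(x, y, vx, w, m, x_edges, dy=0.2):
    # Ram and thermal pressure profiles along x, using particles with |y| < dy/2,
    # bins of width x_edges[i+1]-x_edges[i] (e.g. 0.05 d_i), flow velocity = weighted
    # mean of vx per bin, thermal velocity = weighted standard deviation per bin.
    sel = np.abs(y) < 0.5 * dy
    x, vx, w = x[sel], vx[sel], w[sel]
    dx = np.diff(x_edges)
    idx = np.digitize(x, x_edges) - 1
    nbins = len(x_edges) - 1
    dens = np.zeros(nbins)
    vfl = np.zeros(nbins)
    vth = np.zeros(nbins)
    for b in range(nbins):
        in_bin = idx == b
        if not np.any(in_bin):
            continue
        wb, vb = w[in_bin], vx[in_bin]
        dens[b] = wb.sum() / (dx[b] * dy)                               # average density
        vfl[b] = np.average(vb, weights=wb)                             # flow velocity
        vth[b] = np.sqrt(np.average((vb - vfl[b]) ** 2, weights=wb))    # thermal velocity
    ram = dens * m * vfl**2          # ram pressure of this component
    thermal = dens * m * vth**2      # thermal pressure of this component
    return ram, thermal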
During the flow, the driver transfers energy and momentum to the background plasma, and as a result, the background develops a strong kinetic pressure. At the time represented in Fig. 9 a), the background plasma pressure equals the total and the dipolar magnetic pressures at x ≈ −1.4 d i , near the location of the magnetopause current of Fig. 8 b3). This observation supports the hypothesis that this current emerges from the standoff between the background and magnetic pressures. Fig. 9 b) shows the pressures for tω ci = 3, where we see the beginning of the reflection of the driver. The driver pressure equals the magnetic and dipolar pressures near the expected standoff. After this time, the driver is incapable of moving any further into the background because the magnetic pressure exceeds its kinetic pressure. The energy variations integrated over the entire simulation domain can also help us understand the system dynamics. Fig. 9 c) shows the variation of the total driver and background kinetic energies, ∆W kin,d and ∆W kin,0 , respectively, as well as the variation of the magnetic energy ∆W mag , and of the total energy ∆W tot . The kinetic energies of the background and driver plasmas consider all ions and electrons. At early times tω ci < 3, as the driver and background plasmas interact, the driver transfers its energy to the background plasma and the magnetic field. The total energy, given by the sum of the electromagnetic energy and the kinetic energies, remains constant during this period. After the driver is fully reflected by the dipole for tω ci > 3, the magnetic field loses most of its energy to the background and driver plasmas, leading to a drop in the magnetic energy. After tω ci ≈ 4, the background ions start to leave the simulation box, and the total energy is no longer conserved. The background kinetic energy remains approximately constant because the background plasma loses energy to the sink at the right boundary of the simulation but gains energy from the magnetic field. For both driver and background plasma, the ions carry most of the energy.
[Fig. 9 caption: a)-b) Kinetic and magnetic pressure profiles at y = 0 for simulation E3 during the occurrence of a) the magnetopause and b) the standoff of the diamagnetic current. The magnetic pressures are P mag = B 2 z /8π and P rel = P mag − B 2 0 /8π. The kinetic pressures P d and P 0 , corresponding to the driver and background plasmas, respectively, consider both the ions and electrons and the flow and thermal components of the velocity. c) Temporal evolution of the variation of the total kinetic energies of the driver ∆W kin,d and background ∆W kin,0 plasmas, the magnetic energy ∆W mag , and the total energy of the simulation box ∆W tot . The total energy is calculated by adding all the kinetic energies and the electric and magnetic energies. Since the background plasma is magnetized, the electric energy term is many orders of magnitude smaller than the magnetic energy term. The energies were normalized to the initial total energy of the driver ions W d,ini . The loss of energy conservation near tω ci ≈ 4 is caused by the escape of background plasma particles and magnetic field through the right-hand side of the simulation box.]
From Fig. 9, we can identify the positions where multiple pressure balances occur, and therefore develop insight into the pressure equilibria that are behind the structures of the current density synthetic diagnostics. Using the previously calculated pressures, we obtained the equilibrium positions where certain pressure balances manifested and plotted them in Fig. 10 alongside J y .
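The extraction of these equilibrium positions from binned profiles can be sketched as follows; the function below locates the balance closest to the dipole on the upstream side by looking for sign changes of the pressure difference, and its inputs are hypothetical profiles such as those returned by the binning sketch above.

import numpy as np

def closest_balance_position(x, P1, P2):
    # Upstream (x < 0) position of the balance P1 = P2 closest to the dipole at x = 0,
    # found from sign changes of P1 - P2 and refined by linear interpolation.
    d = P1 - P2
    s = np.sign(d)
    cross = np.where(s[:-1] * s[1:] < 0)[0]
    positions = []
    for i in cross:
        x0 = x[i] - d[i] * (x[i + 1] - x[i]) / (d[i + 1] - d[i])
        if x0 < 0.0:
            positions.append(x0)
    return max(positions) if positions else None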
[Fig. 10 caption: Temporal evolution of the current density J y at y = 0, with the closest locations to the dipole of different pressure balances for multiple times. The represented locations of pressure balances are the equilibria between the driver kinetic pressure P d and the total magnetic field pressure P mag = B 2 z /8π, represented by the solid line; between the background kinetic pressure P 0 and the pressure exerted by the relative magnetic field P rel = P mag − B 2 0 /8π, by the dotted line; and P d = P rel , by the dashed line. The results correspond to simulation E3 (see Table I).]
This analysis shows that the system has, in general, two magnetopause structures: one driven by the background, and one by the driver plasma. The former structure is defined by the balance P 0 = P rel . For the latter structure to form, the driver needs to have almost enough energy to push the diamagnetic current up to the magnetopause, defined by Eq. 1. This is illustrated in Fig. 10, where we show the location of the pressure equilibrium between the driver kinetic pressure and the total magnetic pressure, P d = P mag . As shown in Fig. 9, the current associated with the background magnetopause seems to overlap with the region of background and magnetic pressure balance. Unlike the driver, the background plasma is magnetized. If we neglect the compression of the magnetic field in the downstream region, the pressure balance that describes this magnetopause can then be estimated by the equilibrium of the kinetic pressure of the background plasma with the relative magnetic pressure, P 0 = P rel . In Fig. 10, we show that this pressure balance, represented by the dotted line, describes well the position of the current feature identified as the magnetopause between times tω ci ≈ 2 and tω ci ≈ 3. After tω ci ≈ 3, the magnetopause current is well described by the pressure balance P d = P rel , as illustrated by the dashed line in Fig. 10. In fact, after inspecting the phase spaces in Figs. 5 a3) and b3), we can observe that a combination of driver plasma particles (separated from the bulk distribution) and background ions pushes the dipolar field and sets the position of the magnetopause. We stress that, because we are determining equilibria via MHD pressure balances but are checking the intersection between pressure curves with kinetic resolution, some caution must be taken to ensure that we are observing the equilibrium between pressures and not merely the interface between the different regions of interest. To ensure that the pressure equilibria were correctly obtained, the corresponding pressure profiles were always carefully inspected with additional diagnostics. F. Realistic parameters Due to the need for more extensive scans (and thus using physically equivalent but computationally feasible parameters), the simulations shown so far considered reduced ion mass ratios, cold plasmas, and higher velocities than the ones used in the LAPD experiments -see Table II. To ensure that the main results presented in the previous sections are also valid with realistic parameters, we have performed a set of simulations with parameters similar to those expected experimentally. Three simulations were performed, labeled as runs F1 to F3. Run F1 employs realistic mass ratios m i,d /m e = m i,0 /m e = 1836.
Additionally, run F2 also considers a ratio between the electron thermal and flow velocities close to the ones expected for the LAPD experiments, namely v the,x /v 0 = 2.5 and v thi,x /v 0 = 0.033, leading to higher temperatures than in the previous simulations, and thus allowing possible thermal effects on the system. Finally, run F3 considers the same electron thermal velocity ratios of F2 but the standard reduced mass ratios. The ∆B z and J y plots for these simulations are shown in Fig. 11. Note that, due to changes in m i /m e , the spatial and temporal scales were recalculated for the new parameters. Once again, the magnetic dipole moment for the three simulations was adjusted to ensure that L 0 = 1.8 d i . As expected, these simulations show the same main structures discussed in the previous sections. We observe the typical reflection of the compression of the magnetic field and the current structures of the magnetopause and diamagnetic cavity. However, some differences are also visible. In Figs. 11 a1) and b1), i.e. for the realistic mass ratios but cold plasmas simulation, we observe a stronger filamentation of the plasma flow reflected off the dipole and a thinner diamagnetic current. This is because d e is the characteristic length scale of the current layer and we have lower d e /d i values for larger m i /m e . Figs. 11 a2) and b2), for the simulation with higher temperatures, show no major differences with Figs. 11 a1) and b1), even though there is a significant increase in the thermal velocities. In Figs. 11 a3) and b3), however, we observe significant differences for reduced mass ratios with realistic thermal velocity ratios. In particular, we observe in the current density plot smoother magnetic and current structures and less defined background waves between the magnetopause and the dipole. We also observed for increased ion thermal velocities, for example, v thi /v 0 ≈ 0.25, that the background waves are no longer visible. Additionally, other simulations were performed to look for possible changes with realistic parameters. A simulation with a lower flow velocity v 0 = 0.01 c and realistic thermal velocity ratios lead to no significant features observed, and the obtained synthetic diagnostics were very similar to the ones in Figs. 11 a3) and b3), meaning that the system scales well with v 0 . Another simulation was performed to observe if the shape of the initial density profiles of the plasmas would affect the main results. Namely, the constant density profiles used on both the driver and background plasmas were replaced by Gaussian density profiles with a typical gradient scale σ = 1 d i on the edges of the plasmas. This simulation did not show meaningful differences, in agreement with previous plasma coupling works, which observed that the leading edge of the plasmas evolves similarly for different initial density profiles 45 . G. Finite transverse size For simplicity, and because we were more interested in studying the system along the axis of symmetry y = 0, the previous simulations only considered a driver with infinite width L y and a length of L x = 2 d i . In the experiments, however, the drivers had a width comparable to their lengths and did not have the sharp boundaries used in the simulations. To investigate if and how our results are modified with a more complex-shaped driver, we 11. Temporal evolution of a) the variation of the magnetic field ∆Bz and b) the current density Jy at y = 0, for the simulations with similar parameters to the experiments. 
Run F1 considers realistic mass ratios for the driver and background plasmas and low ratios between the thermal and flow velocities; run F2 uses realistic mass ratios and thermal velocity ratios close to the ones expected in the experiments; run F3 uses the realistic thermal velocity ratios but reduced mass ratios. performed a simulation with a finite width, semi-circularshaped driver plasma. This driver is initially defined with the conditions (x + 7.25 d i ) 2 + y 2 < (3.25 d i ) 2 and x > −6 d i and has length L x = 2 d i and width L y = 6 d i . Fig. 12 shows the results of this simulation and includes the initial shape of the driver in Fig. 12 a). Due to the finite width of the new driver and its particular shape, we should expect to see significant differences in the regions of the simulation plane far from y = 0. In the total ion density plot of Fig. 12 a) for a time tω ci = 3, when there is a strong interaction of the driver with the dipolar magnetic field, we observe the propagation of waves at the lower and upper sides of the dipole caused by the finite width of the driver, that was not present for infinite width drivers. In Figs. 12 b) and c), we see the usual magnetic and current density plots at y = 0 for this simulation. By shortening the driver plasma width, the background particles escape from the bottom and top regions of the simulation box, and the driver has more difficulty holding the magnetic decompression in the background region. The decompression, therefore, occurs quicker for finite drivers, as seen in Fig. 12 b), leading to short reflections of the magnetic compression. Although this complex-shaped driver gets us closer to the experimental configuration, the simulations did not include all the properties of the experimental driver, as for example, the non-uniform density, velocity profiles of the plasmas and the flow divergence. Additionally, 3D effects should also be considered. Future simulations are planned to study the effect of these properties in the results. However, we expect that these features will not change the main results of the simulations. IV. CONCLUSIONS In this work, we have performed PIC simulations of mini-magnetospheres in the interaction between a plasma flow and a magnetized background plasma. In particular, we have successfully reproduced results from recent experiments performed at the LAPD, validating the experimental platform to study mini-magnetospheres in the laboratory. We have also explored an extensive parameter space defining the interaction, allowing us to i) determine how the main properties of the system change with the parameters and ii) identify the required conditions for the creation of a mini-magnetosphere. Our simulations have shown that some system features are present across multiple regimes. The initial flow of the driver expels the magnetic field in the upstream region, leading to a magnetic cavity, and compresses the downstream magnetic field. The driver travels through the background until the magnetic field pressure is large enough to counterbalance the driver plasma pressure. A fast decompression of the background magnetic field then follows. If the background decompression occurs after the total reflection of the driver plasma, then we can observe the reflection of the compression of the magnetic field. To see this feature, the driver needs to be short enough to anticipate the driver reflection relative to the decompression but sufficiently long to ensure that it can get close to the dipole. 
For the super-Alfvénic flows considered, the driver particles are reflected upstream during the interaction with the background plasma and the magnetic field. The coupling velocity (i.e., the velocity at which the leading end of the driver travels through the background) is lower than the flow velocity and increases with the increase of the ratio between the driver and background densities. The coupling velocity and the length of the driver determine how far the driver can go through the background region without a dipole, for a uniform driver plasma. The interaction of the plasmas with the dipole results in two magnetopauses. The first describes the balance between the kinetic pressure of the propelled background plasma plus the pressure of the plasma internal magnetic field and the total magnetic pressure. The seconds describes approximately the balance between the kinetic pressure of the driver plasma separated from the bulk distribution and the relative magnetic pressure. Using simulations with different dipole moments, we have shown that, for lower magnetic moments, the driver and background standoffs are closer to the center of the dipole, and the magnetopause current is more clearly identified than for higher magnetic moments. Furthermore, it is also easier to separate the magnetopause and diamagnetic currents for lower magnetic moments, consistent with experimental observations. In the simulations performed, we also observed the formation of waves in the background plasma region, between the magnetopause and the center of the dipole, where the magnetic field gradient was significant. These waves result from the excitation that always followed the formation of the magnetopause and were only observed for background plasmas with relative low ion thermal velocities. This condition may explain the absence of these waves in the experimental plots. Most of the simulations presented in this work were performed in idealized configurations. In particular, we used reduced ion-to-electron mass ratios, unrealistically high flow velocities, a simple flat-top driver density profile, and neglected thermal effects. In Sec. III F and III G, we presented simulations that drop some of these simplifications. Replacing reduced ion mass ratios with realistic ones and considering high thermal velocities ratios close to the obtained in the experiments did not lead to significant changes in the results. The same occurred when considering smoothed density profiles. It was also possible to conclude that the main features of the system scaled as expected with the absolute value of the driver flow velocity. We also presented a simulation to study possible effects associated with the complexity of the experimental laser-ablated driver. A simple circular segment-shaped driver was considered and led to similar results in the axis of symmetry as the infinite width driver simulations. However, wave-like structures were observed on both the bottom and upper sides of the dipole. For future studies on the regions outside the axis of symmetry, the driver shape and complexity must be considered. Additionally, we also performed other parameter scans related to the complexity of the driver. For instance, we performed simulations where the driver ions were heavier than the background ions to simulate the small role of the carbon ions in the experimental driver. These studies showed no significant differences to the lighter ions simulations. 
In conclusion, the simulations were consistent with the LAPD experimental results, and the multiple parameter scans performed established the formation conditions of the main features of mini-magnetospheres. In future work, we intend to explore the features present on the sides of the dipole, explore anti-parallel magnetic field configurations, perform 3D simulations, and consider even more realistic properties of the driver.
Communicative Language Teaching in Public Universities in Afghanistan: Perceptions and Challenges — While Communicative Language Teaching (CLT) has been advocated in Afghanistan, little is known about the perception of Afghan EFL (English as a Foreign Language) teachers on CLT. This study aims to investigate the perceptions and challenges of CLT in Afghan public universities from the perspectives of EFL teachers. The study employed a mixed-method approach comprising survey questionnaires and a qualitative interview. A sample of sixty-two Afghan EFL teachers was selected to participate in a survey questionnaire, while five were interviewed. Findings from the questionnaires and interview showed that Afghan EFL teachers have positive perceptions regarding CLT. The results also showed that the education system is one of the significant challenges for CLT implementation in Afghan public universities. The current study is valuable for policymakers, teachers, and students for improvement of EFL classes in Afghanistan. I. INTRODUCTION With a history of about four decades of war, Afghanistan struggles to rebuild its education sector by improving the primary education quality, training teachers, preparing learning materials, and strengthening the Ministry of Education as the administrator of the education system (Dandawate & Dhanamjaya, 2019). The Afghan government has also acknowledged the importance of English and introduced English language courses as compulsory subjects from the primary up to the tertiary educational levels (Singh & Sadri, 2019). From 1985 to 2004, English was only taught starting from grade seven; today, English is taught as a compulsory subject from grade four of school (Alamyar, 2017), indicating the significant importance the English subject has received. Moreover, English has gained recognition as a language of trade, politics, and employment both in the private or public sectors (Alamyar, 2017). For example, many international non-governmental organizations such as United Nations (UN) and United States Agency for International Development (USAID) have offices in the country, and they require employees who have English skills. Likewise, although English was not a requirement for governmental positions from 2001-2008, the current situation portrays a different picture. Knowledge of the English language seems advantageous for job seekers in the public sectors as many offices (e.g., the Ministry of Foreign Affairs) have started to have dealings requiring the use of English. With all these new changes and development, more and more Afghans are motivated to acquire and improve the mastery of the English language, and the teaching and learning (TnL) of English have also received increasing attention. However, being in a war-stricken country, many schools, universities, and other educational institutions suffer from the lack of the proper infrastructure and equipment for effective TnL of English. Most of these educational premises do not even have the basic equipment such as projectors or DVDs. If they do, the equipment is not suitable for modern language TnL. The dominance of traditional teaching methods such as Audio-lingual and Grammar-translation further contributes to the problems associated with ineffective TnL of English (Hikmat, 2009). Noori (2018), however, revealed that Afghan EFL teachers were very positive about using Communicative Language Teaching (CLT) as an approach to improve the scenario of English language teaching in Afghanistan. 
Therefore, this study aims to investigate in greater depth the Afghan EFL teachers' perceptions about CLT and the challenges they face in the implementation of CLT in their lessons. II. REVIEW OF LITERATURE The whole concept of CLT focuses on developing L2 students' communicative competence, hence the emphasis on teaching English for communication (Hymes, 1972;Richards and Rodgers, 2001;Ying, 2010). In CLT classrooms, students are taught to become users of the target language (Hymes, 1992) that can handle meaningful communication with suitable linguistic proficiency in different social contexts (Dos Santos, 2020). Indeed, according to Hymes (1972), communicative competence covers both linguistic and social competence, i.e., the ability to "know when to speak, when not, what to talk about, with whom, when, where, in what manner" (Hymes, 1972, p227). In practice, CLT has been perceived as having the capability to engage learners in communication as a prerequisite for the development of communicative competence (Savignon, 2007), unlike the established traditions that emphasize learners' formal knowledge acquisition. CLT is a combination of various techniques and goals to improve students' components of communicative competence, namely grammatical, discourse, functional, sociolinguistic, and strategic competence (Brown, 2000;Canale & Swain, 1980;Savignon, 1997). Communicative activities in the classroom (e.g., games, role plays, and problem-solving tasks) offer learners opportunities to practice their communication skills meaningfully in different contexts and take on different roles (Ozsevik, 2010). All these could equip students with essential and relevant skills of communication. Research on the perception of CLT implementation in ESL or EFL contexts have consistently shown that teachers are optimistic about the benefits of CLT on their students, despite the challenges that CLT presents in their context. For example, a study among 75 secondary school teachers in Iran (Anani Sarab et al., 2016) revealed that, while the teachers agreed with the principles of CLT, its implementation had to start with improvements in various aspects including teacher training and teaching materials, and some influential contextual factors, particularly the class size. Similarly in Pakistan, Ahmad & Rao (2013) found that teachers were enthusiastic about applying CLT in their classrooms. Yet, lack of appropriate materials, grammar-based examinations, and insufficient teacher training were some of the problems that must be overcome. In another study, Huang (2016) reported that Taiwanese teachers agreed that cultivating English language proficiency among students was necessary. The teachers, however, were concerned with their insufficient communication proficiency and confidence in implementing CLT. In Thailand, Kwon (2017) investigated six teachers using the interview method. The results indicated that the teachers were very optimistic that the implementation of CLT would improve their students' English language proficiency. However, low English language proficiency among both teachers and students did not allow a full use of English as the medium of instruction (MOI). In short, although many curricula have shifted their focus to CLT from traditional theories, mismatch still prevails between theories and practice (Littlewood, 2007) and much literature shows that traditional methods are still commonly used in most EFL settings (Littlewood, 2007;Rao, 2013;Li, 1998). In Afghanistan, the scenario does not differ much. 
In his study involving Afghan EFL lecturers, Noori (2018) found that while the lecturers already put CLT into practice, they disagreed that it was effective. Various challenges were mentioned, ranging from large classes, a grammar-based focus, and weak support from the administration to student-related issues, such as low English language proficiency and low motivation to participate in lessons. In a case study involving two English teachers, Faizy (2020) discovered that they commonly used their mother tongue and focused on error correction in their teaching. While these practices run counter to CLT principles, the constraints faced, including students' poor language proficiency, grammar-based examinations, and large class sizes, would only allow for teacher-centered teaching. Kakar et al. (2020) agreed that large and crowded classes limit teachers' ability to give individual attention to students, not to mention opportunities for each student to practice communicative skills. Worse, some students are reluctant to shoulder responsibility for their learning when teachers switch from teacher-centered to student-centered teaching (Kakar et al., 2020). In short, although CLT would improve students' communicative competence, the issues that CLT presents cannot simply be ignored, particularly when the challenges come from various aspects. CLT implementation in Afghan English language classes is increasingly popular, yet the challenges that teachers and students face remain. Thus, this paper aims to investigate the perceptions of Afghan public university EFL lecturers regarding the implementation of CLT in their classrooms, and the challenges they face or perceive in relation to CLT implementation.

III. METHODOLOGY AND RESEARCH DESIGN

This study used a concurrent mixed-method design comprising a quantitative questionnaire and qualitative interviews, which offers a powerful combination of quantitative and qualitative data (Miles et al., 1994). Sixty-two EFL lecturers from 20 public universities and three higher education institutions were selected for this research through convenience sampling, where participants were selected based on their willingness and accessibility (Creswell, 2012). This research chose convenience sampling due to the foreseen difficulty of accessing the participants, i.e., the lack of Internet access in some areas and the difficulty of travelling from one university to another during the data collection phase. Two cross-sectional survey questionnaires were used. The first questionnaire, adopted from Karavas-Doukas (1995), gathered the perceptions of EFL lecturers on CLT. This questionnaire used 1-5 Likert scales and contained five themes with 24 items, namely place and importance of grammar (1, 3, 12, 17, 23), group and pair work (2, 9, 13, 21, 22), quality and quantity of error-correction (6, 10, 14, 15), the role of the teacher in the classroom (7, 16, 19, 24), and the role of learners (4, 5, 8, 11, 18, 20). According to the questionnaire developer, a mean above three is considered positive, and a mean lower than three is negative. For the reliability of the questionnaire, the split-half coefficient has been reported as 0.88. The second quantitative questionnaire was adopted from Ozsevik (2010) and consisted of 18 items. The first six (1, 2, 3, 4, 5, 6) were related to teachers and the next four (7, 8, 9, 10) were student related. Items 11, 12, 13, 14, and 15 were related to the education system, and the last three items (16, 17, 18) were related to CLT.
This questionnaire also used a Likert scale of 1-4 where 1=not a challenge at all, 2=a challenge, 3=a mild challenge, and 4=a major challenge. To collect the data, first, the quantitative questionnaires were sent online to the respondents. Online data collection is widespread nowadays and can help gain systematic and organized data (Skarupova & Blinka, (2013). After the questionnaires were collected, interviews were conducted with five respondents to gather in-depth information about CLT in Afghanistan. Later, both quantitative questionnaires (questionnaire of perceptions and questionnaire of challenges) were analyzed through SPSS IBM Version 25 for descriptive statistics (mean, standard deviation, and frequency), while the interviews were analyzed through thematic network analysis which is a very flexible data analyzing method that can be modified for different purposes (Braun and Clarke, 2006). A. Questionnaire for Perceptions As mentioned, the quantitative data were analyzed through SPSS IBM Version 25 for descriptive statistics (mean, standard deviation, frequency) to find the perceptions of Afghan EFL lecturers. The results show the respondents had positive views towards all the five principles included in the questionnaire (refer to Table 1). The highest mean score of all principles examined was the role of teachers in CLT classrooms, with the mean value of 3.91. For the other principles, namely the role of learners and their contributions in learning, the Place /Importance of Grammar, the Pair/Group Work, and the Quality/Quantity of Error-Correction, their mean values are 3.73, 3.48. 3.38 and 3.17, respectively. In the following sections, all the data obtained from the questionnaire with their descriptive statistics are explained. For ease of reference, Tables 2, 3 and 6 below present details of the respondents' responses on all the five principles, together with the individual items associated with each principle. B. Perceptions of Afghan EFL Lecturers about CLT and Its Principles The first principle is the role of teachers in CLT which has received the highest mean among all the five principles examined. As shown in Table 2, 61.3% of the respondents believe that the teachers' role as the authority and instructor in a language classroom is no longer adequate to describe them as teachers. About 77% of the respondents acknowledge that transmission of knowledge is one of the differing roles teachers should play. Next, 69.4% of the respondents agreed that the role of teachers is to impart knowledge through various activities such as writing and giving examples. Most of the respondents (87.1%) believe textbooks alone are not sufficient to meet the needs of students and that teachers must use supplementary materials to meet the needs of students' learning. All these items highlight that most Afghan EFL lecturers are aware of the different roles that they have to assume with the CLT implementation. Next, the Afghan EFL lecturers also expressed positive views on their perceptions of students' roles and contributions in CLT. About 91.9% of the respondents agreed that all classroom tasks and teaching activities must suit the students' needs. Another 83.5% believe that learner-centered teaching approaches can contribute to students' potential and make them responsible for their learning. Approximately 66% of the teachers think that CLT would not be effective in a large class, implying the requirement for small classes to implement CLT in their context. 
Regarding students' acquisition of the language, about 68% of the EFL lecturers agreed that the communicative use of language, i.e., students learning the language through using it, would be effective. This suggests these EFL lecturers have confidence that CLT could help their students with the language. However, when asked whether students should be allowed to suggest contents and/or activities to be conducted in class, 43.5% answered that students do not have the right knowledge to do so, while 38.7% thought otherwise. This finding is significant as more than one-third of the respondents indicated their willingness to allow students to contribute to their language learning. However, perhaps the approach would be too drastic, as 42% of the respondents agreed that the students were not used to taking responsibility for their own learning. In Table 3, findings regarding the third principle, the importance of pair and group work, evidently show that, generally, most of the respondents view this principle positively. A high percentage (92%) of the EFL lecturers agreed that group work is essential for building cooperative relationships among students, which could eventually lead to genuine interactions. About 74% of the lecturers agreed that group work can help learners develop their autonomy for learning, thus contributing towards rewarding classroom experiences. This perception is further emphasized as 51.6% of the respondents disagreed that group work has little use. This response indicates that the lecturers may not have difficulty monitoring students' performance in group work. Similarly, about 56% of the respondents disagreed that students do their best when taught as a whole class and that small group work can never replace formal instruction even by a competent teacher. These findings imply that the lecturers see students working with peers as valuable to learning. In fact, 59.7% of the respondents view group work activities as not difficult to prepare and consider the time spent preparing the activities worthwhile. In general, we can conclude that the EFL lecturers realize the efficiency of cooperative work in a CLT classroom. Next, the findings show the respondents' views on the importance of grammar in CLT (see Table 4). It should be noted that in Afghanistan, the Grammar Translation Method has been a dominant teaching approach in English language classrooms. Thus, it is not a surprise to have a mean of 3.48 (refer to Table 2), which indicates a positive perception regarding the role of grammar in CLT. About 69% of the participants also agreed that knowledge of grammatical rules could not guarantee students' ability to use the language, nor that they would be fully capable of communicating with native speakers (56.4%). Accordingly, 61.3% of the respondents agreed that grammar should be taught as a means to an end, not as an end in itself, implying that grammar should be taught to help students use the language correctly. Relatedly, about 48% of the respondents agreed that the direct instruction of grammar rules is essential for effective communicative purposes. However, when asked whether grammatical accuracy should be an important criterion to judge language performance, the respondents were split, with 42% each expressing agreement and disagreement. This finding is fascinating as it could clearly reflect the notion of fluency versus accuracy in language teaching (see Brumfit, 1984) that many language lecturers are torn between.
In this context, as Grammar Translation Method has long been a dominant approach, it would be expected to find Afghan EFL lecturers who wish for students to use the language grammatically, and those who wish students to be fluent, particularly with the implementation of CLT. The last item investigated is error-correction. Based on Table 4, generally, the respondents formed a positive perception about error-correction, although it has the lowest mean (3.17) among the five principles. From Table 4, 74.2% of the respondents agreed that lecturers should provide feedback that focuses on appropriateness rather than linguistic form. However, much correction was considered a waste of time by half of the respondents and in fact, about 55% of the respondents felt that lecturers should not correct all grammatical errors produced by students. In addition, 46.8% of the respondents disagreed that CLT would produce fluent but inaccurate language users. The findings related to error correction imply the balance that the lecturers intend to achieve with CLT utilization. Although the lecturers have confidence in CLT as a suitable method to help their students improve in language learning and language use, their view on grammatical language is also strong, considering the decade-long focus on grammar teaching and the grammar-focus examinations. In general, based on the findings gathered on the respondents' views on the five principles of CLT included, the THEORY AND PRACTICE IN LANGUAGE STUDIES lecturers involved in the survey were very positive with the prospect of improving their students' learning and use of language through the utilization of CLT. The findings also indicate that the teachers were willing to assume new roles, adopt different teaching approaches and reduce their classroom authority while still maintaining a critical aspect of the language, i.e., the accuracy of the language use, which has been dominant in their context. The following section discusses the challenges that are associated with CLT as perceived by the respondents. C. Challenges in Communicative Language Teaching for Afghan EFL Lecturers This section presents the descriptive statistics based on findings from the questionnaire for challenges in CLT. Of the 62 participants in the study, 83.87% (52 persons) responded that they apply CLT in their classes, while the other 16.12% (10 persons) responded otherwise. Below are tables that provide detailed descriptive statistics for each challenge with all the statements included in the questionnaire, for lecturers applying CLT (indicated by Y) and those who do not (indicated by N). Table 5 shows that the education system is the first big challenge for CLT application perceived by both groups. The first area of concern is the large class size. Most lecturers (those who apply -51.9% and those who do not apply CLT -60 %) agreed that large class size is a significant challenge in CLT implementation. In addition, 77 % of the lecturers applying CLT consider grammar-based examinations a challenge, which is agreed by 60% of their colleagues who do not use CLT. As for the lack of authentic materials for CLT, a significant percentage of lecturers who use CLT agreed that this is a challenge; interestingly, while 50 % of those who do not use CLT decided that this was a challenge, another 40% believed that this was a mild challenge. 
When asked about the traditional views on teachers' and learners' roles that are not compatible with CLT, lecturers who use CLT expressed that it would be a challenge (agreed by 67.3%), while those who do not use CLT disagreed this as a challenge. In terms of lack of support from the administration, quite a big percentage of those who use CLT and those who do not regarded this as a challenge. TABLE PERCEPTIONS OF LECTURERS ON EDUCATION SYSTEM-RELATED CHALLENGES Student-related challenges portrayed in Table 6 are the second-highest challenge for lecturers who apply CLT. From the table, a large percentage of lecturers (75% of lecturers that use CLT and 70% of those who do not use CLT) believe that the students' low proficiency is a challenge in CLT. Furthermore, 65.4 % of the lecturers utilizing CLT agreed that the students' passive learning style is a challenge. Similarly, more than half of the lecturers who use CLT admitted that students who resist participating in class and lack the motivation to develop communicative competence would pose a challenge to CLT implementation. In contrast, about 70 to 80% of lecturers who do not use CLT did not see these three characteristics of students as a challenge. While student-related challenges are the second-highest challenge perceived by the lecturers who apply CLT, lecturers who do not use CLT perceived CLT-related challenges as the second-highest challenge (refer to table 9). These could probably be the assumptions that may have driven them to regard CLT as unfavorable. Indeed, their assumptions were not baseless, as from Table 7 we could see that lecturers who use CLT formed significant percentages about these ICT-related challenges. For example, about 73% of lecturers who use CLT (comparatively to 70% of their counterparts) answered that the lack of effective and efficient instruments to measure communicative competence formed a challenge to them. In addition, while only 40% of those who do not use CLT felt that Western education assumptions were not suitable for Asian contexts and that this was a challenge, the percentage is more prominent (58%) for the other group of lecturers. Finally, half of the respondents from each group agreed that it is a challenge that CLT does not take into account the differences between ESL and EFL contexts. Table 8 shows the findings on teacher-related challenges, and data in Table 9 illustrates that teacher-related challenges were the least challenges by both groups of lecturers. As shown in Table 9, the mean of teacher-related challenges for lecturers who do not use CLT is 2.55, while for their counterparts who use CLT, the mean is 2.10, indicating that these challenges form a mild challenge. In fact, a high percentage (more than 70%) of teachers in each group (referring to Table 8), for example, agreed that lack of knowledge about the appropriate use of language or insufficient proficiency in the English language among the teachers was not a challenge. However, there is one exception, which is related to lack of time to develop teaching materials. Both groups of lecturers felt that the lack of time to develop materials for communicative classes was a challenge (agreed by 51.9% of those who use CLT and 60% of those who do not). The lecturers have conflicting views about whether each item is a challenge for the last three items in Table 8. 
First, while more than half (55.8%) of lecturers who use CLT agreed that lack of opportunities to attend CLT courses is a challenge, only 40% of their counterparts shared the same view. Next, while most lecturers (75%) who use CLT claimed that teachers' lack of knowledge about the English culture is not an issue, 60% of their counterparts thought otherwise. Likewise, although about 81% of the lecturers who use CLT felt that lecturers' misconception about CLT was not a challenge, their counterparts were split (50/50) about this factor as a challenge. In short, most of these items were collectively seen as not seriously challenging CLT implementation. Perhaps, the lecturers who did not use CLT in their lessons have their own reasons, which may not be covered in this research. D. Findings Derived from the Interview As mentioned, five respondents were interviewed about their perceptions and challenges faced in relation to CLT implementation in Afghanistan public universities. Based on the analysis, several themes emerged. Most findings are consistent with those from the questionnaire, and they further explain the respondents' perceptions of CLT and the challenges encountered. One of the perceptions that may have discouraged the lecturers from using CLT is that CLT is an approach for teaching speaking only and CLT focuses solely on the speaking ability of students. Therefore, CLT is considered an approach that is in contrasts with the requirement of the curriculum that emphasizes grammar focus. With this view, some Afghan EFL teachers have been reluctant to employ CLT. This is clearly a misconception as the communicative competence as defined in CLT emphasizes the combination of discourse sociolinguistic, strategic and grammar competences (Canale & Swain, 1980). About challenges, students' low English language proficiency and a mixture of students of various proficiency levels in a class are strongly viewed as a challenge and have become obstacles in the CLT implementation. The interviewees further commented that students' proficiency issues combined with other challenges, namely large-size classes, inadequate teaching materials, and teachers' lack of knowledge in CLT, may affect the effectiveness of CLT implementation in their classes. In short, these additional findings from the interview data are insightful and further enlighten the results of the questionnaires. For instance, while the questionnaire data showed lecturers' great interest in using CLT, the interview data revealed that they still have some misconceptions regarding CLT. In other words, although Afghan EFL lecturers are positive about CLT, they require the appropriate training on aspects of CLT, including the underlying theory and the teaching methodology. It is hope that with appropriate knowledge of CLT, the lecturers will be even more optimistic about employing CLT to help their students improve their speaking ability and all aspects of English, including grammar. V. DISCUSSION AND CONCLUSION This study was conducted to investigate the perceptions of Afghan EFL lecturers about CLT implementations and its challenges in Afghan public universities. The overall findings reveal that Afghan EFL lecturers have positive perceptions about CLT despite many challenges derived from their context. The lecturers' positive perceptions resonate with studies in Afghanistan (Noori, 2018) and in other EFL contexts (e.g., Rahimi & Naderi, 2014;Vaezi, & Abbaspour, 2014). 
In fact, a study investigating Iraqi lecturers using the same questionnaires also produced very similar results (Sherwani & Kilic, 2017). The findings also closely reflect those of Chang (2011), who investigated Taiwanese EFL lecturers. The apparent similarity concerns the role of teachers in CLT, suggesting that EFL lecturers in both Afghanistan and Taiwan recognize the significant roles of teachers in CLT. The importance placed on the role of teachers in CLT coincides with the view that teachers are vital in any teaching methodology (Ellis, 1996), including in a learner-centered classroom, such as in CLT. While Ellis (1996) believes in the requirement of lecturers' proficiency and resources in CLT, Larsen-Freeman (2000) emphasizes the multiple roles lecturers play in CLT classrooms, including as facilitators, advisors, and co-communicators. Similarly, Littlewood & William (1981) state that lecturers in CLT classrooms have to participate in class activities so that students can actively negotiate meaning. Thus, when the Afghan EFL lecturers believe that the teachers' role is important, they may be indicating their belief in the different roles that they have to play during lessons. Nonetheless, it is important to note that, as Afghan society has a top-down hierarchy in its social relationships, the role of lecturers as co-communicators may not be optimally exercised, as students may feel awkward having teachers as co-communicators in classroom activities. The finding that Afghan EFL lecturers placed importance on the role of students in CLT implies their enthusiasm to utilize CLT in their lessons. There could be ample reasons why the lecturers support CLT for their students. First, it may be related to the traditional methods that focus more on grammar than on communicative use of the language, thus hindering the development of students' oral communicative skills. As communication in English has continued to become vital (Hu, 2002), when students cannot communicate in English, the lecturers may want to find alternatives that can improve the situation (Hikmat, 2009). The next reason could be triggered by the lecturers' own educational experiences. In this study, almost 65% of respondents had their higher education abroad; they may have personally gone through a better system elsewhere and/or engaged in CLT. Upon returning to work, they are possibly inspired to help Afghan students learn and improve their language skills through CLT. Thirdly, according to Afghanistan's National Higher Education Strategic Plan 2010-2014, English was to become the MOI by 2015; nonetheless, the plan did not materialize. The students' English language proficiency remains low. Afghan EFL lecturers may be motivated to prepare the students for the upcoming change. Should the decision to make English the MOI be revived, students would then have the appropriate language proficiency to function in the new academic environment. Yet, if the current situation persists, Afghan EFL lecturers may use CLT to adopt 21st-century teaching methods which focus on communication, culture, collaboration, and critical thinking. Regarding the challenges of CLT, the data showed that the top challenge was the education system, which covers aspects such as the curriculum, the administration, facilities and infrastructure, teacher training, and teaching load.
These findings are not uncommon: Noori (2008) for example, found that lack of support in administration, large classes, heavy teaching load, students' low proficiency, and grammar-based exams formed challenges to CLT implementation. Thus, classes of 65-200 students mentioned by interviewees in this study are certainly a significant problem. According to the American Council on Teaching the Foreign Languages (ACTFL), the maximum number of students in one class should be no more than 15. The National Education Association (NEA) and the Association of Department for Foreign Languages (ADFL) also recommend 18 students per class. These suggested numbers would enable teachers to have sufficient time for teacher-student and student-student interactions and close monitoring of the students' progress. In Afghanistan, with big EFL class size, it is problematic to meet these suggestions, thus jeopardizing the potential success of CLT. These challenges as expressed by the lecturers require urgent attention so that language TnL in the country would be advanced, appropriate with the trend worldwide. And as propagated by Li (1998), the mismatch between what is required by CLT and what the system allows should be resolved to reap the benefits of CLT. Findings gathered also highlighted other challenges such as the lack of support and inadequate infrastructure, the lack of appropriate resources, and the lack of teachers' training and thus knowledge of CLT. These did not differ much from those found in other EFL settings (e.g., Abate, 2014; Rahman, 2015; Anani Sarab et al., 2016; Huang, 2016; Kwon, 2017). Regrettably, some of these constraints have contributed in the unwillingness to use CLT as shown by the questionnaire results. Instead of taking risks to use CLT in a less-than-adequate environment, the lecturers remain with the traditional methods which they are very familiar with. These findings, nonetheless, are valuable as insights to the authority on how to improve some TnL practices in Afghanistan. This study has contributed new insights into the academic community of Afghanistan, particularly on research on CLT. The insights on perceptions of the lecturers on CLT and the challenges that could hinder effective implementation of CLT in Afghanistan have been discussed. As a widely used teaching approach that is suitable with the requirements of 21st century learning, CLT should be advocated as a teaching method in Afghan EFL classes to help Afghan students acquire good English language communication skills that may open many more doors of opportunities for young Afghans in the academic field and future careers. VI. RECOMMENDATION FOR FUTURE STUDIES Studies investigating both teachers' and students' beliefs about CLT should be conducted to determine the level of preparedness that teachers and students have on CLT implementation. Next, as issues related to the administration have been highlighted as a challenge, an investigation focusing on perceptions and views from the administrative side would further balance, if not complete the insight into CLT implementation. Finally, a study with a more rigorous methodology that includes other than questionnaires and interviews plus a larger number of respondents may provide better insight into the prospects of utilizing CLT in Afghanistan. VII. LIMITATIONS OF THE STUDY Since this study only covered 23 universities and higher education institutes and 62 EFL lecturers, the results are not generalizable to all universities in Afghanistan. 
Likewise, since only five respondents were interviewed, the results may not be comprehensive enough to portray the actual situation. In addition, due to the limited Internet coverage, this study only included those lecturers who had the Internet access. Including the views of lecturers who could not get the Internet access may have provided better insights about CLT in Afghan public universities. Noor Mala Ibrahim is a senior lecturer in Language Academy, UTM, Johor Malaysia. She has more than 30 years of teaching experience. Her interests include Teaching English for Specific Purposes, Academic Writing, Corpus Linguistics, and Discourse Analysis. Mujtaba Jamal is an English language lecturer at the Department of English Language and Literature, Faculty of Languages and Literature, Ghazni University, Afghanistan. He is a member of the Academic Council of Ghazni University and has an M.A. in Linguistics from Osmania University, India.
Study on electronic structure and optical properties of doped ZnO system In this paper, the electronic structure and photoelectric properties of P and Cu doped ZnO systems have been studied by Density functional theory method. The results show that the formation energies of ZnO-P-Cu, ZnO-P-2Cu, ZnO-P and ZnO-Cu systems decrease in turn Compared with the intrinsic ZnO system, the ZnO-P, ZnO-P-Cu, ZnO-P-2Cu and ZnO-Cu systems have higher activity, the band gap of ZnO-P and ZnO-P-2Cu systems is reduced, and the electron transition is easier. In the doped system, the peak of the dielectric function shifts to the left and increases, the absorption of the electron to the photon increases obviously, and the absorption spectrum appears red shift, from the calculated results, it can be concluded that P and Cu single-doped and co-doped ZnO have great influence on the electronic structure and optical properties of ZnO system, which provides a theoretical basis for further study of the influence of doping on the properties of ZnO. Introduction As a new type of Indirect bandgap material, zinc oxide has attracted much attention [1][2][3] in recent years due to its low dielectric constant, high photoelectric coupling rate, high chemical stability, excellent piezoelectric and photoelectric properties. The gap width of ZnO is 3.37 ev and the exciton binding energy is 60 meV [4] .Compared with other photoelectric materials, ZnO has potential applications in many fields [5][6][7] such as photoelectricity, ferromagnetism and thin film preparation. Researchers have been looking for ZnO products with good properties. The main methods are metal or non-metal doping, dye sensitization, noble metal deposition, and so on, the n-type semiconductor Zno has lower performance than p-type, so how to obtain high-performance p-type ZnO is the key to its wide application. The electronic structure and optical properties of N-Mn doped ZnO have been studied [8] . The results show that the optical Absorption Coefficient of N-Mn co-doped ZnO increases in the visible region, the electronic structure of Ag-N co-doped ZnO has been studied [9] , and the results show that the acceptor energy level of Ag-N co-doped ZnO is shallower than that of single doped ZnO, and the localization of hole state is decreased, the Electron structure and optical properties of Ce-N co-doped [10] , and the results show that Ce-N co-doped ZnO is a kind of potential p-doped ZnO, the p-type conductivity of C-cu co-doped ZnO has been studied by Ding Luocheng et al [11] . The results show that p-type ZnO can be obtained in the doped system When the ratio of C-Cu to ZnO is 1:2, a new semiconductor material with higher p-type, better electron migration, better conductivity and lower doping energy can be obtained Xiao Lijuan et al [12] have studied the latest progress of p-type doped ZnO thin films, discussed the difficulties in the preparation of p-type ZnO thin films and their solutions, and reviewed the latest progress in the study of the first principles of P-Cu co-doped ZnO, there have been no reports of this. The Crystal Structure, electronic structure and optical properties of P-Cu single-doped and co-doped ZnO have been studied by first-principles calculations, it is found that P-Cu doping exhibits better p-type characteristics and higher stability, which provides theoretical support for exploring the conduction mechanism of ZnO. 
Method of calculation

The energy band structure, density of states and optical properties of the P-doped, Cu-doped, P-Cu co-doped and P-2Cu co-doped ZnO systems have been calculated and analyzed. The lattice constants are a = b = 3.2342 Å, c = 5.1901 Å, α = β = 90°, γ = 120° [13]. The unit cell used in this paper is a 2×2×2 supercell based on the ZnO unit cell. When doping, P and Cu are used to replace the O and Zn sites in ZnO, respectively. The electron configurations of the atoms in the doped systems are Zn(3d¹⁰4s²), O(2s²2p⁴), P(3s²3p³), and Cu(3d¹⁰4s¹). The calculations use the CASTEP module in Materials Studio 8.0 with the GGA/PBE exchange-correlation functional. The Brillouin zone integration uses a 3×3×1 k-point grid, and the plane-wave cutoff energy is 340 eV. In the geometry optimization, the maximum displacement is 0.001 Å, the maximum internal stress is 0.05 GPa, the maximum interatomic force is 0.03 eV/Å, and the total energy converges to 1.0×10⁻⁵ eV/atom.

Formation energy and stability analysis

Table 1 shows the lattice constants, volume and formation energy of the optimized ZnO, ZnO-P, ZnO-Cu, ZnO-P-Cu and ZnO-P-2Cu systems. The formation energy is the physical quantity which indicates how easily the doped structure forms and how stable it is. The formula is

E_f = E_(ZnO-mP-nCu) - E_(ZnO) + m·E_O + n·E_Zn - m·E_P - n·E_Cu

In the formula, E_(ZnO-mP-nCu) represents the total energy of the P- and Cu-doped system, E_(ZnO) represents the total energy of the intrinsic system, E_P, E_Cu, E_O and E_Zn represent the ground-state energies of the P, Cu, O and Zn atoms, respectively, and m and n represent the numbers of doping and substituted atoms. Compared with ZnO, the defects in ZnO-P, ZnO-Cu, ZnO-P-Cu and ZnO-P-2Cu can easily cause a change of the crystal energy. The calculated results show that the cell volume and the c value of the ZnO-P, ZnO-Cu, ZnO-P-Cu and ZnO-P-2Cu systems increase; except for the ZnO-P system, the larger c/a values imply a lower lattice symmetry of ZnO and a shift of the impurity energy level toward lower energy, which is favorable for the formation of shallow acceptors. The formation energy of the ZnO-P and ZnO-Cu systems is lower than that of the ZnO-P-Cu and ZnO-P-2Cu systems. The formation energy of the ZnO-Cu system is the lowest, which indicates that this doped system is the easiest to form.
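As a simple illustration of how the formation energies in Table 1 can be evaluated from computed total energies, the sketch below implements the expression above. All numerical inputs are placeholders introduced here for demonstration only; they are not the total energies computed in the paper.

```python
# Hypothetical illustration of the formation-energy expression used above.
# All energies are placeholder values (eV), NOT the values computed in the paper.

def formation_energy(E_doped, E_ZnO, m, n, E_O, E_Zn, E_P, E_Cu):
    """E_f = E(ZnO-mP-nCu) - E(ZnO) + m*E_O + n*E_Zn - m*E_P - n*E_Cu,
    i.e., m O atoms are replaced by P and n Zn atoms are replaced by Cu."""
    return E_doped - E_ZnO + m * E_O + n * E_Zn - m * E_P - n * E_Cu

# Example call for a hypothetical ZnO-P-Cu supercell (m = n = 1):
E_f = formation_energy(E_doped=-13250.0, E_ZnO=-13260.0, m=1, n=1,
                       E_O=-432.0, E_Zn=-1714.0, E_P=-180.0, E_Cu=-1350.0)
print(f"Formation energy: {E_f:.2f} eV")
```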
Band structure analysis

Fig. 1 shows the energy band structure of the ZnO system before and after doping; (a), (b), (c), (d) and (e) are the energy bands of the ZnO, ZnO-P, ZnO-Cu, ZnO-P-Cu and ZnO-P-2Cu systems, where G, F, Q and Z are high-symmetry points in the Brillouin zone. The band gaps of the ZnO, ZnO-P, ZnO-Cu, ZnO-P-Cu and ZnO-P-2Cu systems are 0.734 eV, 0.688 eV, 0.169 eV, 0.410 eV and 0.378 eV, respectively. Less energy is needed for the excitation transition in the ZnO-Cu system, so this system can respond to excitation light even in a low-photon-energy environment. As can be seen from Fig. 1(a), the energy band of ZnO is a typical indirect bandgap structure, and the band gap of the system is similar to the calculated results in [15][16], but quite different from the experimental value of 3.37 eV [17]. This discrepancy in the band gap is mainly caused by the GGA approximation: the excessive consideration of the electron-electron interaction leads to the expansion of the valence band and the conduction band, which reduces the band gap width, but this does not affect our analysis of the band gap variation.

As can be seen from Fig. 1(b), the band gap of the P-doped system decreases, and the coupling of the P-3p and Zn-3d states is the main reason for the formation of a p-type degenerate semiconductor in the valence band of the P-doped system; the energy required for an electron to be excited by light from the Fermi level to the conduction band is at a minimum. As can be seen from Figs. 1(c), 1(d) and 1(e), different numbers of impurity energy levels are present in the doped band gaps, and the number increases in turn, indicating that the carrier concentration is increasing; the results show that the activities of the ZnO-P-Cu, ZnO-P-2Cu and ZnO-Cu systems increase in turn.

Density of states analysis

The density of states of the ZnO, ZnO-P, ZnO-Cu, ZnO-P-Cu and ZnO-P-2Cu systems is given in Fig. 2. It can be seen from Fig. 2(a) that the valence band of ZnO is mainly formed by the hybridization of the 3d state of Zn and the 2p state of O, the valence band top is mainly composed of the 2p state of O, and the conduction band is mainly contributed by the 4s state of Zn. It can be seen from Fig. 2(b) that the valence band of the P-doped ZnO system is mainly contributed by the 3d state of Zn, the 2p state of O and the 3p state of P, the valence band top is formed by the hybridization of the 2p state of O and the 3p orbital of P, and the conduction band bottom is mainly determined by the 4s state of Zn. It can be seen from Figs. 2(c)-2(e) that the density of states does not change much near the Fermi level, while the 3d state of Cu in the ZnO-P-2Cu system moves to the right with strong localization; the impurity energy levels in the band gap are mainly contributed by the 3d state of Cu. It can be seen that the Cu atoms have a great influence on the system, which enhances the localization of the system and results in the increase of the impurity energy levels in the forbidden band.

Fig. 3 shows the reflection spectra (a), absorption spectra (b), the imaginary part of the dielectric function (c) and the loss function (d) of the ZnO, ZnO-P, ZnO-Cu, ZnO-P-Cu and ZnO-P-2Cu systems. As can be seen from Fig. 3(a), the reflection peaks of the doped systems shift to the left and the reflectivity decreases obviously below 18.2 eV. In the calculated energy range, the reflectivity of the ZnO-P, ZnO-Cu and ZnO-P-2Cu systems is obviously lower, which may be due to the decrease of the conductivity of the ZnO films with increasing P and Cu doping. As can be seen from Fig. 3(b), the doped systems show an obvious red shift, which is caused by the decrease of the forbidden band width of the doped systems. When the forbidden band width decreases, the electrons are excited from the valence band to the conduction band more easily and need less energy, which induces a red shift at the absorption edge. In the doped systems, the number of absorption peaks increases obviously, which indicates that the transition probability of electrons from the valence band to the conduction band increases.
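For context, in first-principles optical calculations of this kind the absorption coefficient is commonly obtained from the real and imaginary parts of the dielectric function through the standard relation below; this relation is a general assumption about the workflow and is not quoted from the paper itself:

$$ \alpha(\omega) = \sqrt{2}\,\frac{\omega}{c}\left[\sqrt{\varepsilon_1(\omega)^2 + \varepsilon_2(\omega)^2} - \varepsilon_1(\omega)\right]^{1/2} $$

Under this relation, a shift of the main peak of ε2(ω) toward lower energy translates directly into the red shift of the absorption edge discussed above.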
Fig. 3(c) shows the imaginary part of the dielectric function of the systems. As a wide-bandgap material, the spectrum of ZnO is produced by electron transitions between the energy levels, and the dielectric peaks can be explained by the energy band structure and the density of states of the ZnO indirect bandgap. Compared with intrinsic ZnO, the main peaks of the imaginary part of the dielectric function of each doped system move to the low-energy region, which is related to the decrease of the forbidden band width, the lattice distortion and the formation of the impurity energy levels. The large amplitude of the dielectric peak in the ZnO-P system is mainly due to the enhancement of electron transitions between the impurity level and the conduction band caused by the introduction of an impurity level in the forbidden band. Fig. 3(d) is the energy loss spectrum. It can be seen from the diagram that the energy loss peak of pure zinc oxide is at about 18 eV, and doping makes the energy loss peak move to the low-energy end.

Conclusion

In this paper, the electronic structure and optical properties of P- and Cu-doped ZnO systems have been studied by using the first-principles ultrasoft pseudopotential method of density functional theory with the generalized gradient approximation. The results show that the band gap of ZnO is 0.734 eV, that the volumes of the ZnO-P, ZnO-Cu, ZnO-P-Cu and ZnO-P-2Cu systems are larger than that of ZnO, and that the formation energy of the doped systems decreases in turn, while the activity of the ZnO-P-Cu, ZnO-P-2Cu and ZnO-Cu systems increases in turn. The bandgap width of the ZnO-Cu and ZnO-P-2Cu systems decreases, the energy of the electron transition decreases, and the electron transition occurs more easily. The dielectric function peak of the doped systems shifts to the left and increases, the absorption of photons is obviously enhanced, and the absorption spectrum of the doped systems shows a red shift. From the calculation results, it can be concluded that P and Cu single-doping and co-doping have a great influence on the electronic structure and optical properties of the ZnO system.
Selective Fluorimetric Detection of Pyrimidine Nucleotides in Neutral Aqueous Solution with a Styrylpyridine-Based Cyclophane : A styrylpyridine-containing cyclophane with diethylenetriamine linkers is presented as a host system whose association with representative nucleotides was examined with photometric and fluorimetric titrations. The spectrometric titrations revealed the formation of 1:1 complexes with log K b values in the range of 2.3–3.2 for pyrimidine nucleotides TMP (thymidine monophosphate), TTP (thymidine triphosphate) and CMP (cytidine monophosphate) and 3.8–5.0 for purine nucleotides AMP (adenosine monophosphate), ATP (adenosine triphosphate), and dGMP (deoxyguanosine monophosphate). Notably, in a neutral buffer solution, the fluorimetric response to the complex formation depends on the type of nucleotide. Hence, quenching of the already weak fluorescence was observed with the purine bases, whereas the association of the cyclophane with pyrimidine bases TMP, TTP, and CMP resulted in a significant fluorescence light-up effect. Thus, it was demonstrated that the styrylpyridine unit is a useful and complementary fluorophore for the development of selective nucleotide-targeting fluorescent probes based on alkylamine-linked cyclophanes. Introduction Nucleotides play a crucial role in several biological processes, for example as essential building blocks in DNA replication and RNA synthesis [1,2]. Furthermore, they are essential in cell signaling, metabolism, and enzyme reactions as cofactors for NAD + and FAD and as energy carriers in the form of triphosphate nucleotides [3,4]. Therefore, the detection and monitoring of nucleotides are important tasks to contribute to the assessment and understanding of biochemical processes in living organisms [5][6][7][8][9]. Along these lines, photometric and electrochemical analysis, as well as 1 H NMR spectroscopic analysis, are routinely used methods for nucleotide detection; however, elaborate protocols, relatively expensive equipment, and limited sensitivity are drawbacks of these methods [10,11]. For this purpose, fluorescence spectroscopy is a useful and easily accessible analytical tool because it enables the efficient and sensitive detection of biologically relevant analytes with suitable fluorescent probes (chemosensors), which change their emission properties upon analyte binding [12][13][14][15][16][17][18][19][20]. Along these lines, fluorescent probes that can detect nucleotides by means of emission quenching or emission enhancement (light up) have been reported [21][22][23][24]. However, selective chemosensors for particular nucleotides are still needed, so the development of such fluorescent probes still represents a rewarding and challenging research field in chemistry [25][26][27]. The most abundant nucleotide is adenosine triphosphate (ATP), which plays an important role in the energy transport in living organisms [28,29] and as a main biochemical component in cancer cells, where it can either enhance or suppress tumor growth, depending on the concentration [30]. Consequently, several different methods and approaches for the efficient and selective detection of ATP have been reported [31][32][33][34][35][36][37][38]. On the contrary, the selective analysis and sensing of other nucleotides has been scarcely reported so far. 
For example, the selective photometric detection of thymidine triphosphate (TTP) relative to other mono-, di- and triphosphate nucleotides has been realized with gold nanoparticles and a p-xylylbis(Hg2+-cyclen) complex [39]. Likewise, cytidine triphosphate (CTP) has been shown to induce selective luminescence quenching of a terbium(III)-organic framework [40], and a polyhydroxy-substituted Schiff base receptor has been reported to be a selective fluorescent chemosensor for CTP and ATP [41]. More recently, a bisnaphthalimide receptor with a pyridine spacer has been introduced as a selective fluorescent probe for CTP [42]. Moreover, anthracene derivatives with two appended imidazolium groups have been reported whose emission is efficiently quenched by GTP [43]. Although some cyclophanes are already available for fluorimetric nucleotide detection, there is still room for further development. Specifically, variations of the aromatic unit appear promising because this part of the host molecule provides an essential binding site for π stacking with the nucleic base. Surprisingly, most employed aromatic subunits are fused polycyclic fragments with limited conformational flexibility, such as naphthalene or anthracene, whereas more flexible scaffolds with a resembling π surface, such as stilbenes or styryl-substituted hetarenes, have not been employed for this purpose so far. Along these lines, we proposed that the known 2-styrylpyridine unit may serve as a useful, complementary aromatic component in nucleotide-binding cyclophanes because it provides a flexible aromatic surface, which may enable a more variable π stacking, along with a decent dipole, which may increase the binding affinity by dipole-dipole interactions with the nucleic base. Herein, we report on the synthesis of a bis-styrylpyridine-based cyclophane and demonstrate that it may be used for the fluorimetric detection and differentiation of nucleotides at physiological pH.

Synthesis

The known dibromostyrylpyridine derivative 1 [70] was formylated by lithium-halogen exchange with n-BuLi and subsequent reaction with DMF to give the corresponding styrylpyridine bis-carbaldehyde 2 in 63% yield (Scheme 1, see Supplementary Materials). Condensation of the latter with diethylenetriamine and subsequent reduction of the tetraimine intermediate 3 with NaBH4 gave the macrocyclic polyamine 4 in a yield of 23%. The known derivative 1 was synthesized by a varied procedure and identified by comparison with the literature data [71], and the new compounds 2 and 4 were identified and fully characterized by NMR spectroscopy (1H, 13C, COSY, HSQC, and HMBC), elemental analyses, and mass spectrometry (Figures S2-S7). In all cases, the E-configuration of the alkene units in compounds 1, 2, and 4 was indicated by the characteristic coupling constants of the alkene protons (3J(H-H) = 16 Hz).

Scheme 1. Synthesis of cyclophane 4.

Solvent and pH-Dependent Absorption and Emission Properties

In MeOH solution, cyclophane 4 exhibited an absorption maximum at λabs = 314 nm and a fluorescence maximum at λfl = 379 nm with a low emission quantum yield (<0.01) (see Supplementary Materials).
The pH dependence of the absorption properties of cyclophane 4 was determined by spectrometric acid-base titrations in Britton-Robinson buffer (Figure 1). At neutral pH, the absorption maximum was at λabs = 314 nm. The absorbance increased both at lower (pH < 5) and higher (pH > 8) values, with the highest absorbance at pH 2. The absorption maximum also shifted with varying pH, from λabs = 321 nm at pH 2 to λabs = 314 nm at pH 7 and to λabs = 318 nm at pH 12. Furthermore, a slight shoulder at λabs = 364 nm was observed at pH 2, which steadily disappeared with increasing pH. The data from the photometric titration were used to determine the pKa values of 5.2 and 9.4. Another pKa value was estimated to be in the range of 2-3, as has been usually observed for resembling cyclophanes with the same diethylenetriamine linker [62]; however, no adequate fit was obtained for this region, so a more accurate value was not available.
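To illustrate how an apparent pKa can be extracted from absorbance-pH data of this kind, a minimal fitting sketch is given below. It assumes a single two-state (Henderson-Hasselbalch-type) transition and uses synthetic placeholder data; the real titration of 4, with several overlapping protonation equilibria, would require a multi-site model, and this is not the exact fitting protocol used for Figure 1.

```python
# Minimal sketch: extract an apparent pKa from absorbance-vs-pH data.
# The data points below are synthetic placeholders, not the measured titration of 4.
import numpy as np
from scipy.optimize import curve_fit

def titration_curve(pH, A_acid, A_base, pKa):
    """Two-state model: absorbance interpolates between the protonated (acidic)
    and deprotonated (basic) limiting values around the pKa."""
    frac_base = 1.0 / (1.0 + 10.0 ** (pKa - pH))
    return A_acid + (A_base - A_acid) * frac_base

pH = np.array([3.0, 4.0, 4.5, 5.0, 5.5, 6.0, 7.0, 8.0])
A = np.array([0.52, 0.50, 0.47, 0.43, 0.39, 0.36, 0.34, 0.33])  # placeholder absorbances

popt, pcov = curve_fit(titration_curve, pH, A, p0=[0.52, 0.33, 5.0])
print(f"apparent pKa = {popt[2]:.2f} +/- {np.sqrt(pcov[2, 2]):.2f}")
```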
The emission spectrum of cyclophane 4 revealed a broad maximum at λfl = 410 nm at pH 2. With increasing pH to 4, the emission intensity first increased by a factor of ca. 2 and reached its highest intensity, with a bathochromic shift of Δλfl = 5 nm. With the further addition of base (pH > 4), the fluorescence was strongly quenched by about 70%, with a hypsochromic shift of the emission maximum of Δλ = 27 nm at pH 8.5. At pH > 9, the emission intensity remained low, with a slight increase in the emission at pH > 10. Most notably, at neutral pH the emission of the styrylpyridine is already sufficiently quenched that this compound may be used as a fluorescence light-up probe for target nucleotides in the physiological pH range, that is, under conditions usually found in real biological samples.

Nucleotide-Binding Properties of 4
The association of the macrocyclic polyamine 4 with selected nucleotides was investigated by photometric and fluorimetric titrations with adenosine monophosphate (AMP), ATP, deoxyguanosine monophosphate (dGMP), thymidine monophosphate (TMP), TTP, and cytidine monophosphate (CMP) in cacodylate buffer solution at pH 7.2, that is, under conditions at which the emission is already very low (Figures 2 and 3). Upon addition of AMP, ATP, and dGMP to 4, the absorbance (λmax = 314 nm) decreased with the formation of a red-shifted absorption band (Δλ = 5 nm) and isosbestic points at λ = 323 nm, 320 nm, and 320 nm, respectively (Figure 2A). In the presence of these nucleotides, the already weak fluorescence of the cyclophane 4 was further quenched with different efficiencies, that is, with I/I0 of 0.46 (AMP), 0.59 (ATP), and 0.04 (dGMP) at saturation (Figures 2B and 4A). Moreover, the fluorescence maximum of styrylpyridine 4 was blue-shifted by Δλ = 34 nm on the addition of AMP and ATP, whereas no shift of the fluorescence maximum was observed with dGMP.
The binding constants were determined from the experimental binding isotherms of the photometric titrations. Thus, the experimental data were reasonably fitted to a 1:1 binding stoichiometry of nucleotide and 4, with logKb values of 4.1, 5.0, and 3.8 for AMP, ATP, and dGMP, respectively (Table 1). These values are in the same range as the logKb values for two similar pyrene-based diethylenetriamine cyclophanes, namely 3.00 and 4.15 with AMP, 5.48 and 5.55 with ATP, and 3.51 and 4.50 with dGMP [63], and slightly higher than the value observed with a similar anthracene-based cyclophane (logKb = 3.38 with ATP) [62]. When comparing mono- and triphosphate nucleotides, higher binding constants were also obtained with ATP than with AMP [63].

Titrations of the cyclophane 4 with TMP and TTP decreased the absorbance with red shifts of Δλ = 3 nm (Figure 3A). However, in contrast to the titrations with the other nucleotides (see above), the addition of TMP and TTP resulted in a significant increase and blue shift (Δλ = 45 nm) of the fluorescence band (Figure 3B). The fluorescence light-up effect is more pronounced with TMP (I/I0 = 2.72) than with TTP (I/I0 = 2.43) (Figure 4A). Upon the addition of CMP to 4, the absorption band remained essentially unchanged. At the same time, a fluorescence light-up effect was observed upon the addition of CMP along with a blue shift of the fluorescence maximum of Δλ = 41 nm; however, the increase of the fluorescence intensity (I/I0 = 1.23) was less pronounced than the one with TMP and TTP (Figure 4). Notably, the increased emission intensity of compound 4 upon complex formation with the pyrimidine nucleotides can be seen with the naked eye (Figure 4B). From the fluorimetric titration data, the limit of detection (LOD) of 4 was estimated to be 0.09 µM, 0.02 µM, and 0.04 µM for TMP, TTP, and CMP, respectively (Table S1).

The binding constants were determined from the fluorimetric titration data as logKb = 2.8, 3.2, and 2.3 for the 1:1 complexes with TMP, TTP, and CMP, respectively (Table 1, Figure S1). For comparison, the reported logKb values of similar pyrene- and anthracene-based cyclophanes are 4.77 and 5.16 [63], and 3.60 [62], for complexes with TTP, that is, somewhat higher than the values for cyclophane 4. Furthermore, the binding constants of cyclophane 4 are higher for the complexes with purine nucleotides than for the pyrimidine nucleotides, which is in accordance with a literature-known pyrene-based cyclophane [63].

Table 1. Absorption and emission properties of cyclophane 4 and its complexes with nucleotides, and the corresponding binding constants, logKb.
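The binding constants and detection limits reported above can, in principle, be extracted from fluorimetric titration data as sketched below. This is a hedged illustration, not the authors' workflow: the 1:1 isotherm with the exact quadratic solution for the bound fraction, the host concentration, the synthetic data, and the 3.3·σ/slope LOD criterion are all assumptions introduced here.

```python
# Illustrative sketch: fitting a 1:1 host-guest binding isotherm to fluorimetric
# titration data and estimating a limit of detection from the low-concentration
# slope. All numerical values are made up and only stand in for real data.
import numpy as np
from scipy.optimize import curve_fit

H0 = 10e-6  # total cyclophane concentration (M), assumed constant during the titration

def fraction_bound(G0, Kb):
    """Fraction of host bound in a 1:1 complex (exact quadratic solution)."""
    Kd = 1.0 / Kb
    b = H0 + G0 + Kd
    return (b - np.sqrt(b * b - 4.0 * H0 * G0)) / (2.0 * H0)

def isotherm(G0, Kb, F0, Finf):
    """Observed fluorescence, interpolating between free (F0) and fully bound (Finf) host."""
    return F0 + (Finf - F0) * fraction_bound(G0, Kb)

# Synthetic titration of 4 with a pyrimidine nucleotide (light-up response).
G0 = np.linspace(0, 5e-3, 25)
rng = np.random.default_rng(0)
F_obs = isotherm(G0, 10**3.2, 1.0, 2.4) + rng.normal(0, 0.02, G0.size)

popt, _ = curve_fit(isotherm, G0, F_obs, p0=(1e3, 1.0, 2.5), bounds=(0, np.inf))
print(f"logKb = {np.log10(popt[0]):.2f}")

# LOD estimate: 3.3 * (standard deviation of blank readings) / (initial slope).
slope = np.polyfit(G0[:6], F_obs[:6], 1)[0]
sd_blank = 0.02                      # assumed scatter of repeated blank measurements
print(f"LOD ≈ {3.3 * sd_blank / slope * 1e6:.2f} µM")
```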
Discussion
The pKa values of 2-3, 5.2, and 9.4 for cyclophane 4 are assigned to the eight available protonation sites, namely the amine and pyridine functionalities. Specifically, the pKa values of the secondary amines fall in the range of those of similar, known amino-containing macrocyclic structures [63]. Accordingly, the pKa values of the two central amino groups are estimated to be in the range of 2-3, and the pKa value of 9.4 is assigned to the four lateral amino groups. In addition, the pKa value of 5.2 relates to the two pyridine units, which is in accordance with the known pKa value of 5.0 for 2-styrylpyridine [73]. Overall, the acid-base titrations revealed the expected protolytic equilibrium resulting from the protonation of the amino functionalities and the pyridine unit in an acidic medium (Scheme 2). In particular, as has been shown for similar fluorophore-containing polyamine-linked cyclophanes [62,74], the emission of the styrylpyridine is efficiently quenched by a photoinduced electron transfer (PET) reaction of the electron-donating amine functionalities with the excited fluorophore, whereas upon protonation this deactivation pathway is suppressed and the emission intensity increases significantly [75]. Apparently, the pyridine unit does not interfere with this general process; however, under acidic conditions, the formation of the corresponding pyridinium may be responsible for the shifts of the emission maximum at lower pH values [76].

As compared with similar anthracene- and pyrene-based cyclophanes, which show a fluorescence light-up effect upon complexation of TTP, CTP, and ATP and fluorescence quenching with GTP [62,63], cyclophane 4 exhibits a different dependence of the fluorimetric response on the type of nucleotide. Namely, a fluorescence enhancement occurs upon binding of the pyrimidine nucleotides TMP, TTP, and CMP, whereas an effective quenching of the fluorescence results from association with the purine nucleotides AMP, ATP, and dGMP.
This observation may be explained by the specific pH- and structure-dependent emission properties of the cyclophane 4. Firstly, the amino functionalities of the linker units quench the emission of such cyclophanes by a PET reaction (see above) [62], which readily explains the low emission at the applied pH of 7.2. More importantly, cyclophane 4 exhibits two different emission maxima: a fluorescence maximum at λ = 429 nm in the unbound state and a blue-shifted one around λ = 384 nm upon complexation of the nucleotides. As it has already been observed with similar aminoalkyl-linked cyclophanes that these compounds tend to form emitting excimers [63], it is proposed that the red-shifted emission of 4 also originates from an intramolecular excimer formation between the two styrylpyridine units (Scheme 3). This proposal is in agreement with the excimer formation of similar azastilbene-type derivatives, which is accompanied by a red shift of the emission maximum [77-80]. Upon binding of the pyrimidine nucleobases to the cyclophane 4, the emission increases as a result of the formation of the host-guest complexes, presumably because the complexation of the nucleotide involves hydrogen bonding with the amino functionalities [81,82], which in turn suppresses the PET quenching of the photoexcited fluorophore and leads to increased fluorescence intensity. In addition, the accommodation of the nucleotide in the cavity of the cyclophane also inhibits the excimer formation, so only the blue-shifted monomer emission is detected. In contrast, the binding of purine nucleobases leads to emission quenching of cyclophane 4. This fluorescence quenching by purine nucleotides may be explained by a different binding mode of the purine nucleotides ATP, AMP, and dGMP, as compared with the one of the pyrimidine nucleotides, which leads to a fluorescence enhancement upon formation of the cyclophane-nucleotide complex [83-85]. At the same time, it cannot be excluded that the purine nucleotides bind in a similar mode as the pyrimidine nucleotides and that the fluorescence quenching by ATP, AMP, and dGMP is just the result of a stronger quenching efficiency of the purine bases. Accordingly, the latter have a much lower reduction potential than the pyrimidine bases [86,87] and can, therefore, induce an efficient fluorescence quenching by a photoinduced electron transfer reaction with the excited styrylpyridine.

To the best of our knowledge, this is the first reported cyclophane-based fluorescent probe that can discriminate between purine and pyrimidine nucleobases based on a clear light-up effect induced by the latter. However, it should be noted that a related anthracene-based derivative bearing two imidazolium-containing alkyl chains is known to show similar properties [43]. Because of the significant light-up effect of 4 upon binding to TMP and TTP, cyclophane 4 may be employed as a fluorescent probe for the detection of thymine-based nucleotides. Notably, the detection of nucleotides is accomplished under physiological conditions at pH 7.2, rendering cyclophane 4 also interesting for biological applications.
For comparison, only a few examples of cyclophanes have been explicitly reported that enable the detection of nucleotides at neutral pH [54,88], so there is still a demand to develop recognition systems for nucleotides that, like the one reported herein, operate in the physiological pH range.

Conclusions
The spectroscopic investigation of the nucleotide-binding properties of the cyclophane 4 revealed that the purine bases AMP, ATP, and dGMP bind with fluorescence quenching, whereas with the pyrimidine bases TMP, TTP, and CMP a clear, distinguishable fluorescence light-up effect was observed. Overall, we have demonstrated that the styrylpyridine unit is a useful and complementary fluorophore for the development of selective nucleotide-targeting fluorescent probes based on alkylamino-linked cyclophanes, especially considering that this probe operates in the physiological pH range. Therefore, further studies of the particular binding modes, as well as systematic variations of the substitution pattern, should enable the development of efficient chemical sensors for bioanalytical applications.
Materials and Methods
The commercially available chemicals (Alfa, Merck, Fluorochem, or BLDpharm) were of reagent grade and used without further purification. Nucleotides ATP (adenosine-5′-triphosphate disodium salt) and CMP (cytidine-5′-monophosphate disodium salt) were purchased from Feinbiochemika (Heidelberg, Germany), and nucleotides TMP (thymidine-5′-monophosphate disodium salt hydrate), TTP (thymidine-5′-triphosphate tetrasodium salt), AMP (adenosine-5′-monophosphate sodium salt), and dGMP (2′-deoxyguanosine-5′-monophosphate sodium salt hydrate) were purchased from Sigma-Aldrich (St. Louis, MO, USA). 1H NMR spectra were recorded with a JEOL ECZ 500 (1H: 500 MHz; 13C: 125 MHz) and a Varian VNMR S600 (1H: 600 MHz; 13C: 150 MHz) at T = 25 °C. The 1H NMR and 13C{1H} NMR spectra were referenced to an internal standard in CDCl3 [TMS: δ(1H) = 0.00 ppm, δ(13C) = 0.00 ppm]. Structures were assigned with additional information from gCOSY, gHSQC, and gHMBC experiments, and the spectra were processed with the software MestreNova. The mass spectra were recorded with a Finnigan LCQ Deca (driving current: 6 kV, collision gas: argon, capillary temperature: 200 °C, support gas: nitrogen) and an Orbitrap mass spectrometer Thermo Fisher Exactive (driving current: 3.5 kV, capillary temperature: 300 °C, capillary voltage: 45 V, injection rate: 5 µL/min, scanning range: 150-750 m/z, and resolution: ultra-high) and processed with the software Xcalibur. The CHNS analysis data were determined in-house with a HEKAtech EuroEA combustion analyzer. The melting points were measured with a melting point apparatus BÜCHI 545 (Büchi, Flawil, CH) and are uncorrected. The absorption spectra were recorded on a Varian Cary 100 Bio absorption spectrometer with Hellma quartz glass cuvettes 110-QS (layer thickness d = 10 mm). The emission spectra were recorded on a Varian Cary Eclipse fluorescence spectrometer with Hellma quartz glass cuvettes 115 FQS (layer thickness d = 10 mm). All measurements were recorded at T = 20 °C, as adjusted with a thermostat, if not stated otherwise. The sample solutions in the titration experiments were mixed with a reaction vessel shaker Top-Mix 11118 (Fisher Bioblock Scientific). E-Pure water was obtained with an ultrapure water system D 4632-33 (Wilhelm Werner GmbH, Leverkusen, D) with filters D 0835, D 0803, and D 5027 (2×).
2023-05-14T15:10:16.015Z
2023-05-11T00:00:00.000
{ "year": 2023, "sha1": "5148a037021135829e25f380fc7301945af43f2e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/chemistry5020082", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7328d364529bc0d57f4805bc2c7419dd65a88049", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
119285101
pes2o/s2orc
v3-fos-license
Gamma-ray evidences of the dark matter clumps We discuss the possibility of identification of point-like gamma-ray sources (PGS) with small scale dark matter (DM) clumps in our Galaxy. Gamma-rays are supposed to originate from annihilation of DM particles in the clumps, where annihilation rate is supposed to be enhanced, besides higher density, due to smaller relative velocities $v$ of DM particles. We parameterized the annihilation cross section $\sigma_\text{ann}(v)$ in the form of an arbitrary power law dependence on the relative velocity $v$ with/without factor of Sommerfeld-Gamow-Sakharov, implying existence of a new Coulomb-like interaction. Adopting different parameters of cross section and clump, satisfying condition $\Omega\lesssim 0.2$ on density of DM particles of question, they are constrained from comparison with Fermi/LAT data on unidentified PGS as well as on diffuse $\gamma$-radiation; results are applied to concrete DM candidates. Such analysis is found to be sensitive enough to existing uncertainty in the density profiles of DM in the clump what can provide a tool for their test. Also we discuss possibilities when gamma-radiating clump changes visibly its position on celestial sphere and it is seen as a spatially extended gamma-source (EGS), what can be probed in future experiments like Gamma-400. Introduction From the first articles revealing the indirect effects of the cold dark matter (CDM) in the form of heavy neutral leptons [1][2][3][4][5][6][7][8] or supersymmetric particles [7,9,10], such indirect effects had been the subject of intensive studies in the cosmic ray (CR) data. It was shown that the DM particles could form the hierarchic structures over a wide range of scales and masses (from small scale clumps to large scale structures) [11][12][13][14][15][16]. The annihilation of DM particles within these structures can give the cosmic ray signals [4][5][6][7][8][17][18][19][20][21][22]. In the clumps the annihilation rate should be enhanced due to higher density and possibly due to amplification of annihilation cross section at small relative velocities of DM particles, which is especially small in the lightest clumps, which are likely to be the most abundant. The mentioned factors can lead to that the clumps, located in a neighborhood of Solar system, are manifested as discrete (basically as point-like) gamma-ray sources [19,21,[23][24][25]. * k-belotsky@yandex.ru † kirillov-aa@yandex.ru ‡ khlopov@apc.univ-paris7.fr In this paper we continue the previous study of given effect [25] and, mainly, make more accurate calculations, take into account diffuse γ-radiation, consider possibility of observation of spatially extended sources in the light of future experiments. DM annihilation in the CDM clumps Predictions of density profile inside the clumps suffer with some uncertainties [26][27][28][29]. For the most of calculations in this paper we use profile obtained in [19,21,23], which gives rather minimal prediction for γ-flux from the clump. Comparison with other profile models is given below in terms of the numbers of predicted PGS. The chosen profile (indicated further as BGZ) was taken in the form where η = 1.8, r 0 = 0.05R, R ≈ 10 18 (M/M ) 1/3 cm with M being the clump mass [19,21]. It is supposed that only minor component of clumps can survive until the present time [21]. For our estimation ξ = 0.002 is taken as the clumps fraction from the full density of DM in Galaxy. 
Here we study γ-radiation effect for only minimal clump mass, formally assuming that all DM clumps are with this mass. In fact, they are predicted to be most abundant [19,21,30], however effect of clumps (sub-halos) from a high-mass tail of mass distribution can be also noticeable [31]. A smallness of relative velocity v of DM particles concentrated in the clumps may strongly affect to annihilation rate if the corresponding cross section σ ann depends from v [32,33]. We choose the cross section at the parameterized form given below, to cover a wide class of models of DM particles: Here β is a free parameter taking (for generality) the continuous range of values (as usual β = 1 means the s-wave amplitude only, β = −1 means the amplitude from s-and p-waves), σ 0 at given β is determined by cosmological density of the particles Ω. The factor C(v, α) takes explicitly into account a possible Coulomb-like interaction (we will refer to it as "y-interaction") of DM particles, which leads to a Sommerfeld-Gamow-Sakharov enhancement [34][35][36][37] and has the form: Here α is the fine structure constant of additional interaction. Such interaction may influence (decrease) relic density a little, but may significantly enhance the annihilation in the present Universe where the particle velocities are small [38][39][40][41][42][43]. Annihilation effects become noticeable even for a subdominant component with Ω Ω CDM as it takes place in case of heavy neutrinos [44] to be shown below. We suppose that an active (annihilating) component of DM may be both dominant and subdominant, i.e. Ω ≤ Ω CDM with Ω CDM ∼ 0.2 being the total relative density of cold dark matter in Universe. In estimation of cosmological density Ω we follow to the standard approach [45,46]. It is worth to note that the given scheme does not take into account possibility of binding pairs of considered particle-antiparticle due to y-interaction. The rate of such recombination may exceed expansion rate due to high recombination cross section, and this process becomes crucial, if DM particles decoupled from ambient plasma, leading inevitably to annihilation. It may strongly suppress abundance of these DM particles [38]. DM clumps as gamma-ray sources In our estimations it is assumed that DM particle of question is of Dirac type 1 with mass m ∼ 100 GeV. If annihilating DM particles do not constitute all DM (Ω < Ω CDM ) then their contribution to density of clumps is assumed to be proportional to Ω/Ω CDM (eq. (5)). We do not specify annihilation channel of photon production, assuming that their averaged multiplicity for energy E γ > 100 MeV is N γ = 10. It is quite typical value for high energy processes at respective energy release. The photon flux at distance l from the clump center is given by where the particles/antiparticles number density is Note that the fraction of subdominant DM particles should be suppressed in the clumps of mass M < M min , if they are, where M min is the minimal mass which could be formed by considered DM particles if they prevailed in density. In our study we do not take into account this. The value σ ann v is determined taking into account velocity distribution of DM particles inside the clump which is assumed to be Maxwellian one with "virial" temperature T vir = GM m/2R. γ-Radiation may be registered by LAT, if E γ > 100 MeV [52] and their flux exceeds the point source sensitivity F min ≈ 3 · 10 −9 cm −2 sec −1 . 
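The enhancement of the annihilation rate at the small virial velocities of light clumps can be estimated numerically. The sketch below assumes the commonly quoted form of the Sommerfeld-Gamow-Sakharov factor, C(v) = (2πα/v)/(1 − exp(−2πα/v)) with v the relative velocity in units of c, averaged over a Maxwellian set by the virial temperature T_vir = GMm/2R; the explicit parameterization used in the text is not reproduced here, and its conventions may differ from this sketch by order-one factors.

```python
# Illustrative estimate of the thermally averaged Sommerfeld-Gamow-Sakharov
# enhancement inside a light clump. Assumptions: standard form of C(v), a
# Maxwellian relative-velocity distribution, R ≈ 1e18 (M/M_sun)^(1/3) cm.
import numpy as np
from scipy import integrate
from scipy import constants as const

M_sun = 1.989e30          # kg
c = const.c

def sommerfeld(v, alpha):
    """Enhancement factor for relative velocity v (in units of c)."""
    x = 2.0 * np.pi * alpha / v
    return x / (1.0 - np.exp(-x))

def thermal_average(alpha, v0):
    """<C(v)> over a Maxwellian of relative speeds with most-probable speed v0 (units of c)."""
    f = lambda v: sommerfeld(v, alpha) * 4*np.pi * v**2 * np.exp(-(v/v0)**2) / (np.pi**1.5 * v0**3)
    val, _ = integrate.quad(f, 1e-12, 20 * v0)
    return val

# Virial kinematics: T_vir = G*M*m/(2R) implies a one-dimensional velocity
# dispersion sigma^2 = G*M/(2R) (the DM particle mass m cancels).
M_clump = 1e-6 * M_sun
R = 1e18 * (M_clump / M_sun) ** (1 / 3) * 1e-2      # clump radius in m
sigma = np.sqrt(const.G * M_clump / (2.0 * R)) / c  # 1D dispersion, units of c
v0_rel = 2.0 * sigma                                # most-probable relative speed

alpha_y = 1.0 / 137.0     # assumed strength of the hypothetical y-interaction
print(f"sigma/c ~ {sigma:.2e}, <C> ~ {thermal_average(alpha_y, v0_rel):.2e}")
```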
The value F_min allows one to calculate the maximal distance l_max at which a clump can be registered as a γ-source. For the BGZ profile it gives l_max ∼ 10^−3 pc for β = 1.5, σ_0 = 10^−35 cm² and M = 10^−10 M⊙ without the y-interaction, and l_max ∼ 10 pc with it for the same parameters; for the Moore model the corresponding values are l_max ∼ 10^−1 pc and ∼ 10^2 pc, respectively. All the obtained l_max are much smaller than the Galactic size, which justifies the assumption that the clump number density n_cl ≈ const and corresponds to the local one. The number of clumps which may be detected by LAT is determined by this local clump number density, with ρ_loc = 0.3 GeV/cm³ adopted for the local DM density, and by the volume enclosed within l_max. The analogous results for some other profile models are given in Tab. 1. For all the models the "core" radius and the clump size were taken the same as in the BGZ model. It is seen that the BGZ model predicts the minimal number of PGSs, so the other models are more sensitive to the observational data, and the comparison with the data becomes a very important tool for probing them.

Table 1: The number of PGS as predicted for some density profiles, in units of that for the BGZ profile.
Model:    Cored Isothermal [47] | Burkert [48] | NFW [49]  | Einasto [50] | Moore [51]
N/N_BGZ:  1.2                   | 6.0×10^2     | 2.1×10^3  | 3.3×10^3     | 1.7×10^4

The spatial distribution of clumps within the distance l_max from the Earth is expected to be homogeneous; therefore their distribution on the celestial sphere should be isotropic. The distribution of unidentified PGS registered by Fermi LAT [53] is almost isotropic except for the region of the galactic plane. The isotropic component includes ∼100 sources. Supposing that all of them can be related to dark matter clumps, it is possible to determine the allowed regions of the parameters β and σ_0 for typical clump masses from 10^−10 to 10^−6 M⊙ [22]. The results are shown in Fig. 1 in terms of the factor ζ, which absorbs the uncertainties of the chosen parameters of the DM particle properties (not of the clump density profile). The predicted number of the clumps showing themselves as PGS depends on it as N ∝ ζ^(−2/3). It is useful to take this into account while considering the results for other models (see Tab. 1). The clumps which are not visible (farther than l_max) should contribute to the diffuse γ-radiation. The γ-flux from them in a given solid angle can be expressed through the quantities F and P introduced in Eq. 4; here l_halo is the distance to the edge of the halo along the line of sight and its effective value l_halo^eff ≈ 10 kpc is typical for many halo density profiles, while l_max is negligible with respect to l_halo. One requires that this flux does not exceed Φ_exp, the diffuse γ-background measured by LAT [54]; Eq. 10 gives the corresponding upper limit in the plot of Fig. 1. As seen, the case of clump mass M = 10^−10 M⊙ is fully excluded, while the case of M = 10^−6 M⊙ is constrained but up to ∼10 PGS are still possible. Higher masses avoid such constraints. Several specific models of annihilating DM particles have been considered: neutralino [55], heavy neutrino [44,56] (with the y-interaction and without it), Kaluza-Klein particles [57], and dark atoms OHe [58,59]. All candidates except for a heavy neutrino (ν_4) with the additional interaction cannot explain the LAT data but, at the same time, do not contradict them in the case of the BGZ profile. The case of ν_4 is indicated in Fig. 1(b); as seen, a part of the non-identified point-like LAT sources could be attributed to clumps in this case. It is worth noting that the heavy neutrino parameters are strongly restricted by underground experiments [44,61]. At the same time, the prediction of the ν_4 relic density suffers from the uncertainty connected with recombination of pairs of y-interacting neutrino-antineutrino in the early Universe [44].
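A back-of-the-envelope version of the source-count estimate can make the profile dependence in Tab. 1 concrete. The sketch assumes N ≈ n_cl·(4π/3)·l_max³ with n_cl = ξρ_loc/M; this is the natural reading of the text, but it is an assumption rather than a reproduction of the equations omitted from this extraction, and the printed numbers only illustrate how strongly N reacts to l_max.

```python
# Rough, assumption-based estimate of the number of clumps visible as point
# sources: N ~ n_cl * (4*pi/3) * l_max^3 with n_cl = xi * rho_loc / M_clump.
import numpy as np

GeV_in_g = 1.783e-24
pc_in_cm = 3.086e18
Msun_in_g = 1.989e33

rho_loc = 0.3 * GeV_in_g * pc_in_cm**3 / Msun_in_g   # local DM density in M_sun / pc^3
xi = 0.002                                           # surviving clump mass fraction (from the text)

def n_visible(M_clump_Msun, l_max_pc):
    """Expected number of clumps within l_max, assuming a uniform local clump density."""
    n_cl = xi * rho_loc / M_clump_Msun               # clumps per pc^3
    return n_cl * (4.0 / 3.0) * np.pi * l_max_pc**3

# BGZ profile, M = 1e-10 M_sun, without the y-interaction (l_max ~ 1e-3 pc)
print(f"N ~ {n_visible(1e-10, 1e-3):.1e}")
# The same mass with the y-interaction (l_max ~ 10 pc)
print(f"N ~ {n_visible(1e-10, 10.0):.1e}")
# Matching the ~100 unidentified isotropic LAT sources therefore singles out a
# narrow band of (sigma_0, beta); cf. Fig. 1 of the text.
```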
Some possible features of the model 4.1 Spatially extended sources The distribution of clumps does not exclude a possibility of their presence in close vicinity to Solar system. Therefore an angular size of such the sources can exceed the angular resolution of detector δ (corresponding to the solid angle, say, Ω δ = π(δ/2) 2 ). In the capacity of criterion of extended gammasource (EGS) we choose the following. The minimal amount of photons needed to recognize a source over background from a region Ω δ is defined by the minimal point source flux F min . Statistically significant signal from region with the size of νΩ δ must be then at least > √ νF min . For clump candidate to EGS we require fulfilment of similar condition for the circle region of πδ 2 = 4Ω δ size around the clump center excluding the π(δ/2) 2 center part: Here for minimal flux a little harder condition was taken than it should be proceeding from the number of Ω δ regions which the ring [δ/2 . . . δ] covers, i.e. √ 3F min . However, this criterion (11) is softer than requirement F (Ω δ ) ≥ F min for several adjacent Ω δ re- gions simultaneously or even F (νΩ δ ) ≥ νF min for all the νΩ δ region wholly, and it needs a special analysis of observation data. Based on Eq. (11) we find that Fermi LAT can hardly observe any clump as non-point-like source, what does agree with observation data. However, it becomes possible at the improved in future experiment values of F min and δ. At the fig. 3 we show the regions of parameters of clump mass M and β satisfying to data on diffuse and discrete sources of γ-radiation. Dashed line shows EGS discovery potential at the achievement of δ = δ G400 ≈ 0.3 • (vs δ LAT ≈ 0.6 • for Fermi LAT at E γ 1 GeV [52]) and the same F min as for Fermi LAT. In principle, such values can be reached for Gamma-400 detector [62] at some energy interval. EGS discovery will help to test the clump model. Note, that improvement (minimization) of F min (in perfect case, minimal flux per unit of solid angle) is found to be more promising from viewpoint of EGS discovery than improvement of δ only. However, at better δ one can partially compensate low sensitivity to photon flux by taking several Ω δ region. "Traveling" sources Provided DM clump accounts for PGS, there can be sources which change visibly its position on ce-lestial sphere within time of observations (the time between EGRET experiment [63] and Fermi LAT is ∼10 years), due to its closeness to Solar system. Clump velocity in Galaxy is v ∼ 250 km/s. For 10 years they travel the distance about ∆ ∼ 2.6 · 10 −3 pc. Let us estimate the number of clumps with visible movement. The number of clumps per the spherical layer r ÷ r + dr and per interval cos θ ÷ cos θ + d cos θ, where θ is the angle between radius-vector r (originating from observer) and vector of clump velocity v, is Here we refer to angular distribution of v as isotropic one with θ ranging from 0 to π. Integration (12) gives the distribution in angle which clump travels on the sky, Here we changed the variables r and cos θ to φ through integration with δ-function (divergence at φ → 0 is a consequence of our approximation n cl = const for 0 < r < ∞). 
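The traveling-source probability can also be checked with a simple Monte Carlo instead of the analytic distribution above: clumps are thrown uniformly into a sphere around the observer, given isotropic 250 km/s velocities, and their angular displacement over the 10-year baseline is compared with the combined angular resolution. The sphere radius used below is an arbitrary illustrative choice, and the absolute normalization (how many of these clumps are actually bright enough to be seen) is deliberately left out.

```python
# Monte Carlo sketch of the "traveling source" fraction. Purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 200_000
r_max_pc = 0.05            # assumed maximal distance of detectable clumps (pc)
v_kms = 250.0              # clump velocity in the Galaxy (from the text)
t_years = 10.0             # EGRET -> Fermi/LAT baseline (from the text)
delta_resol_deg = 1.47     # combined EGRET+LAT angular resolution (from the text)

# displacement over the baseline, in pc (1 km/s over 1 yr ~ 1.023e-6 pc)
shift_pc = v_kms * t_years * 1.023e-6

# positions uniform in a ball: r ~ r_max * U^(1/3); velocity direction isotropic.
r = r_max_pc * rng.random(n_trials) ** (1.0 / 3.0)
cos_theta = rng.uniform(-1.0, 1.0, n_trials)          # angle between r and v

# transverse displacement as seen by the observer, converted to an angle
transverse = shift_pc * np.sqrt(1.0 - cos_theta**2)
angle_deg = np.degrees(np.arctan2(transverse, r))

frac = np.mean(angle_deg > delta_resol_deg)
print(f"fraction of clumps moving by more than {delta_resol_deg} deg: {frac:.3f}")
```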
Estimation of the number of the moved (visibly) sources gives Here φ max 1 does not affect the result, φ min = max(δ resol , δ lim ), where δ resol = δ 2 EGRET + δ 2 LAT ∼ 1.47 • with δ EGRET ≈ 1.34 • and δ LAT ≈ 0.6 • being the angular resolutions of detectors EGRET and LAT for E γ 1 GeV, δ lim ∼ ∆/l max corresponds to angular shift of the clump at maximal observable distance. For maximal estimations in (14) we put φ min = δ resol , that gave result independent on the model of the clump density profile. In case of φ min = δ lim > δ resol , all the visible clumps (if they are) should be traveling. Thus, as seen from (14), for mass of clumps M ∼ 10 −7 M the probability to find the moved sources becomes noticeable (for any density profile). The value M ∼ 10 −7 M is close to the minimal one which is allowed from data on diffuse γ-background ( fig. 3). However, time of observation(s) grows and future experiment like Gamma-400 will work at better angular resolution. Both from the time and from the resolution the expected number of traveling sources (TS) depends as a cube power (see (14)). It challenges for future experiment to probe the model with searching for TS for clump mass up to ∼ (10 −6 ÷ 10 −5 )M . Even now, one may consider time interval of 20 years (from the start of EGRET) and expect to find several TS. In case of N trav 1 the movement of the sources should have a regular character accounted for by solar system motion around Galactic center (GC). Respective data analysis would provide much more strong test for discussed model, since virtually all the known astrophysical explanations of PGSs do not relate them with objects of halo (but rather with galactic disc or distant galaxies, in case of which observations are insensitive to solar system motion around GC). Existing difference in data on non-identified PGSs of EGRET and LAT (LAT confirms existence of only 30-40% of the sources from catalogue 3EG [53,63] and observes new sources which had not been registered by EGRET) could be explained by effects of traveling clump-sources in small part. Conclusion In this paper we refine the previous results [25] and have shown that DM clumps could be point-like (and extended) sources of the γ-radiation and they can partially explain non-identified γ-sources, registered by LAT and EGRET. The proposed method allows to estimate an amount of accessible to observation of DM clumps for various models of DM particle and density profile. Note that the suppression of the subdominant fraction of DM particles in clumps of mass M < M min (if they are) has not been taken into account and requires special research. The values of parameters σ 0 and β, at which DM model is either consistent or inconsistent with the LAT data, have been determined for the most "conservative" model BGZ [21]. The high sensitivity of the predictions to the choice of density profile model is shown. The clumps, situated in a close vicinity of Solar system, may account partially for noncoincidences between catalogues EGRET and LAT (the sources, registered by EGRET and not confirmed by LAT) and also for a spatially extended gamma-ray sources which can be detected by future gamma-telescopes. Possibilities that gamma-radiating clump changes visibly its position on celestial sphere and it is seen as a spatially extended gamma-source can play a role of distinctive features, to be probed in future experiments. 
It would allow one to distinguish this scenario from alternative models of the possible gamma-source origin, in particular the model associated with primordial black hole clusters [64,65].
2012-12-25T21:54:14.000Z
2012-12-25T00:00:00.000
{ "year": 2012, "sha1": "d17d1d3ee3d4c9890e85a416a61b164004de7c04", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1212.6087", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d17d1d3ee3d4c9890e85a416a61b164004de7c04", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
15453333
pes2o/s2orc
v3-fos-license
Association of Blood Cadmium Level with Cardiometabolic Risk Factors and Liver Enzymes in a Nationally Representative Sample of Adolescents: The CASPIAN-III Study Introduction. This study aimed to determine the association of blood cadmium level with cardiometabolic risk factors and liver enzymes in adolescents. Methods. This case control study comprised 320 Iranian adolescents, 160 with metabolic syndrome and an equal number of controls. They were selected from participants of a nationwide survey entitled the CASPIAN-III study. Cadmium was measured by atomic absorption method. Results. The mean age of the case and control groups was not significantly different (15.3 ± 2.6 versus 14.63 ± 2.5 years, resp., P > 0.05). The mean cadmium level was near double-fold higher than the standards of the World Health Organization, without significant difference between the MetS and control groups (10.09 ± 2.21, 9.97 ± 2.38 μg/L, resp., P > 0.05). Cadmium level had positive but nonsignificant correlations with diastolic blood pressure, serum triglycerides, fasting blood glucose, LDL-C, and liver enzymes. Conclusion. Cadmium level had positive but nonsignificant association with some cardiometabolic risk factors and liver enzymes. The associations did not reach statistical significant level, and this may be because of the high levels of cadmium in both groups studied or because of the young age group of participants. Controlling environmental pollutants shall be a priority for the prevention of chronic diseases. Introduction Metabolic syndrome (MetS) is an emerging health problem at global level and increases the risk of most chronic diseases. It origins from early life and consists of various components including obesity, elevated blood pressure, elevated serum glucose, and dyslipidemia in terms of increased triglycerides and reduced high-density lipoprotein cholesterol (HDL cholesterol) levels [1]. It is no more limited to the western countries and adult populations [2,3]. Asians have an ethnic predisposition to MetS, and it is one of health concerns in Iran [4,5]. MetS is a multifactorial condition, and in addition to genetic and lifestyle factors, environment influences the development of this disorder [6]. Heavy metals are one of the environmental factors that may have a role in this regard. Heavy metals or toxic metals such as mercury, lead, and cadmium have no biological function in human system and are potentially toxic even at trace concentrations. Cadmium can enter into blood stream by eating and drinking cadmium-contaminated food or water and/or by breathing cadmium-contaminated air [7][8][9]. Lee and Kim reported for the first time that blood cadmium level is a risk factor for MetS [9]. Various studies showed that urinary cadmium levels are significantly and dose dependently associated with both impaired fasting glucose and diabetes and even can lead to diabetic nephropathy [10,11]. A study in Pakistan revealed that high cadmium levels in biological samples of diabetic women may play a role in the pathogenesis of diabetes mellitus and may also impact on their neonates [12]. With the advent of large-scale metal mining and smelting, as well as fossil fuel combustion in the industrial countries, the emission rate of heavy metals has increased dramatically [13]. Both MetS and cadmium exposure and accumulation in the body start at young age [14,15]. Therefore, a relationship may exist between cadmium and MetS from childhood. 
This study aimed to compare the serum cadmium level, cardiometabolic risk factors, and liver function tests in adolescents with and without MetS.

Methods
This case control study was conducted as a substudy of the third survey of the national school-based surveillance system entitled Childhood and Adolescence Surveillance and PreventIon of Adult Noncommunicable disease (CASPIAN-III) study (Caspian is the name of the world's largest lake, located in Northern Iran). The main study was approved by the institutional review boards at national and provincial levels. Written consent and oral assent were obtained from students and their parents, respectively. The current substudy was conducted on blood samples collected in the main study and was approved by the Ethics Committee of Isfahan University of Medical Sciences. This study was performed in accordance with the ethical standards of the Helsinki Declaration. The main study was conducted as a school-based nationwide health survey among 5570 students aged 10-18 years, who were recruited by multistage random cluster sampling from urban and rural areas of 27 provincial counties in Iran. Students with a history of any acute or chronic disease or any medication use were not included in the study [16]. A trained team of health professionals conducted the physical examination under standard protocols by using calibrated instruments. Weight, height, and waist circumference (WC) were measured. Body mass index (BMI) was calculated as weight (kg) divided by height squared (m2). Blood pressure was measured under a standard protocol [17]. For blood sampling, students were invited to the health center nearest to the school. Fasting venous blood samples were centrifuged, and fresh sera were analyzed for fasting blood glucose (FBG), lipid profile, and liver function tests, that is, alanine aminotransferase (ALT) and aspartate aminotransferase (AST), by using Pars Azmoon reagent kits (Tehran, Iran). For measuring cadmium, frozen sera of 160 participants with MetS and an equal number of healthy controls were used. Cadmium levels were determined by atomic absorption spectrophotometry using hollow cathode lamps. Similar to the first survey of the CASPIAN study [18], we used the definition provided by Cook et al. [19]. This definition is based on criteria analogous to those of the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults, Adult Treatment Panel III (ATP III) [20]. It defines MetS as having at least three of the following criteria: WC at or above the 90th percentile value for age and sex; SBP and DBP at or above the 90th percentile for age, sex, and height; the midpoint value for HDL-C (≤40 mg/dL), used as a 10th percentile value; and the midpoint value for TG (≥110 mg/dL), taken as the 90th percentile value for age. FBG levels of ≥100 mg/dL were considered to be high [21].

Statistical Analyses. Statistical analyses were performed using the SPSS statistical package version 18 for Windows. Chi-square and independent-sample t-tests were used to compare categorical and quantitative data, respectively. Correlation models were used to assess the relationships between the diagnostic components of MetS and cadmium concentration. P values of <0.05 were considered statistically significant.
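The statistical workflow described above (group comparison plus correlation) can be reproduced in a few lines; the sketch below uses SciPy in place of SPSS and synthetic values drawn around the reported means, not the actual CASPIAN-III data.

```python
# Illustrative sketch of the analyses described in the Methods: t-test and
# chi-square for group comparisons, Pearson correlation for cadmium vs. a
# continuous risk factor. All numbers are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 160

# cadmium (µg/L) in MetS cases and controls, drawn around the reported means
cd_mets = rng.normal(10.09, 2.21, n)
cd_ctrl = rng.normal(9.97, 2.38, n)
t, p = stats.ttest_ind(cd_mets, cd_ctrl)
print(f"independent-samples t-test: t = {t:.2f}, P = {p:.3f}")

# correlation of cadmium with a continuous risk factor (e.g. diastolic BP)
dbp = 70 + 0.3 * (cd_mets - 10) + rng.normal(0, 8, n)
r, p_r = stats.pearsonr(cd_mets, dbp)
print(f"Pearson correlation: r = {r:.2f}, P = {p_r:.3f}")

# chi-square for a categorical comparison (e.g. sex distribution by group)
table = np.array([[80, 76], [80, 84]])      # made-up 2x2 counts
chi2, p_c, dof, _ = stats.chi2_contingency(table)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, P = {p_c:.3f}")
```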
Results
The study population consisted of 320 adolescents (160 with MetS and 160 healthy controls). The mean age of the case and control groups was not significantly different (15.3 ± 2.6 versus 14.96 ± 2.51 years, respectively, P > 0.05). The mean cadmium level was nearly twofold higher than the standards of the World Health Organization [22], without a significant difference between the MetS and control groups (10.09 ± 2.21 and 9.97 ± 2.38 µg/L, respectively, P > 0.05). Table 1 presents the characteristics of the study population. BMI, total cholesterol (TC), TG, FBG, ALT, SBP, and DBP were significantly higher in the MetS group than in controls. The corresponding figures were not significantly different for AST, HDL-C, and low-density lipoprotein cholesterol (LDL-C). According to the regression analysis, cadmium level had a positive but nonsignificant relationship with LDL-C, TG, FBG, ALT, AST, and DBP (Table 2).

Discussion
We investigated the association of cadmium level with cardiometabolic risk factors, MetS, and liver function tests in a nationally representative sample of Iranian adolescents. To the best of our knowledge, this study is the first of its kind in the pediatric age group. Cadmium level was nearly twofold higher than standard levels [22] in all of the population studied. However, cadmium level was not significantly different among adolescents with and without MetS. Likewise, cadmium had a positive but nonsignificant association with liver function tests and most cardiometabolic risk factors. This nonsignificant association may be because of the high levels of cadmium in both groups with and without MetS. In addition, it is suggested that the adverse health effects of cadmium may become apparent only at older ages. Children can be exposed to cadmium through contaminated air, water, soil, food, consumer products, and secondhand smoke [23]. The estimated half-life of cadmium is about 10 to 30 years, and over time it accumulates in different organs such as the kidney, liver, bone marrow, and muscles, and these organs could be a source of cadmium continuously released into the bloodstream [24,25]. Contrary to our results, a study in Korea revealed that blood cadmium levels increased the risk of MetS in adults [9]. It is well documented that chronic cadmium exposure may cause impaired fasting glucose and diabetes in humans [26,27]. Heme oxygenase-2 (HO-2) acts as a protective factor against type-2 diabetes and obesity; cadmium has the propensity to alter its catabolism and may increase the risk of diabetes [28]. We did not find any significant association of cadmium with MetS and FBG; this may be because of the young age group studied, as such an association may develop over time. Some studies have reported blood cadmium level as a risk factor for prehypertension in both women and men [29]. Cadmium concentrates in the kidney and may induce proteinuria and renal dysfunction, which in turn may cause hypertension. Moreover, renal cadmium reduces CYP4A11 and PPARs, which may be related to hypertension and sodium retention [30,31]. We found a positive association between cadmium and blood pressure, but the weak and nonsignificant correlation may be because of the young age of the study participants, and longitudinal studies are necessary to assess the long-term effects of cadmium on blood pressure. In our study, the association of cadmium level with serum lipid profile was weakly positive but nonsignificant; this may be because the cadmium level was considerably high in both groups with and without MetS, without a significant difference between the two groups. Experimental studies have shown that cadmium exposure induces alterations in lipid profiles [32][33][34].
No epidemiological study has been performed in this regard. However, some studies showed that cadmium levels in blood and urine are independent factors associated with the development of atherosclerotic plaques by the influence on selected lipid metabolism parameters [35][36][37]. Environmental factors have various health impacts on risk factors of noncommunicable diseases [38] even in children and adolescents [39,40]. Different sources of pollutants should be controlled to prevent their short-term and longterm adverse health effects. Study Limitations and Strengths. The main limitation of this study is its cross-sectional nature, so the associations of different variables should be considered with caution. The study strengths are the novelty of studying the association of cadmium with cardiometabolic risk factors and liver enzymes in the pediatric age group and using data of a nationally representative group of adolescents, which would increase the generalizability of the study findings. Conclusion Cadmium level was considerably high in both groups of adolescents with and without MetS. It had positive but nonsignificant association with cardiometabolic risk factors and liver enzymes. This finding may be because of the high levels of cadmium in both groups studied or because of the young age group of participants. Controlling environmental pollutants shall be considered as a health priority for primordial/primary prevention of noncommunicable diseases.
2017-08-27T06:58:17.333Z
2013-05-16T00:00:00.000
{ "year": 2013, "sha1": "c100c604169af967e953807220f1914d971b8b38", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jeph/2013/142856.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "78c4640618e140c9b758aaad50d4353495d0d4a2", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
230547353
pes2o/s2orc
v3-fos-license
CRANIAL ULTRASOUND: EFFICIENT SCREENING TOOL FOR EARLY DETECTION OF BRAIN INJURY IN PRETERM INFANTS

Introduction
Preterm birth remains a major challenge in perinatal health. The World Health Organization (WHO) in 2018 stipulated that every year an estimated 15 million babies are born preterm (before completing 37 weeks of gestation), and this number is rising. 1,2 Preterm birth complications are the leading cause of death among children under 5 years of age, responsible for approximately 1 million deaths in 2015. Preterm births are the major cause of neonatal mortality, and nearly half of all cases have congenital neurological disorders, including cerebral palsy. Although all births at less than 37 weeks are classified as preterm, births at less than 32 weeks' gestation carry the highest mortality and burden of neonatal disorders. 3,4

Brain injury is a major complication of preterm birth. It puts the infants who survive at risk of developmental disorders, cognitive dysfunction, and behavioral impairment. The brain injury of preterm infants differs from that of full-term infants. The most significant developmental differences that occur in the third trimester of pregnancy are the maturation of the pressure-passive circulation, involution of the germinal matrix, selective stage-dependent differences in white matter vascularization, and the sensitivity of oligodendrocytes to hypoxic injury. Cerebral injuries, such as white matter lesions and intraventricular hemorrhage, are considered the cause of neurological disorders in preterm infants. 5,6

A large number of prenatal risk factors with different capacities for predicting neurodevelopmental disorders have been identified. Multiple risk factors may increase the risk of disability, acting through additive or multiplicative effects. Examples of prenatal risk factors are antepartum hemorrhage, congenital infections, preeclampsia, mode of delivery, short gestational age, microcephaly, and congenital anomalies. In addition, socio-economic status, parental education, maternal age, and parity also influence the outcome of the abnormality. 4,7,8

Cranial ultrasound has become an important diagnostic tool since the 1970s. Its non-invasive nature makes ultrasound an ideal imaging modality in neonates. The fontanels and other structures of neonates and young infants are still open, allowing them to serve as acoustic windows for evaluating the brain. The cranial ultrasound examination has several advantages: minimal manipulation of the infant; safety; repeatability, so that the evolution of brain maturation and lesions can be followed; and reliability in detecting brain pathology (such as hemorrhage, cystic lesions, ischemia, calcifications, cerebral infections, and major structural anomalies of the brain). In addition, cranial ultrasound is relatively cheap compared with other neuroimaging modalities and is also a good modality for serial brain imaging in neonates. 8,9

Preterm infants who are clinically healthy may nevertheless have brain injury that is not suspected on clinical grounds; that is, abnormalities visible on ultrasound sometimes produce no significant clinical symptoms or are asymptomatic. Several reports have mentioned that, despite full-term or near-term gestation and being declared healthy, infants could show an abnormality on cranial ultrasound imaging. 10,11,13,14
Research by Hagmann et al in 2010 on healthy infants born at full term in Uganda showed abnormalities on cranial ultrasound examination in 55 of 112 infants, in the form of increased white matter echogenicity, subependymal cysts, and choroid plexus cysts. 13 Another study in 2004 found that 6 of 2309 (0.26%) healthy full-term infants had major brain lesions, such as intraventricular hemorrhage, intraparenchymal bleeding, and corpus callosum agenesis. 15

In order to prevent missing prognostically important abnormalities, a screening protocol in the form of cranial ultrasound examination should be conducted at least four or five times. The first examination should be conducted as early as possible in order to detect long-standing intrauterine damage or congenital malformations. It is important to establish whether the results appear normal at this time, whether the anatomy looks normal for the infant's gestational age, whether there is any picture of a congenital infection or metabolic disease, and to generate baseline data for management of problems that could arise later, such as hemorrhage or white matter echogenicity. 8,9

This study aimed to find the relationship between cranial ultrasound findings and gestational age and mode of delivery, and to determine whether cranial ultrasound can be used to detect brain injury in healthy preterm infants in Saiful Anwar Hospital Malang, Indonesia.

Methods
This study was observational-analytic with a cross-sectional design, conducted in the perinatology and ultrasound rooms of Saiful Anwar Hospital Malang. The protocol was approved by the Medical Research Ethics Committee of the Medical Faculty of Universitas Brawijaya Malang, No: 352/KEPK/VII/2012. A total of 38 healthy preterm infants were included in this study. The inclusion criteria were healthy preterm infants aged ≤ 4 days. Infants with congenital abnormalities were excluded from the study. Medical records and anamnesis were used to collect data on gestational age and mode of delivery. The cranial ultrasound was performed by three radiologists, each with at least 10 years of experience in the field. For a diagnosis to be considered positive, agreement of at least two examiners was required. The examination was performed using an ultrasound machine (General Electric Logic S6 series, USA) with a curved transducer (5 MHz) and a linear transducer (10 MHz), through the anterior fontanel window, in six coronal and five sagittal planes. Relationships of gestational age and mode of delivery with the cranial ultrasound findings were tested using the Fisher exact test with a 95% confidence level (α = 0.05); p < 0.05 was considered significant.
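The Fisher exact comparison described in the Methods can be illustrated directly with the counts implied by the Results (for example, increased periventricular echogenicity in about 1 of 34 infants born at ≥ 32 weeks versus 1 of 4 infants born at < 32 weeks); the snippet below assumes those inferred counts and uses SciPy in place of the original software.

```python
# Fisher exact test on the 2x2 table implied by the reported proportions:
# increased periventricular echogenicity in ~1/34 infants >= 32 weeks and in
# ~1/4 infants < 32 weeks (counts inferred from the percentages in the text).
from scipy import stats

#                 finding present, finding absent
table = [[1, 33],   # gestational age >= 32 weeks
         [1, 3]]    # gestational age < 32 weeks

odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")
# A P value well above 0.05 reproduces the paper's conclusion that the
# difference in proportions is not statistically significant.
```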
Results
Out of the 38 preterm infants, 52.6% were female and 47.4% were male. A total of 52.6% of the infants were born through caesarean section. Most of the infants were ≥ 32 weeks (89.5%) and had birth weights between 1500 and 2500 grams. Cranial ultrasound examination showed three kinds of abnormal imaging: increased periventricular echogenicity, increased parenchymal echogenicity, and indistinguishable gray-white matter differentiation (Table 1). Two infants revealed increased periventricular echogenicity (5.3%): one infant with an increase in the right superior lateral periventricular region and the other in the caudothalamic notch (Figure 1). Two infants were found to have increased parenchymal echogenicity (unilateral and focal thalamic echogenicity) (5.3%) (Figure 2). Indistinguishable differentiation of gray-white matter was discovered in two infants (5.3%); distinguishing between the cortical and subcortical structures was difficult in both infants (Figure 3). Among the 34 infants with a gestational age ≥ 32 weeks, 3% had increased periventricular echogenicity, while among infants with a gestational age < 32 weeks, 25% had it. Increased parenchymal echogenicity was found in 6% of infants with a gestational age ≥ 32 weeks. Indistinguishable gray-white matter was found in 3% of the gestational age group ≥ 32 weeks and in 25% of the gestational age group < 32 weeks. There were differences in the proportion of each cranial ultrasound variable based on gestational age, but the differences were not statistically significant (p > 0.05). Among the 18 infants with spontaneous birth, 11% had increased periventricular echogenicity; on the other hand, no such image was found in the 20 infants born by caesarean section. No increase in parenchymal echogenicity was present in the group of infants with spontaneous birth. Indistinguishable gray-white matter differentiation was found in 10% of infants with spontaneous birth but was not discovered in the group of infants born by caesarean section. There were differences in the proportion of each cranial ultrasound variable based on the mode of delivery, but the differences were not statistically significant (p > 0.05). Statistical analysis showed no significant correlation between the results of cranial ultrasound and the combination of gestational age and mode of delivery (p > 0.05).

Discussion
The samples obtained in this study were 38 preterm infants, 18 of whom were male and 20 were female. There were 34 infants with a gestational age ≥ 32 weeks and 4 infants with a gestational age < 32 weeks. Macones stated that birth at less than 32 gestational weeks carries the highest mortality and neonatal disorders. After 32 gestational weeks, there is a noticeable increase in the vascular system and the number of anastomoses in the deep white matter, which changes the pattern of injury. 3,16
A total of 18 infants were delivered spontaneously, and 20 infants were born by caesarean section. A previous study showed that birth by emergency caesarean section, with or without a prior attempt at vaginal birth, was associated with a higher rate of intracranial hemorrhage than spontaneous birth, whereas the incidence of brain hemorrhage after elective caesarean section did not differ significantly from that after spontaneous birth. This study found two infants with increased periventricular echogenicity: one infant of ≥ 32 gestational weeks with a lesion in the superior lateral periventricular region, and another infant with a gestational age < 32 weeks with a lesion in the caudothalamic notch and ventricular asymmetry. Both underwent spontaneous delivery. A preterm infant with a hyperechoic periventricular lesion is likely to have a white matter injury, but edema can also produce such an image and resolve without any further brain damage [3,17]. Increased periventricular echogenicity may reflect a normal condition if it lasts less than a week. Infants with increased echogenicity in the caudothalamic notch require further observation, because lesions in this area are probably germinal matrix hemorrhages, which usually occur in preterm infants [18]. Two infants had unilaterally increased echogenicity in the thalamus. Unilateral thalamic density in preterm infants was reported by de Vries (1992) and van Wezel-Meijler (1999) and is related to severe asphyxia and longer ventilation, although the prognosis is better than in infants with bilateral thalamic lesions [13]. However, the studies on healthy full-term infants in Uganda found that unilateral thalamic lesions also occur in asymptomatic infants. This study found two infants with increased unilateral and focal thalamic echogenicity; both had a gestational age ≥ 32 weeks and were born by caesarean section. Both findings may not be related to global hypoxic-ischemic injury. Infarction in the territory of a perforator artery can explain these ultrasound findings, because perforator artery infarction is not always associated with acute symptoms. Unilateral abnormalities can also represent thalamic hemorrhage, which is associated with venous thrombosis and is usually accompanied by intraventricular hemorrhage and clinical seizures. This study did not find any intraventricular hemorrhage on the ultrasound examinations [13,14]. Indistinguishable gray-white matter differentiation was found in one infant in each gestational age group (≥ 32 weeks and < 32 weeks); both infants were delivered spontaneously. The visualized cortical structures became more hyperechoic, making them difficult to distinguish from the subcortical structures. Edema can give such an image and will improve without any further brain damage. In this study, no infant had increased intracranial pressure [7,19].
Statistically, there was no significant correlation between abnormal ultrasound findings (increased periventricular echogenicity, increased parenchymal echogenicity, and indistinct gray-white matter differentiation) and gestational age or mode of delivery. Similar studies on full-term infants with larger samples also showed no correlation between ultrasound findings and gestational age or mode of delivery. The research conducted by Behnke et al. in 1999 on 266 infants examined by cranial ultrasound within 96 hours of birth found no significant correlation between neurobehavioral status and abnormal head ultrasound findings. It was therefore concluded that ultrasound abnormalities found within 96 hours of life have no clinical significance in the neonatal period. The data in this study were obtained within 4 days of birth, and it is therefore still possible that the cranial ultrasound findings are mild findings that do not affect the clinical condition of the infants [5,14].

Conclusion

There are abnormal cranial ultrasound findings in some healthy preterm infants, although there is no significant correlation between ultrasound findings and gestational age or mode of delivery. Cranial ultrasound in preterm infants can serve as a screening tool for early detection of brain injury.

Table 1. Cranial ultrasound results
Table 2. The results of cranial ultrasound based on the gestational age
Table 3. The results of cranial ultrasound based on the mode of delivery
Table 4. The results of cranial ultrasound based on the combination of mode of delivery and gestational age
Lattice Dynamics and Structural Phase Transitions in Eu2O3 Using the density functional theory, we study the structural and lattice dynamical properties of europium sesquioxide (Eu2O3) in the cubic, trigonal, and monoclinic phases. The obtained lattice parameters and energies of the Raman modes show a good agreement with the available experimental data. The Eu-partial phonon density of states calculated for the cubic structure is compared with the nuclear inelastic scattering data obtained from a 20 nm thick Eu2O3 film deposited on a YSZ substrate. A small shift of the experimental spectrum to higher energies results from a compressive strain induced by the substrate. On the basis of lattice and phonon properties, we analyze the mechanisms of structural transitions between different phases of Eu2O3. ■ INTRODUCTION Due to the high reactivity with oxygen, the most stable compounds of rare-earth (RE) elements are their oxides with a general formula R 2 O 3 , called sesquioxides, in which the RE ions (R) exist in the trivalent state. 1 The unique physical and chemical properties of the RE sesquioxides such as the dense Kondo effect, heavy fermionic behavior, 2 and high dielectric constants, 3 just to mention a few, make them attractive from both scientific and technological perspectives. Europium oxide exists in three stoichiometric forms. At lower oxygen pressure, europium oxidizes first to the NaCl-type EuO and then to the spinel-type Eu 3 O 4 before the stable sesquioxide is formed. The trivalent Eu 3+ ions with the 4f 6 electron configuration ( 7 F 0 ) have a total angular momentum J = 0 and L = S = 3. At ambient conditions, Eu 2 O 3 crystallizes in a cubic (C) structure and experiences structural transformations to monoclinic (B), trigonal (A), hexagonal (H), and cubic (X) phases with increasing temperature. 1,4 Under pressure, the structural transition from the cubic C-type to the trigonal Atype phase, which starts at 5.0 GPa and finishes at about 13.1 GPa, is observed. This transition leads to a volume collapse of 9% at 8.6 GPa. 5 The trigonal phase remains stable up to the highest experimentally feasible pressure. After release of the pressure, the trigonal phase transforms to the monoclinic phase. 5 Also a direct phase transition from the C-type structure to the B-type monoclinic structure was observed at about 8.0 GPa, and the B-type structure was retained after the pressure was released, indicating that the monoclinic phase is metastable at room temperature. 6 A pressure-induced phase transition from the monoclinic to trigonal crystal structure was observed at about 4.7 GPa. 7 Finally, at ambient conditions, Eu 2 O 3 may exist in a stable cubic form and in a metastable B-type structure similar to the Sm 2 O 3 and Gd 2 O 3 compounds. Both materials transform to the trigonal structure under a high pressure of about 5 GPa. The effect of pressure on R 2 O 3 has been extensively studied by many research groups, and a review of the pressure-induced phase transitions of most sesquioxides is presented in ref 8. The lattice dynamics of Eu 2 O 3 was investigated by Raman scattering in the single crystal and powder samples at ambient conditions, 9,10 and in nanocrystalline samples, both at ambient conditions and under hydrostatic pressure. 11 As, in contrast to the bulk crystal, the Eu 2 O 3 nanoparticles do not transform to the trigonal phase under high pressure, there are no Raman spectra measured for the A-type structure so far. 
When compared with other RE sesquioxides, the cubic phase of the europium compound systematically shows anomalously low Raman shifts (softening) for the middle-frequency oxygen vibrations. 10 It was suggested that this anomaly results from the presence of oxygen vacancies (nonstoichiometry) in the crystal structure. To gain deeper insight, first-principles studies of the phonon properties of Eu 2 O 3 are indispensable. In the previous density functional theory (DFT) studies, the electronic properties of the cubic C-type and trigonal A-type phases were studied for different magnetic orders. 12 The C → A structural transition was found near p = 5 GPa, which is in good agreement with the experimental data. A detailed investigation performed within the approach that combines the GW and local density approximation (LDA) + U methods found a strong effect of local Coulomb interaction U on the electronic bands. 13 The stability of the cubic (C), trigonal (A), and monoclinic (B) phases of Eu 2 O 3 was studied within the LDA + U method, and the sequence of the pressure-induced phase transitions C → A → B was predicted. 14 In this paper, we study the structural and dynamical properties of the cubic (C), trigonal (A), and monoclinic (B) phases of Eu 2 O 3 using the DFT approach. We examine the stability of these phases under pressure. The Eu-partial phonon density of states (DOS) obtained for the cubic structure is compared with nuclear inelastic scattering (NIS) measurements. Furthermore, we compare the calculated energies of Raman modes with the available experimental data for the cubic and monoclinic structures. Finally, we analyze the crystal structure changes and propose possible mechanisms of phase transitions between cubic, trigonal, and monoclinic structures. ■ CALCULATION METHOD The spin-polarized DFT calculations were performed using the projector augmented-wave potentials, 15,16 the generalized gradient approximation in the Perdew−Burke−Ernzerhof (PBE) parametrization, 17 and the following valence electron configurations 5s 2 5p 6 4f 7 6s 2 and 2s 2 2p 4 for Eu and O, respectively, as implemented in the VASP code. 18,19 In this approach, all core electrons are treated fully relativistically. For valence electrons, we did not include spin−orbit coupling (computationally very demanding) since we checked that its influence on phonon dispersion relations of Eu 2 O 3 is negligible; so they were treated scalar-relativistically. The strong local electron interactions were included within the DFT + U scheme, 20 assuming the intraorbital Coulomb parameter U = 8.3 eV and the Hund's exchange J = 0.77 eV on the 4f orbitals as in the earlier studies of EuO. 21,22 This choice of parameters can be justified by the comparison with the experimental data. The electronic band gap of Eu 2 O 3 obtained with a similar parameter U agrees very well with the measured value. 13 The Hund's exchange J weakly depends on the valency state; therefore, the selected value is appropriate also for Eu 2 O 3 . The energy cutoff for plane wave expansion was set to 520 eV. The lattice parameters and atomic positions of the C-, B-, and A-type phases were optimized for supercells, which are 2 × 2 × 2, 1 × 3 × 1, and 3 × 3 × 2 multiplication of conventional cells of the cubic, monoclinic, and trigonal structures, respectively. All calculations were performed for T = 0 K and a ferromagnetic order of the magnetic moments of Eu atoms. 
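As an illustrative aside, the settings quoted above could be assembled into a calculation script roughly as follows. This is only a sketch under stated assumptions: the structure file name is a placeholder, the DFT+U flavor (Liechtenstein-type with separate U and J) and the initial Eu moment are guesses not specified in the text, the k-point mesh anticipates the value given in the next paragraph, and an actual VASP installation with PAW data sets is required for it to run.

```python
# Sketch of a spin-polarized PBE+U setup mirroring the parameters quoted in the text.
# Assumptions: input structure file name, +U flavor, and initial Eu moment are illustrative.
from ase.io import read
from ase.calculators.vasp import Vasp

atoms = read("Eu2O3_cubic_supercell.vasp")   # hypothetical pre-built supercell file

# Ferromagnetic starting configuration; 6 mu_B per Eu is a rough guess for Eu3+ (4f6)
atoms.set_initial_magnetic_moments(
    [6.0 if s == "Eu" else 0.0 for s in atoms.get_chemical_symbols()])

calc = Vasp(
    xc="pbe",               # GGA-PBE exchange-correlation
    encut=520,              # plane-wave cutoff (eV)
    ispin=2,                # spin-polarized
    kpts=(4, 4, 4),         # Monkhorst-Pack mesh for the cubic/trigonal cells
    ldau=True,
    ldautype=1,             # Liechtenstein scheme with separate U and J (an assumption)
    ldau_luj={"Eu": {"L": 3, "U": 8.3, "J": 0.77},
              "O":  {"L": -1, "U": 0.0, "J": 0.0}},
    ediff=1e-7,             # electronic convergence (eV)
    ediffg=1e-5,            # ionic (energy-based) convergence (eV)
    ibrion=2, isif=3,       # conjugate-gradient relaxation of ions and cell
)
atoms.calc = calc
energy = atoms.get_potential_energy()        # triggers the relaxation/SCF run
print(energy)
```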
A k-mesh of 4 × 4 × 4 points in the Monkhorst-Pack scheme 23 was used for integration over the reciprocal space of the cubic as well as trigonal structures, and 2 × 2 × 2 points for the monoclinic phase. All structures were optimized with respect to the external pressure and atomic forces using the conjugate gradient technique, with the energy convergence criteria set at 10⁻⁷ and 10⁻⁵ eV for electronic and ionic iterations, respectively. The maximum residual stresses were below 0.01 GPa for all simulated systems. For the relaxed structures, the phonon dispersion relations as well as the total and element-projected phonon DOS were calculated using the direct method 24 implemented in the PHONON software. 25 In this approach, the Hellmann-Feynman forces generated on all atoms of the supercell by single-atom displacements from the equilibrium positions are used to determine the force constants and build the dynamical matrix. The phonon energies and polarization vectors were calculated by diagonalization of the dynamical matrix. To describe the longitudinal optic/transverse optic (LO/TO) splitting induced by macroscopic polarization, the static dielectric tensor and the Born effective charges were determined using density functional perturbation theory. 26

■ THEORETICAL STUDIES

Crystallographic Structure. In the present study, we consider the cubic, trigonal, and monoclinic phases (traditionally marked with C, A, and B letters, respectively) of Eu2O3 observed experimentally. 4 The cubic cI80 structure is the stable one for europium sesquioxide at ambient conditions; it is described by the Ia3̅ (No. 206) space group. The trigonal phase, described by the P3̅m1 (164) space group, has a primitive hexagonal unit cell containing one formula unit (five atoms). Our calculations were carried out in the 3 × 3 × 2 hexagonal supercell with 90 atoms. The relaxed lattice constants are a_h = 3.782 Å and c_h = 5.945 Å. The theoretical values obtained at pressure p = 6 GPa (3.749 and 5.788 Å), where the trigonal phase is stable, correspond well to the experimental values (at 5.72 GPa) of a_h = 3.719 Å and c_h = 5.770 Å. 27 It is worth noting that the crystal density is significantly higher (about 10%) in the trigonal phase compared to the cubic structure, which fully agrees with the value of the volume collapse at high pressure reported in ref 5. Finally, the monoclinic mS30 structure of the C2/m (12) space group, with six formula units in the conventional base-centered cell, was studied with the 1 × 3 × 1 supercell of 90 atoms. This phase is observed at high temperatures (above 1000 K) and can also be found as a metastable state at room temperature. 5 By relaxation of the monoclinic structure, we found the lattice parameters a_m = 14.29 Å, b_m = 3.63 Å, and c_m = 8.89 Å and the monoclinic angle β = 100.14°, similar to those obtained experimentally: a_m = 14.12 Å, b_m = 3.60 Å, c_m = 8.82 Å, and β = 100.02°. 9 Comparing the crystal volumes optimized at ambient conditions, one should notice about a 2% difference between the monoclinic and trigonal structure volumes, which corresponds well to the −1.6% volume collapse experimentally observed in the B → A phase transition 7 and supports the first-order character with a displacive mechanism of this transition suggested by Atou et al. for the Sm2O3 compound. 28

Phonons. For all optimized structures, the phonon dispersion relations and phonon DOS were calculated, and they are presented in Figure 1.
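To make the dynamical-matrix step described above more concrete, the toy sketch below builds and diagonalizes the mass-weighted dynamical matrix for a one-dimensional diatomic chain. It is only a schematic illustration of the same algebra, with arbitrary masses and force constant, and is not the PHONON-software implementation used for Eu2O3.

```python
# Toy illustration: phonon frequencies from diagonalizing a mass-weighted dynamical
# matrix, here for a 1D chain with two atoms per cell (not the Eu2O3 calculation).
import numpy as np

m1, m2 = 152.0, 16.0        # masses (amu), loosely Eu-like and O-like
k = 10.0                    # nearest-neighbour force constant (arbitrary units)

def frequencies(q):
    """Return the acoustic and optic branch frequencies at wave vector q (a = 1)."""
    d11 = 2.0 * k / m1
    d22 = 2.0 * k / m2
    d12 = -k * (1.0 + np.exp(-1j * q)) / np.sqrt(m1 * m2)
    D = np.array([[d11, d12],
                  [np.conj(d12), d22]])
    omega2 = np.linalg.eigvalsh(D)      # eigenvalues are squared frequencies
    return np.sqrt(np.abs(omega2))

for q in (0.0, np.pi / 2, np.pi):
    print(f"q = {q:.3f}: frequencies = {frequencies(q)}")
```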
According to the mass sequence, europium atoms occupy states of mostly lower energies, while phonons with oxygen contribution dominate at higher energies. The cubic phase, which is stable at ambient pressure, has all phonon branches with real frequencies (top panel of Figure 1), indicating that this structure is dynamically stable. For the trigonal phase, which is not stable at ambient conditions, we present two different sets of phonon dispersions calculated at p = 0 and 6 GPa, which are depicted with solid and dashed lines, respectively, in the middle panel of Figure 1. The pressure dependence is clear: due to the shortening of interatomic distances, bonds stiffen and phonon frequencies increase. Compared to the cubic phase, the number of phonon branches is reduced because of the smaller number of degrees of freedom (one formula unit instead of eight in the cubic structure), and the dispersions of these branches are enhanced. Also, the LO/TO splitting, present at the Γ point, is more pronounced, and the highest energy accessible in the spectrum increases by a few millielectron volts in comparison to the cubic phase. These changes can be well understood by taking into account the significant increase of crystal density mentioned above (from 4.25 to 4.75 amu/Å³) under the same external conditions. The narrow energy gap in the phonon DOS, observed for the cubic phase, closes and the partial DOS overlap slightly. The phonon dispersion relations calculated for the monoclinic structure are presented in the lowest panel of Figure 1. In the primitive unit cell, there are 15 atoms (3 formula units), which leads to 45 nondegenerate phonon branches. Similar to the trigonal structure, the increase of the LO frequencies at the Γ point, connected with the macroscopic electric field generated by longitudinal displacements of atoms in different directions, causes discontinuities of the infrared-active modes. Along some directions of the reciprocal space, mainly around the M point, the lowest branch exhibits imaginary frequencies (plotted with negative values), reflecting the dynamical instability of this structure at low temperatures and ambient pressure. The calculations of phonon dispersion curves for the monoclinic structure at p = 10 GPa show similar soft-mode behavior, indicating that it cannot be stabilized by pressure alone, and thermal effects play an important role in obtaining the stable or metastable B-type phase. The Eu- and O-atom projected phonon DOS resemble those calculated for the trigonal structure. The vibrations of the heavier atoms dominate in the low-frequency region up to 23 meV. In contrast, the vibrational energy of oxygen atoms can be divided into several distinct regions that occur at frequencies and intensities comparable to those in the trigonal phase. This is a result of the close structural relationship between the C2/m and P3̅m1 phases of Eu2O3. 9,29 The B-type structure can be obtained by a slight lattice deformation of the A-type phase, leading to a splitting of the 1a (D3d) and 2d (C3v) atomic positions into the less symmetrical 2b (C2h) and 4i (Cs) sites. 9

Zone-Center Phonon Modes. Group theory predicts 120 zone-center vibrational modes in the cubic C-type structure, Γ = 4Ag + 4Eg + 14Tg + 5Au + 5Eu + 17Tu, where Ag, Eg, and Tg are Raman-active modes (R), Tu is infrared-active (I), and Au and Eu are silent modes. T modes are also denoted as F modes. The E and T phonon modes are doubly and triply degenerate, respectively.
Regarding the monoclinic B-type structure, the 45 zone-center vibrational modes are described by one-dimensional irreducible representations, Γ = 14Ag + 7Bg + 8Au + 16Bu. The even (g) and odd (u) modes are Raman-active and infrared-active phonons, respectively. Finally, the A-type structure has the following 15 zone-center vibrational modes: Γ = 2A1g + 2Eg + 3A2u + 3Eu, where the A1g and Eg modes are Raman-active and the Eu and A2u modes are infrared-active. The zone-center mode frequencies (wave-numbers in cm⁻¹) calculated using DFT, their irreducible representations (IR), and activities (R: Raman and I: infrared) are presented in Table 1. The calculated frequencies are compared with the available experimental data of the Raman modes obtained for the cubic and monoclinic structures. 9−11 In the cubic structure, there are 22 Raman modes, but the number of peaks actually observed experimentally is much smaller because of insufficient intensity and/or too small a spectral resolution. For example, two Raman peaks with calculated frequencies of 285.7 and 292.7 cm⁻¹ are reported as being of Tg + Eg symmetry because these two peaks of different symmetry are measured at a coinciding frequency of about 289 cm⁻¹ in the polycrystalline powder. 10 In the nanocrystalline sample with an average particle size of 60−70 nm, the same peak is observed at a slightly lower frequency, 266.4 cm⁻¹. 11 In general, all measured values are adequately reproduced in our DFT calculations, as shown in Table 1. The comparative study of Raman spectra of R2O3 sesquioxides with the C-type crystal structure presented in ref 10 found that some Eu2O3 modes show an anomalous decrease of the frequency in comparison with the other R2O3 compounds. The authors formulated the hypothesis that this anomalous "softening" is related to oxygen vacancies in Eu2O3. However, the measured frequencies agree well over the whole energy range with our values calculated for an ideal crystal. In view of the above, it is clear that the assumption of oxygen deficiencies being a source of the anomalous softening is questionable. Raman spectra measured for Eu2O3, Ga2O3, and Sm2O3 single crystals with the B-type monoclinic structure do not exhibit any anomalous behaviour. 9 All frequencies of the Raman-active modes are in good agreement with the calculated data. To the best of our knowledge, there are no Raman spectroscopy data for the trigonal structure of Eu2O3, but the Raman modes measured for other A-type sesquioxides are arranged just as in our calculations: two low-frequency bending vibrations between 100 and 200 cm⁻¹ and two stretching vibrations occurring in a higher frequency region between 400 and 450 cm⁻¹. 9,30,31 Under pressure, the Raman modes of the trigonal structure shift to higher frequencies. In Table 1, the frequencies of the zone-center modes calculated for the trigonal structure at p = 0 and 6 GPa are presented.

■ EXPERIMENTAL RESULTS

In order to verify the ab initio calculations performed for the cubic phase, the theoretical results were compared with the Eu-partial phonon DOS of a polycrystalline Eu2O3 film obtained from NIS. Exposed to air, metallic Eu rapidly oxidizes, forming a mixture of cubic and monoclinic phases of Eu2O3 with a high concentration of oxygen vacancies. Moreover, the hygroscopic nature of Eu2O3 favors the formation of Eu hydrates and hydroxides, which further contaminate the sesquioxide.
Using a commercially available Eu2O3 powder, the Eu-partial phonon DOS was previously determined by NIS on the Mössbauer-active isotope 151Eu. 32,33 Sufficient details of the sample characterization, however, were either not presented or confirmed the problems described above. Therefore, to investigate the pure cubic sesquioxide phase, a 20 nm thick Eu2O3 film was deposited on a YSZ(001) substrate in the ultrahigh vacuum system 34 located at the Nuclear Resonance Beamline ID18 35 of the ESRF - The European Synchrotron in Grenoble, France. Prior to Eu deposition, the substrate was annealed at 925 K for 60 min at a pressure below 3.0 × 10⁻⁹ mbar. A metallic Eu foil enriched to 97% in the Mössbauer-active isotope 151Eu, supplied by the Oak Ridge National Laboratory (USA), was sublimated from an effusion cell with a molybdenum crucible, producing a steady flux of Eu atoms at a rate of 6.0 Å/min. During deposition of europium, the substrate was kept at 823 K, and high-purity (99.9995%) molecular oxygen was supplied into the growth chamber at a pressure of 1.0 × 10⁻⁶ mbar, precisely controlled via a leak valve. To ensure complete oxidation of the metallic Eu, the film was annealed for 60 min under the deposition conditions. In order to protect the Eu2O3 film from further oxidation, it was covered at room temperature by an 8.0 nm thick Nb layer. The sample was characterized by X-ray diffraction (XRD, Cu Kα line, using a Rigaku SmartLab instrument) and by X-ray absorption spectroscopy at the Eu L3 absorption edge (6977 eV), performed at the SUL-X beamline of the Synchrotron Radiation Source at Karlsruhe Institute of Technology in Germany. The ATHENA and ARTEMIS program packages from the IFEFFIT software 36 were used for data reduction and modeling. The XRD scan shown in Figure 2a confirms the formation of the cubic Eu2O3 phase with a lattice constant a = 10.80 Å, which is ca. 0.5% smaller than the bulk value a = 10.86 Å. The scan on an empty support plate (without a sample) revealed the origin of the additional peaks present in the XRD data. The oxidation state of Eu was determined by comparison of the experimental X-ray absorption near edge structure (XANES) data of the film with a reference Eu2O3 powder sample and EuO films 21,22 measured in the same experiment. The obtained XANES data are plotted in Figure 2b and show a distinct difference between the position of the Eu L3 absorption edge in EuO, where the Eu atoms, similar to metallic Eu, exhibit an oxidation state of Eu2+, and in Eu2O3, where the Eu atoms are in the Eu3+ state. By using the least-squares method, the XANES data of the film were fitted by a linear combination of the two reference samples, with the ratio between them being a fit parameter. The result is plotted in Figure 2b by a solid/red line and indicates the presence of 5.0% EuO in the investigated film. However, the contribution of the monoxide is hardly visible, appearing only as a small shoulder of the strong sesquioxide absorption peak. Most likely, this phase is formed at the film/substrate interface, where the control of the oxygen concentration is challenging. The 151Eu-partial phonon DOS was obtained 37 from the energy dependence of the probability for nuclear inelastic absorption 38,39 of X-rays with an energy of 21.5414 keV and an energy resolution of 1.1 meV (full width at half maximum), 40 measured at room temperature at the Nuclear Resonance Beamline ID18.
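The least-squares linear-combination fit of the XANES data mentioned above can be sketched as follows. The spectra and edge positions in this snippet are synthetic placeholders used only to show the fitting step; they are not the measured reference data, and this is not how the ATHENA/ARTEMIS tools are actually driven.

```python
# Sketch of a linear-combination XANES fit: film = w * Eu2O3_ref + (1 - w) * EuO_ref.
# The reference spectra below are synthetic stand-ins, not measured data.
import numpy as np
from scipy.optimize import least_squares

energy = np.linspace(6960.0, 7000.0, 400)        # eV, around the Eu L3 edge

def edge(e0):
    """Idealized absorption edge centered at e0 (illustrative shape only)."""
    return 0.5 * (1.0 + np.tanh((energy - e0) / 1.5))

eu2o3_ref = edge(6980.0)                         # Eu3+ reference (edge position illustrative)
euo_ref = edge(6972.0)                           # Eu2+ reference (edge position illustrative)
rng = np.random.default_rng(0)
film = 0.95 * eu2o3_ref + 0.05 * euo_ref + rng.normal(0.0, 0.005, energy.size)

def residual(params):
    (w,) = params
    return w * eu2o3_ref + (1.0 - w) * euo_ref - film

fit = least_squares(residual, x0=[0.5], bounds=(0.0, 1.0))
print(f"Eu2O3 fraction ~ {fit.x[0]:.3f}, EuO fraction ~ {1.0 - fit.x[0]:.3f}")
```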
The film was illuminated at a grazing angle of about 0.15° by a focused X-ray beam with dimensions (vertical × horizontal) of about 10 μm × 100 μm. Figure 3 compares the experimentally obtained DOS with the theoretical results. Compared with the DOS obtained for the optimized lattice constant (solid/green line in Figure 3), the experimental spectrum is slightly shifted to higher energies due to the compressive strain induced by the substrate. In order to verify this assumption, we additionally calculated the phonon DOS for the cubic structure with the lattice constant compressed by 0.5%, as indicated by the XRD data, and combined it with the Eu-partial DOS of EuO. The relative weight of both contributions is fixed to the results obtained by the XANES study, i.e., 95% Eu2O3 and 5% EuO. To account for the broadening of the phonon spectrum features, which originates from the finite energy resolution and phonon scattering at crystal imperfections, the ab initio-calculated DOS was additionally convoluted with a Gaussian profile of fwhm = 6 meV. The resulting spectrum shows a very good agreement of the peak positions and the cutoff energy in relation to the experimental data. The higher number of phonon states below 10 meV in the experimentally obtained DOS can be attributed to interface-specific vibrational modes often present in thin films, 22,41 which are not considered in the ab initio calculations performed for a bulk crystal.

■ STRUCTURAL PHASE TRANSITIONS

The total energies obtained after optimization at p = 0 read −47.2563, −47.0005, and −47.0456 eV per formula unit for the cubic, trigonal, and monoclinic phases, respectively, which confirms that the most stable phase is the cubic one. The quite small difference between the energies of the trigonal and monoclinic phases can be assigned to the displacive phase transition between them. It agrees with the previous theoretical and experimental values of the energy differences between these two phases. 14 The pressure-induced phase transition between the cubic and trigonal phases of Eu2O3 is of the first order. 5 Indeed, our calculations for pressure p = 6 GPa show that the respective enthalpies are equal to −44.249 (cubic) and −44.282 eV (trigonal) per formula unit. The previous ab initio study also supports this result, showing the crossing of the enthalpies around 5 GPa. 12 In a first-order phase transition, the crystal structure changes discontinuously and the atomic displacement pattern can be very complicated. In the case of Eu2O3, where Z changes by a factor of 8 and a significant (≈10%) collapse of the volume is observed, it is quite hard to establish a correspondence between the cubic and trigonal structures, since the majority of the 40 atoms in the primitive Ia3̅ unit cell are significantly displaced. To describe this rearrangement in a systematic way, we have split the problem into three stages: relative orientation of the cell vectors, deformation of the cubic supercell, and, finally, tuning of the atomic positions. We started from two general observations: (i) in the trigonal structure, the threefold axis along the main diagonal of the cubic structure should be preserved, and (ii) the number of atoms in the system should not change. These assumptions limit the possible transformations to a rotation aligning the threefold axis in both structures, scaling along this axis while keeping the atomic density constant, and rigid translations. The values of the parameters of these transformations (angles, scaling factors) could be determined by minimizing the sum of squares of distances between corresponding atoms in both structures.
In general, this function has a very complicated shape and multiple local minima; thus, it is difficult to minimize it by classical gradient-type methods. Therefore, we have used a multistage procedure employing, as the first step, a genetic algorithm searching the whole parameter space for possible valleys, then the basin-hopping algorithm to select the deepest one, and simulated annealing to find the best local minimum, followed by standard least-squares minimization of the distance function. All of the above steps were monitored and visually inspected with a custom-built visualization program using the ASE and NGLview python libraries 42,43 and the JupyterLab environment. 44 The final relationship between the structures is depicted in Figure 4 in several views along the c-axis of the trigonal structure, showing atoms of the cubic structure as translucent spheres and atoms in the positions from the trigonal structure as more solid spheres. The relationship between the structures is illustrated by arrows connecting the corresponding atoms. Subfigure (a) shows just the atoms, panel (b) shows, slightly tilted for better clarity, all relationships between atoms, and panels (c) and (d) show the relationships between Eu and O atoms separately. The structural relationships depicted in Figure 4 are rather difficult to comprehend in a flat static image. Therefore, we have prepared a live 3D version of this figure, accessible in the Supporting Information as Figure S1, which can be interactively rotated, zoomed, and so forth with any modern web browser. Generally, the atomic displacements in the C → A phase transition are substantial. The mean square displacement (MSD) of the Eu and O atoms, ignoring the contribution of the volume collapse and overall cell deformation, reads 1.386 and 1.803 Å², respectively. Each set of four Eu layers (perpendicular to the threefold axis) with very pronounced atom movement is separated alternately by another four Eu layers with smaller shifts. Looking along the threefold axis, a careful observer can see in the trigonal structure the removal of Eu atoms (occupying 8b Wyckoff positions in the cubic phase) from the centers of the Eu hexagons in favor of O atoms and the creation of oxygen chains instead of the Eu−O−Eu−O zig-zags characteristic of the cubic structure. In the C phase, all Eu−O distances are the same; however, after the phase transition (in the A phase), the oxygen atoms can be divided into two groups: (i) O1, located in 2d (C3v) Wyckoff positions, closer to the europium atoms, and (ii) O2, occupying 1a (D3d) sites, positioned between the Eu−O1 dimers. The effect is not very strong; however, the difference in the distances of O2 and O1 to the nearest Eu atom exceeds 10%. During the phase transition, the Eu−O1 dimers rotate, ordering themselves along the hexagonal axis into Eu−O−Eu−O chains, while the "lonely" oxygen atoms, O2, overcome the barrier between two neighboring Eu atoms and locate themselves in the oxygen chains in the centers of the Eu hexagons. In the A → B phase transition, the situation is completely different. The MSD of the heavy Eu atoms drops by more than 1 order of magnitude (0.068 Å²) in comparison to the C → A transition. Also, the MSD of the oxygen atoms is significantly smaller (0.281 Å²). The inner Eu−O1 distances shorten slightly due to O1 movement along the Eu−Eu line parallel to the hexagonal axis. In turn, the O2 atoms move perpendicularly, out of the straight oxygen chains (Figure 5).
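The multistage minimization of the atom-matching distance function described above could be mimicked, in outline, with standard optimizers. The snippet below chains a genetic-algorithm-like global search, basin hopping, simulated annealing, and a final local polish on a toy two-parameter surrogate objective; it is a schematic stand-in, not the custom procedure used in the study.

```python
# Schematic stand-in for the multistage minimization of the structure-matching
# distance function. The objective below is a toy surrogate, not the real one.
import numpy as np
from scipy.optimize import basinhopping, differential_evolution, dual_annealing, minimize

def distance_function(p):
    """Toy rugged objective standing in for the sum of squared atomic distances."""
    angle, scale = p
    return (np.sin(3.0 * angle) ** 2 + (scale - 1.05) ** 2
            + 0.1 * np.cos(20.0 * angle * scale))

bounds = [(0.0, np.pi), (0.8, 1.3)]   # rotation angle and scaling factor ranges

# 1) genetic-algorithm-like sweep of the whole parameter space
ga = differential_evolution(distance_function, bounds, seed=0)

# 2) basin hopping around the best candidate to pick the deepest valley
bh = basinhopping(distance_function, ga.x, niter=50, seed=0)

# 3) simulated annealing restricted near that valley
sa = dual_annealing(distance_function, bounds, x0=bh.x, seed=0)

# 4) final local refinement (least squares or a simplex search would both do)
final = minimize(distance_function, sa.x, method="Nelder-Mead")
print(final.x, final.fun)
```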
Having fitted the trigonal and monoclinic structures well to each other, we were able to define a transformation matrix TM and a relative shift of origin of [1/2, 0, 0]. We obtained a splitting of the Wyckoff positions that is confirmed by the group-theoretical predictions. 48 Finally, using the Bilbao Crystallographic Server package Get_irreps, 45−47 we drew a diagram (Figure 6) of the physically irreducible representations and order parameters for the A → B phase transition with the transformation matrix TM. The symmetry relation between these phases reveals a serial cascade of continuous changes of the crystal structure, which are coupled with the shear deformation causing a displacive phase transition. However, the transition has a first-order character with a finite change in volume. The mentioned shear deformation originates from the rise of the monoclinic angle and can be linked with the slight softening of the acoustic phonon branches around the Γ point visible in Figure 1. We found that this elastic instability is strongly coupled with the lowest optical mode at the Γ point. Therefore, it can be expected that, by lowering the external pressure, one can obtain from the trigonal structure a metastable monoclinic phase through the displacive first-order phase transition, rather than the cubic structure through the order−disorder transition, which usually has a significantly higher energy barrier. The situation changes at high temperatures, where thermal fluctuations start to play an important role.

■ CONCLUSIONS

We have performed a theoretical and experimental study of the structure and lattice dynamics of the Eu2O3 sesquioxide. Using the first-principles DFT approach, we calculated the structural parameters and phonon spectra of the cubic, trigonal, and monoclinic structures. The close structural relationship between the monoclinic and trigonal structures is reflected in the calculated partial Eu and O phonon DOS, which are very similar for both phases. In the cubic phase, the partial DOS of the europium and oxygen atoms are separated by a narrow gap, which distinguishes the lattice dynamics of this phase from the others. A good agreement between the calculated and measured frequencies of the Raman modes was found for the C- and B-type structures. The Raman frequencies of the A-type structure, which were unknown so far, are also presented. The calculated phonon DOS for the cubic phase was verified experimentally. The 151Eu-partial DOS was measured at room temperature for a 20 nm thick Eu2O3 film using NIS. The formation of cubic Eu2O3 was confirmed by XRD, while X-ray absorption spectroscopy unveiled the presence of ca. 5% EuO. The experimental phonon DOS showed a good agreement with the DOS calculated for a C-type structure assuming 0.5% lattice compression, most likely induced by the substrate. We also analyzed the phase transitions observed in Eu2O3: C → A and A → B. We developed a numerical procedure for searching for the atomic rearrangement during the phase transition. Analyzing the structural changes, in particular the atom rearrangement, in the cubic-to-trigonal (C → A) phase transition, we discovered the creation of monoatomic oxygen chains along the threefold axis. Substituting O2 atoms of the

The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.inorgchem.1c00708. Live 3D figure presenting the relationship between cubic and trigonal structures (HTML). Live 3D figure presenting the relationship between trigonal and monoclinic structures (HTML).
Effects of Introducing Xpert MTB/RIF on Diagnosis and Treatment of Drug-Resistant Tuberculosis Patients in Indonesia: A Pre-Post Intervention Study Background In March 2012, the Xpert MTB/RIF assay (Xpert) was introduced in three provincial public hospitals in Indonesia as a novel diagnostic to detect tuberculosis and rifampicin resistance among high risk individuals. Objective This study assessed the effects of using Xpert in place of conventional solid and liquid culture and drug-susceptibility testing on case detection rates, treatment initiation rates, and health system delays among drug-resistant tuberculosis (TB) patients. Methods Cohort data on registration, test results and treatment initiation were collected from routine presumptive patient registers one year before and one year after Xpert was introduced. Proportions of case detection and treatment initiation were compared using the Pearson Chi square test and median time delays using the Mood’s Median test. Results A total of 975 individuals at risk of drug-resistant TB were registered in the pre-intervention year and 1,442 in the post-intervention year. After Xpert introduction, TB positivity rate increased by 15%, while rifampicin resistance rate reduced by 23% among TB positive cases and by 9% among all tested. Second-line TB treatment initiation rate among rifampicin resistant patients increased by 19%. Time from client registration to diagnosis was reduced by 74 days to a median of a single day (IQR 0–4) and time from diagnosis to treatment start was reduced by 27 days to a median of 15 days (IQR 7–51). All findings were significant with p<0.001. Conclusion Compared to solid and liquid culture and drug-susceptibility testing, Xpert detected more TB and less rifampicin resistance, increased second-line treatment initiation rates and shortened time to diagnosis and treatment. This test holds promise to improve rapid case finding and management of drug-resistant TB patients in Indonesia. Introduction In December 2010, the World Health Organization (WHO) recommended the use of Xpert MTB/RIF (Xpert) as a new automated molecular test to rapidly and simultaneously detect TB and rifampicin resistant (RR-)TB, which can be a good proxy for multidrug-resistant (MDR-) TB [1][2][3]. Xpert is recommended to be used as the initial test for individuals at risk of MDR-TB, because it has similar accuracy to that of conventional culture and drug-susceptibility testing (DST) for rifampicin (RIF) and provides results within two hours instead of weeks (liquid media) or months (solid media) [4,5]. While large scale demonstration studies have shown that Xpert introduction is feasible in high-burden countries under project conditions [6,7], not many studies have evaluated the effect on patient-important outcomes under programmatic conditions outside the African region [8][9][10][11][12]. The national TB control programme of Indonesia adopted Xpert in March 2012 as a routine test for presumptive MDR-TB patients as part of their efforts to scale up services for programmatic management of drug-resistant TB (PMDT) [13]. The present study aimed to evaluate the effects of Xpert introduction upon TB and RIF resistance detection rates, treatment initiation rates and health system delays in three provincial public hospitals in Indonesia. A patient cohort tested with conventional diagnostics during one year pre-intervention (Year 1) was compared to a cohort tested with Xpert during one year post-intervention (Year 2). 
As a secondary objective, Xpert results were compared with culture and DST within the same individuals in Year 2, where the former was used as the initial test for TB and RR-TB and the latter as the diagnostic workup to confirm MDR-TB.

Study Population and Methods

Included were all individuals at risk of MDR-TB registered between 1 March 2011 and 31 March 2013 at three clinics offering PMDT services in West, Central and East Java. Table 1 shows definitions of the nine mutually exclusive risk groups for MDR-TB according to Indonesian PMDT guidelines. In March 2012, no data were collected because the sites transitioned from conventional to Xpert testing. The intervention involved four system changes: introduction of a new diagnostic algorithm; introduction of a revised test request form and of laboratory and clinical registers; installation of the Xpert machine, computer and uninterrupted power supply system; and training of laboratory and clinical staff. No changes were made with respect to human resources, funding for supplies and drugs, sample and result referral systems, or other processes. The conventional diagnostic approach was to collect one sputum sample from each individual and conduct smear microscopy and culture on solid or liquid media. If the culture was positive for TB, an isolate was re-cultured for first-line DST. After the intervention, one sputum sample was collected for Xpert testing and a second sample was used for diagnostic workup with culture and first-line DST. Two out of three sites sent samples for culture and DST to a reference laboratory using liquid media, while the third site was linked to a laboratory using solid media. Guidelines dictated that second-line MDR-TB treatment was to start immediately after an Xpert RR-TB result was obtained and, if necessary, be adjusted later according to DST results. The primary data sources were the electronic presumptive client registers. Individual patient data were transferred to a database in Microsoft Excel: client registration date, identification number, sex, birth date (or age), address, referring health facility, MDR-TB risk group, specimen identification number, smear microscopy/culture/DST/Xpert result and release date, and treatment registration number and start date. Laboratory registers were used to verify and complete test results and dates. Treatment registers and consultations with PMDT doctors helped to validate and complete treatment information. The various registers were linked by matching unique identification numbers or sex, age and address. Inconsistencies were discussed with laboratory and hospital staff and resolved on-site. After registration, individuals were followed for at least three months to record treatment start, culture and DST results. Records were excluded if individuals were entered twice (the first record was discarded, as it usually contained a failed test result) or if second-line treatment was started before testing. Data analysis was done using IBM SPSS Statistics 21. Differences in TB and RR-TB positivity rates and second-line treatment initiation rates between Year 1 and 2 were compared using the Pearson chi-square test with a 5% significance level. Median times between registration, test result release and treatment start before and after the intervention were compared using Mood's median test with a 95% confidence interval (CI) and 5% significance level. Censored cases were RR-TB cases not started on second-line treatment by the end of the follow-up period.
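For illustration, the two comparisons described above can be reproduced with standard statistical routines, as sketched below on made-up numbers (the study itself used IBM SPSS Statistics 21; the counts and delays here are placeholders, not the actual data).

```python
# Illustration on made-up numbers: chi-square test for treatment initiation
# proportions and Mood's median test for time delays (not the study data).
import numpy as np
from scipy.stats import chi2_contingency, median_test

# Rows: Year 1, Year 2; columns: started second-line treatment, did not start
table = np.array([[33, 51],
                  [55, 39]])
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi:.4f}")

# Hypothetical registration-to-diagnosis delays (days) before and after Xpert
delays_year1 = [60, 75, 80, 90, 74, 68, 81, 77]
delays_year2 = [0, 1, 1, 2, 4, 3, 1, 0]
stat, p_med, grand_median, _ = median_test(delays_year1, delays_year2)
print(f"Mood's median test: p = {p_med:.4f}, grand median = {grand_median} days")
```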
Results

After excluding double entries (n = 6 before, n = 2 after), a total of 975 individuals were registered in the year before (Year 1) and 1,442 in the year after (Year 2) the introduction of Xpert, a relative increase of 47.9% (Fig 1). In Year 2 only 998 (69.2%) individuals were tested with Xpert, because of a three-month stock-out of tests (cartridges) from mid-August to mid-October 2012. During that period 327 people (22.7%) were tested with culture and DST instead and were analyzed separately. Population characteristics were similar in Year 1 and 2 in terms of gender and age and differed slightly with regard to MDR-TB risk group (Table 2). However, in both years most individuals were categorized as relapse cases after completing Category 1 or 2 TB treatment. The only significant difference was the location of the referring health care facilities: 6% more individuals were sent from within the same facility where testing was done (e.g. TB ward or out-patient department), while 8% fewer people were referred from other facilities in the same district. In further analysis, 15 (1.5%) and 32 (2.2%) people were excluded from Year 1 and 2 respectively, because they received second-line treatment before being tested. In Year 1 and 2, 90.7% (871/960) and 91.7% (1,293/1,410) of individuals received a diagnostic test for TB, respectively. Test and treatment results for individuals tested with and without Xpert in Year 2 were compared to Year 1 as different cohorts (S1 Table). When individuals tested with culture in Year 1 and 2 were compared, the proportion that tested TB positive decreased by 14.7%, from 65.2% to 50.5% (p<0.001), but no other significant differences were observed. When individuals tested with culture in Year 1 were compared with those tested with Xpert in Year 2, differences were more pronounced. First, the TB positivity rate increased by 15.0%, from 65.2% to 80.2% (p<0.001). Secondly, the RIF resistance rate decreased by 18.2%, from 57.3% to 39.5% (p<0.001), among TB positive patients and by 6.8%, from 38.5% to 31.7% (p<0.001), among all tested. The total number of RR-TB cases was similar. Thirdly, the proportion of RR-TB patients that started second-line treatment increased by 19.2%, from 39.3% to 58.5% (p<0.001), and the proportion without information on treatment initiation declined by 19.4%, from 52.4% to 31.0% (p<0.001). Medical staff confirmed that most patients without treatment information had been contacted multiple times without response and had likely not started second-line treatment. No significant difference was found in the proportion of RR-TB patients that died, refused, returned to their local clinic, were lost before treatment start or were not eligible for second-line treatment. Out of the 101 Xpert RR-TB patients that did not start treatment, 55 (54.5%) had a follow-on culture result. Among them, 43 (78.2%) were culture positive for TB, and of those 38 (88.4%) were DST RIF resistant. Delays in detection and treatment of RR-TB were compared among those tested with culture and DST in Year 1 versus those tested by Xpert in Year 2. One month after initial registration of a person at risk of MDR-TB, 6% of patients diagnosed with conventional DST and 42% of those diagnosed with Xpert had been started on second-line treatment (Fig 2A). Expressed as median time delays, the time from presumptive client registration to diagnosis was reduced by 74 days, to a median of a single day (IQR 0–4), and the time from diagnosis to treatment start was reduced by 27 days, to a median of 15 days (IQR 7–51) (Fig 2C).
In Year 2, 70.8% (549/775) of Xpert TB positives and 57.4% (105/183) of Xpert TB negatives received culture and DST workup (Table 3). Concordance between the two diagnostic approaches was 65.0% (425/654) for detecting TB and 89.3% (300/336) for detecting RIF resistance. Notably, 38.2% (210/549) of Xpert TB positives tested culture negative, and among them 61.0% (128/210) had a negative smear microscopy result. This rate was highest among patients who had no smear conversion after three months of Category 1 or 2 TB treatment or who failed Category 1 treatment; in other words, patients who were recently treated (Table 4). Further, 18.1% of Xpert TB negative patients tested culture positive for TB.

Table 3. Culture and drug-susceptibility testing results following Xpert MTB/RIF testing within the same individuals at risk of multidrug-resistant TB in three provincial hospitals in Java, Indonesia.

Discussion

Xpert was introduced in Indonesia with the main aim of shortening the time to diagnosis and treatment of MDR-TB patients. This study found that the post-intervention time to treatment was greatly reduced, by almost 2.5 months, likely as a result of a shorter testing time with Xpert than with culture and DST. Also, it took less time to send Xpert results back to clinicians, because the test was done in the PMDT hospital laboratory itself, while culture and DST were performed in a reference laboratory located some distance away from the clinic. Another important finding was that after Xpert was introduced, a considerably larger proportion of RR-TB patients started second-line MDR-TB treatment. It is probable that this was an effect of the more rapidly available Xpert result. Nevertheless, the treatment initiation rate remained below 60%. Some of the missing patients could have returned to local clinics and remained on first-line treatment, while others could have been switched from second- to first-line treatment on the basis of follow-on culture and DST results. These patients should have been classified as lost or referred, but this was often not recorded. This reflects an urgent gap between diagnosis and treatment and the need to strengthen patient registration, follow-up and monitoring alongside the introduction of Xpert. This finding adds to recent concerns about patient management after Xpert diagnosis expressed in global guidance [3]. Concordance between Xpert and culture as a diagnostic workup post-intervention was limited due to a substantial proportion of Xpert TB positive, culture TB negative test results. This may represent the occurrence of false-negative culture results caused by unviable bacilli in sputum samples as a result of over-decontamination or delays in transportation or inoculation. Laboratory staff confirmed that delays in culture inoculation after receiving samples were a problem throughout Year 2, due to a shortage of staff in the reference laboratory serving two of the three sites. Similar problems have been reported previously [14] and could also explain why the TB positive rate of Xpert in Year 2 was higher compared to that of culture in Year 2 but the RIF resistance rate was not. Overall agreement between Xpert and phenotypic DST for detection of RIF resistance was good. In some cases, Xpert could have been false-negative for RIF, as it has a sensitivity for RIF resistance of 95% [4]. Xpert false-positive RIF resistance results could also have occurred, as reported by previous studies [15,16].
However, recent work showed that over 10% of RIF resistance mutations detected by molecular methods like Xpert may not be detected by phenotypic DST, especially not with liquid culture, while they do result in low-level resistance and worse outcomes on first-line TB treatment for previously treated patients [17,18]. This means that the one-tenth of Xpert RIF resistant patients that were DST RIF sensitive could have been truly RIF resistant. DNA sequencing techniques would help clarify discordance between both diagnostic methods in the future. This study was unique in that it evaluated the effects of Xpert within a strictly programmatic setting: besides the physical placement of the new test, revision of registers and training of medical staff, national TB guidelines were unchanged. However, a programmatic intervention study involves inherent limitations. The main limitation was that the pre-post intervention design might have introduced selection bias and performance bias. Immediately and consistently after Xpert was introduced almost 50% more individuals at risk of MDR-TB were tested than the year before. Clinicians confirmed that the excitement of a new rapid test led to more attention for PMDT and increased referral of presumptive MDR-TB patients, which may have influenced the type of individuals being sent for testing as well as the treatment decision-making process. Although no significant shift in MDR-TB risk groups was observed post-intervention, it is still possible that clinicians sent presumptive MDR-TB clients earlier for diagnosis, e.g. presumptive relapse patients at the first suspicion of TB symptoms instead of at a second or third patient visit. This would explain why the total number of RR-TB diagnosed cases remained similar in Year 1 and 2. Further, it cannot be excluded that the increase in treatment initiation rate and reduction in time delays were partly caused by clinicians tracing patients more actively to start treatment, in addition to the rapid turn-around-time of Xpert results. A second limitation was the large proportion of missing follow-on culture and DST results among Xpert-tested individuals in Year 2, which could have introduced partial verification bias. If culture was considered as the gold standard and we would correct for bias using a population TB prevalence of 54.6%, a possible overestimation of concordance to detect TB of 4.6% (60.4% vs. 65.0%) was found. In particular, the proportion of Xpert TB negative culture TB positive patients may have been underestimated by as much as 15.8% (33.9% vs. 18.1%). Missing culture results were due to delays in culture inoculation as mentioned above and results becoming available only beyond the period of data collection. Conclusions The results of this study indicate that the introduction of Xpert has helped to start more RR-TB patients on second-line treatment and initiate treatment sooner as compared to using conventional culture and DST. Fast turn-around of results likely resulted in less drop-out during the diagnostic process. In addition, clinician's treatment decision-making could have been positively influenced by the excitement of a new rapid test. Still, the overall proportion of RR-TB patients that started second-line treatment was low and merits special attention from the national TB control program. Proper engagement of clients at high risk of MDR-TB is essential to prevent pre-treatment loss to follow-up. 
Further analysis is needed to clarify the increase in TB case detection and decrease in RIF resistance using Xpert compared to culture and DST. In conclusion, Xpert holds promise to improve RR-TB case finding and treatment in Indonesia, but its implementation should coincide with improved patient management to optimize PMDT services. Supporting Information S1 Table. Diagnosis and treatment of individuals at risk of multidrug-resistant pulmonary TB tested with culture and drug-susceptibility testing in Year 1, and culture and drug-susceptibility testing or Xpert MTB/RIF in Year 2 at three provincial hospitals in Java, Indonesia. (DOCX)
Worlds apart! Environmental injustices in Mauritius, Peru and Sweden During the last few years, there has been a growing concern for environmental justice within international social work. This article connects to such concerns and aims to present and discuss environmental injustices faced by local communities in Mauritius, Peru and Sweden. Primary data were collected through face-to-face semi-structured interviews with a total of 25 key representatives of local communities in the three countries. Secondary materials were combined with the primary data in ATLAS-ti v.8.3 for a deductive critical discourse analysis. The findings describe the substantive, distributive and procedural environmental injustices faced by local communities in the three case studies. The article then considers the implications of the findings for international social work interventions in promoting environmental justice. The article concludes on the need for international social workers to continue their efforts and practices towards achieving environmental justice for all, in promoting global sustainable development. Introduction A decent biophysical environment is crucial for the well-being of all. Over the past few years, various forms of environmental injustices occurring almost all around the world have compelled social workers to reconsider their obligations and responsibilities towards biophysical environmental issues in promoting social justice (Dominelli, 2012). Consequently, there has been a growing environmental justice (EJ) movement within social work in various parts of the world and also at the global level. EJ requires the engagement and commitment of social workers towards achieving fairness and meaningfulness in people's interactions with their environment. The Global Agenda for Social Work and Social Development (hereinafter referred to as the Global Agenda) (International Association of Schools of Social Work [IASSW], International Council on Social Welfare [ICSW] and International Federation of Social Workers [IFSW], 2012), which is in line with United Nations' 2030 Agenda for Sustainable Development (hereinafter referred to as the 2030 Agenda), places particular emphasis on social work implications, commitments and interventions for EJ (Lombard, 2015). The commitment to promote the 2030 Agenda, through the Global Agenda, was taken in response to the profoundly unjust, unfair and above all unsustainable social, economic and political systems of the contemporary world that are posing a threat to the people and the planet (Truell, 2017). However, the SDG Index and Dashboards 2018 reports that 'no country is on track to achieve all the goals by 2030' (Sachs et al., 2018: viii). In pursuing the global responsibilities towards the 2030 Agenda, much more therefore remains to be done by international social workers. Healy (2008) defines international social work (ISW) as international professional social work practice and the capacity for international action by social workers. In particular, ISW promotes global and transnational knowledge, studies and experiences to foster social justice all around the world (Mohan, 2008). In this sense, international social workers act as citizens of the world in shaping an effective global practical response to global issues, such as EJ (Dominelli, 2011;Hawkins, 2010). As Erickson (2018) puts it, 'Social work believes in global equality, that is, in the right of all people of the world to share equally in Earth's bounty ' (p. 15). 
Environmental injustices are linked to structural inequalities, marginalities and vulnerabilities of poor and disenfranchised communities. In fact, environmental injustices exacerbate the vulnerable position of these communities, thereby reducing their capacity to mitigate and deter any risk arising from internal or external stressors (Kubanza and Simatele, 2016). Under such circumstances, international social workers are duty-bound by their ethical obligations to act on such global injustices (IASSW, ICSW, IFSW, 2018). According to Dominelli (2012), EJ compels social workers to tackle structural and individual forms of oppression that impact upon people and destroy the environment in the process of creating a privileged life for a few elites. This article presents three case studies on environmental injustices from three countries: Mauritius, Peru and Sweden. These three countries are worlds apart geographically, economically and culturally; however, they share a common challenge in terms of EJ. Neo-liberal economic models are exerting pressures on local communities in these countries. Through these conveniently selected case studies from Mauritius, Peru and Sweden, this article therefore considers the implications for ISW in promoting EJ in various parts of the world. Before presenting the findings and the discussion, the conceptual and theoretical framework is outlined, and the methodology used in gathering and analysing the data is explained. Conceptual and theoretical framework In this article, Bell's (2014) conceptual framework on EJ for cross-national analysis is used to structure and analyse the gathered data. Thereafter, Dominelli's (2012) green social work (GSW) theoretical perspective is applied in considering the implications of the findings for ISW interventions in promoting EJ. Bell (2014: 17-22) conceptualises EJ as having three overlapping dimensions: substantive EJthe overall quality of the environment; distributive EJ -the extent of environmental equalities; and procedural EJ -the fairness and inclusiveness of environmental decision-making. Substantive EJ is based on our fundamental functional right to enjoy a state of the environment that is pleasant and not harmful to the well-being of either humans or non-humans (Ako, 2013;Rambaree, 2017). Distributive EJ refers to the issue of equity, mostly in terms of the distribution of environmental burden, risks and benefits among different sections of the population. Finally, procedural EJ focuses on fairness, transparency and democratic participation in the process of decision-making related to the environment. Schlosberg (2013) opines that the conceptualisation of EJ has moved from being simply a general reflection of social injustice, to being a statement about the crucial nature of the relationship between the environment and the provision of justice itself. He therefore argues that EJ's theorisation is nowadays more focused on the material relationships between human disadvantage and vulnerability, and the condition of the environment and natural world in which that experience is immersed (Schlosberg, 2013). In this sense, contemporary discourses on EJ include arguments and issues related to the current unsustainable models of development, the unequal power dynamics within and across communities and states, as well as the unequal distribution of resources, which are central to the current global socio-economic systems of neo-liberalism (Dominelli, 2013;Kubanza and Simatele, 2016). 
Thus, the needs and rights of the poor, marginalised, disenfranchised and vulnerable communities are to be given due consideration by putting individuals and groups from such particular backgrounds at the centre in studying EJ. Within this context, Dominelli (2012) proposes GSW as an essential theoretical perspective and approach in social work. GSW obliges social workers to intervene for the cause of justice within its transformative politics and practice (Dominelli, 2012;Rambaree and Rock, 2018). In essence, GSW aims to protect the environment and enhance the well-being of individuals, groups and communities by analysing and addressing prevailing oppressions, structural inequalities and the unequal distribution of power and resources. In this way, by standing up for EJ, GSW further promotes the central identity of social work as a liberating and emancipatory profession (Rambaree and Rock, 2018). Methodology This study is based on primary data that were collected through face-to-face semi-structured interviews from Mauritius, Peru and Sweden. Secondary materials such as videographic and photographic evidence, as well as reports and documents related to the cases of injustices faced by the studied communities, were also gathered from the respective countries. The countries were chosen based on the researchers' convenience in having access to respondents for data collection. The cases were purposively chosen based on EJ issues and concerns that are frequently being raised in local media and EJ discourses in each of the selected countries. Data for Mauritius were collected from December 2017 to February 2018 through six interviews, each of about 45 minutes, in the Mauritian Creole language with representatives of community leaders from a local community. The interviewees were five men and a woman who had been representing community-based organisations and movements on behalf of the local residents. For the case in Sweden, three representatives -one woman and two men -from a Sámi reindeer herding community in the northern part of Sweden were interviewed. The interviewees had been active in organising protests against environmental injustices in the region. They were interviewed in the Swedish language. The interviews lasted on average 30 minutes each and were conducted during the month of December 2017. Data for Peru were collected through 16 interviews, each lasting for an average of 45 minutes. The interviews were conducted in the Spanish language with local community inhabitants in vulnerable areas of Lima during the period February to March 2017. The interviewees were 14 men and 2 women. They were purposively chosen from among those who had been raising their voices, through local organisations and media, on behalf of the local inhabitants, against environmental injustices in their locality. The gathered data were analysed with the help of ATLAS-ti v.8.3 to identify and select quotations related to environmental injustices. The selected quotations were coded into categories using Bell's (2014) Environmental Justice Indicator Framework (p. 31), with components such as substantive, distributive and procedural EJ. This framework was used for identifying and discussing environmental injustices in each of the cases (refer to Table 1). For each category that was created, a deductive critical discourse analysis was carried out with the help of ATLAS-ti (Rambaree, 2014). 
In particular, the analysis focused on coding and memoing meanings, motivations, ideologies and power related to the theorisation of environmental injustices. Some selected quotations from the gathered data are presented in the following section in support of the discussion. Thereafter, the analysis also drew on Dominelli's (2012) conceptualisation of GSW interventions and practices to consider the implications for ISW in promoting global EJ. In accordance with the Swedish law on ethics approval of research (SFS 2003:460), this study did not require ethics approval, since it did not involve any records of names or other details of the interviewed persons, or any details that could connect a specific person to a crime or illegal activity. General social research ethical guidelines as outlined by Hardwick and Worsley (2011) were carefully followed in the research process. The participants were informed about the purpose of the research, and only volunteers were recruited as respondents. Steps were also taken to ensure anonymity in reporting the findings from the case studies. The researchers also sought permission for using the secondary data, such as images, videos and unpublished documents and reports, from the respective 'owners' of the materials. Some of the secondary materials were provided to the researchers by the research participants. Mauritius: A coastal community from locality 'X' Mauritius is a Small Island Developing State (SID) located in the southern part of the Indian Ocean off the east coast of Madagascar. It has a land surface area of about 2,040 km² and a coastline of about 177 km. There is a growing concern in Mauritius regarding the negative impacts of climate change on coastal areas and communities (Ramessur, 2013). Mauritius does not have an indigenous population, in the traditional sense. Several waves of colonisation -Dutch, French and British -led Mauritius to become a multi-ethnic country. The Mauritian population consists of descendants of French settlers, African slaves, Indian indentured labourers and immigrant traders from China. Currently, the island has about 1.3 million inhabitants, and it has had an average annual economic growth of about 4 percent over the past few decades. The island receives about 1.2 million tourists per year. Tourism, together with the finance, service and textile sectors, is an important pillar of the Mauritian economy, while sugar production -the traditional pillar of the island's economy -is declining in relative importance (Svirydzenka and Petri, 2017). In order to deal with the decline in the sugar sector, in the year 2002, the then Government of Mauritius introduced the Integrated Resort Scheme (IRS), which was later somewhat modified and renamed as the Property Development Scheme (PDS) by the current government. The aim of these schemes has been to boost foreign direct investment in the tourism sector by allowing particularly the sugar estate owners to convert acres of agricultural land, mostly in coastal areas, into luxurious villas. Such resorts are being sold to foreign buyers through international estate agents and international property promoters for a minimum price of about US$500,000. In this article, a case of a PDS being planned in locality 'X' is presented and discussed. Locality 'X' is situated in a scenic location with clear turquoise sea. Locality 'X' is among the few beach areas in Mauritius where people can still find mangroves, and where no major concrete construction currently exists.
People from the neighbourhood describe the place as a heavenly space with tranquillity and calmness. Local people have been using the beach area for Sunday picnics with their extended families, which is a very common cultural practice in Mauritius. Moreover, key representatives from the local community report that some people use the place for spiritual prayers. One can find Christian and Hindu praying corners within the beach area. Locality 'X' therefore represents an existential and spiritual space for the local communities. Locality 'X' has always been accessible to the public, despite being surrounded by a private sugar estate. The private sugar estate acquired the land during colonial time for sugar cane/agricultural purposes. According to legislation referred as the 'Pas Géométriques Act of 1895', local inhabitants have been guaranteed public right to access the beach -at least 80 m counted from the high water mark (shoreline). The legislation stipulates that [t]he reserved lands along the sea coast commonly called the 'Pas Géométriques' and referred to in the Arrêté of Général Decaen of 5 May 1807, shall form part of the 'domaine public' and be inalienable and imprescriptible. (Ministry of Housing and Lands -Mauritius, n.d.: 1) Within this context, all the respondents state that the beach is a heritage that needs to be shared with all people in the community and should not become a property of a privileged elite. In a similar manner, some of the respondents have the following arguements: We come here to relax and have a good time with our families. Sitting by the beach under the trees, we forget about our stressful life . . . The way the promoters are doing, it will create massive inequality, frustrations, and injustices in our society. (Respondent X-5) This is a capitalist move, which will destroy our nature for creating their own wealth . . . they are stealing our beach from us . . . this (frustration) will lead to social problems such as thefts and drug dealings in this area . . . (Respondent X-1) The PDS project plan is to construct about 100 private luxury residential units in a land surface area of about 70,000 m 2 to be sold largely to foreign buyers. According to the interviewees, people from the neighbourhood are feeling threatened by the PDS project being planned in their locality. Over the past few years, all around Mauritius there has been growing public discontent regarding such types of property development programmes (Ramtohul, 2016). For instance, some sections of the Mauritian population are becoming increasingly worried about 'Mauritius being sold to foreigners', and representatives from coastal communities are anxious about the increasing number of privately owned gated communities that are blocking access to the beach for local people (Rambaree and Rock, 2018). Some respondents from the local community perceive that the promoters covertly started planning for the PDS project in locality X as early as a decade ago. They had observed several public rights of way being gradually blocked by huge rocks placed on the access road to locality 'X'. Respondents from the community make the following observations: . . . it is really sad what is happening here. Almost every Sunday, I bring my children to the beach here. I have been coming here for more than 45 years . . . I am very against such an approach . . . soon I will have to ask my children to swim in bath-tubs as there will be no beach left for us. . . . in the future our children will not have access to the beach . . 
. our heritage will become a property for the children of the ultra-rich . . . we are being pushed from our own local territories. (Respondent X-2) One of the interviewees from the region pointed out that the small-scale traditional fishermen and the small-scale recreational boat owners are being compelled to relocate towards a muddy coastal part of the locality X. Fishermen from the neighbouring areas have been using locality 'X' as an embarkation station for several decades. According to government records, there are about 20 registered small-scale traditional fishermen who are officially based at locality X. In addition, there are about four persons from the neighbouring areas who have tourist recreational boating licences from the government to operate at locality 'X'. According to the respondents, the promoters will put their own high-speed boats in the area, which will affect the ability of local people to continue using the space for earning a livelihood. Respondents from the community also feel that the promise of employment for the people in the resorts will be another form of oppression. They opine that people will not be directly employed by the resorts, as most of the jobs will be contracted out to agencies that may exploit people as cheap labour and have recourse to foreign workers from other countries such as Bangladesh and Madagascar. In this sense, they fear that the promoters of the PDS will obtain almost complete control of the ecosystem resources in locality 'X'. Two of the respondents argue as follows: They [the promoters] will take control of our livelihoods and exploit us. (Respondent X-4) Me, my father, my grandfather, we all have been fishermen based in this area . . . with this project we will not have access to the beach . . . this is our heritage . . . this project is a threat to our livelihoods . . . Another respondent states that . . . the promoters are luring the fishermen communities with a peanut sum as a compensation for giving away their fish landing station . . . many fishermen, who often have low level of education, will be tempted by the sum . . . the sum is nothing if you reckon in long term . . . In the long run, our marine ecosystem will be destroyed . . . The interviewees from the community unanimously voiced that the promoters have failed to have any consultation with the local people. The vast majority of the respondents from the local community state that they are not against 'economic and infrastructural development', but that they cannot accept injustices to people from the local communities within the process of such development. According to them, it was only in 2016 that some key representatives from the neighbouring communities were called to a presentation of the project as information sharing. The interviewees felt that the investors have avoided discussion related to many key issues, such as access to the 'Pas Geometric' and the negative impacts of such a project on the environment and the local communities. Consequently, some people from the local community have joined a national movement against beach grabbing in Mauritius. They are networking with several other national organisations to raise consciousness and advocate for sustainable development with respect for the ecosystem and the rights of local people. Locality 'X' is not an isolated case of environmental injustices in Mauritius. 
Currently, there are more than six cases of 'beach grabbing' in the country where local communities are trying to save their beaches from being taken by multinational companies for exploitation of ecosystem resources (see http://www.aknl.net/). Peru: The desert community from locality 'Y' Peru is located in South America and has a population of around 30 million. The country has previously been one of the poorest in Latin America, but through a recent economic boom, the national poverty rate has decreased from 50 percent in 2004 to around 24 percent in 2015 (Oxfam, 2015). Peru is one of the most biodiverse countries in the world (World Wildlife Fund [WWF], 2015). More than 65 percent of the country is covered by the Amazon rainforest, and the Andes mountainous region is home to more than 70 percent of the world's tropical glaciers (Takahashi and Martínez, 2017). The country also has a large area of coastal desert. Its capital -Lima -is one of the few megacities in the world that is located in a desert area. Lima's population has been growing from 600,000 inhabitants to 10 million inhabitants in the past few decades (García et al., 2014;Miyashiro, 2016). During the internal armed conflict period -from 1980 to 2000 -many people fled to Lima in order to escape from the violence and poverty that mostly affected people living in the countryside (García et al., 2015). Currently, almost one-third of the Peruvian population lives in Lima. In this article, a case study of a desert community from locality 'Y' in Lima is presented and discussed. During wintertime in Lima, the air is filled with fog, drizzling rain and humidity, and the sandy coastal hills are temporarily covered with native fauna (García et al., 2014). However, climate change is affecting Lima's ecosystems. Some research participants report that they can feel the effect of climate change in their locality. As respondent Y-8 puts it, 'during the summertime the heat is getting very strong and in the wintertime, the cold is getting too harsh'. Another major environmental problem faced by the local community is the lack of green spaces and trees. Consequently, many respondents mention that the area has severe dust and sand problems. According to a recent study by the World Health Organization, Lima has the worst air pollution of all Latin American cities (Peruvian Times, 2014). Thousands of people are dying because of the pollution level and lack of green spaces in Lima (Jolly, 2015). A vast majority of the respondents from locality 'Y' blame the nearby cement factory for aggravating the dust and causing respiratory problems. For instance, one of the respondents opines, It is a cement company that pollutes the environment with dust and powder of cement. All of our cloths are covered with that dust, gets dirty. People here are getting skin and allergy problems. (Respondent Y-10) Locality 'Y' is not adequately planned for either human settlement or urban growth. The uncontrolled urbanisation in this particular desert region of Lima has further contributed to pollution and loss of biodiversity in the area. Such conditions have also significantly increased challenges for the population to adapt to the effects of climate change, such as increased flooding, landslides, lack of water, and heat waves. In addition, the local inhabitants are also vulnerable to earthquakes. 
The housing areas in the coastal desert of Lima are most often divided into groupings of 30-40 families that are brought together into an 'asentamiento humano' (human settlement) (García et al., 2015). Such forms of human settlements on the fragile ecosystems in the coastal hills contribute to problems in the natural equilibrium. It increases human-induced pressures and decreases the vegetation that is needed to stabilise the soil for preventing landslides and mudslides. One respondent voiced his concerns: I am worried because, at this place here, we don't have trees . . . we breathe a lot of powder from the sand and the industry. In the winter time this place turns into mud and sludge. This place can bring infections. It is also an area of high risk in terms of earthquakes and landslides. The coastal desert communities are disproportionately exposed to higher levels of environmental risks compared to the wealthier areas of Lima (García et al., 2014). They are overcrowded and difficult to access and often they lack accessibility to water, drain/sewage systems and electricity. It is estimated that almost a million residents in Lima live without access to running water (Ioris, 2016), and most of them are from the coastal desert areas. The lack of adequate basic utility services -waste collection and disposal, and in some cases even accessibility to and from alleysfurther problematise the situation for the inhabitants in locality 'Y'. Access to water is a major cost and challenge for many. As one respondent reports, [w]e get water from a water company, before we had it in tanks and then we paid much more, because every bucket costed 5 soles (about 1.5 USD) for 2-3 days. But now, when we have water we are trying to decrease our use, to use it as little as we can . . . because of the problems of the landslides and flooding we have to pay much more for the water but this month we have used much less. Over the years, the local inhabitants in locality 'Y' have adapted to the areas by developing solidarity and obtaining support from various governmental and non-governmental organisations. Although there are some representations of the people in various different voluntary organisations, the demands of local inhabitants are often ignored by authorities and powerful industries like the nearby cement plant. As some of the respondents point out, . . . of course the situation here is not good. We have planted some trees but not all of them survived. There are only few left. We asked the municipality to plant trees, but there is still no response from them. When I was leader of the community, we sent a lot of proposals to the municipality for green areas, also a programme to have roads and footpaths, but we did not get, they ignored it . . . the garbage truck doesn't pass . . . (Respondent Y-10) As Dominelli (2012) puts it, 'industrial pollution and environmental degradation will be detrimental to the development of any community' (p. 82). Sustainable community development requires urban/spatial planning with consideration of the needs of all members of the community, and special consideration for those who are poor, marginalised and vulnerable. Local authorities need to take the politics of obligations within the frame of environment ethics and citizenship seriously (MacGregor, 2011). In this case study, the local inhabitants are clearly living in a hazardous environment. 
They are exposed to unequal risks and benefits in their local environmental area, and they feel powerless and voiceless vis-a-vis the authorities that are supposed to care and protect them. In this endeavour, social workers can play a vital role in assisting, lobbying and advocating for those disenfranchised within the process of 'development'. Sweden: Sámi indigenous community from locality 'Z' In Northern Scandinavia, some communities of Sámi people live in close connection with the nature -within a mesh of complex socioecological systems, linking humans, nature and animals (Sametinget, 2016). The Sámi population in Sweden is about 20,000 in number. They gained recognition as indigenous people in Sweden in the year 1977. The Sámi communities, as indigenous people, have long been struggling for their land, self-determination and traditional and cultural rights (Dominelli, 2012;Persson et al., 2017). For instance, the International Labour Organisation (ILO) convention number 169 that protect the rights of the indigenous people has still not been ratified by Sweden (Bell, 2014). As an indigenous group, Sámi communities have the right to preserve and develop -among others -their language, reindeer husbandry, traditions and identity. The Sámi Parliament reports that some laws in Sweden, such as the Mineral Act (1991: 45), actually protect the rights of the mining companies to exploit land more than they protect the Sámi lifestyle and culture (Sametinget, 2014). There is a common belief among the interviewees that the right to decide about land use historically falls on the Sámi people, and the right to affect land use, which is decided by the central government in Sweden, is in some cases detrimental to the survival of the Sámi culture and brings more vulnerabilities to their living and working conditions (Persson et al., 2017). This study considers EJ in the case of the Sámi community in locality 'Z', which is located in the northern part of Sweden. Reindeer as an animal and reindeer husbandry as an enterprise represent icons of Sámi culture and identity (Silvén, 2014). The reindeer herders from the Sámi communities have been using large land areas in the Northern part of Sweden, Finland, Norway and Russia for their livelihoods for over thousands of years. Various forms of oppression and injustices faced by the Sámi communities, such as their forced assimilation into the mainstream culture and threat to their traditional livelihoods, have been identified as an indigenous social work issue (Laitinen and Väyrynen, 2016). Currently, Sámi communities in locality 'Z' (like in other parts of the North) are facing vulnerabilities with the effect of climate change. The respondents report that they are regularly experiencing unpredictable changes in the climate. The winter is getting unusually warm or it is snowing more than it usually does. The effects of climate change are making reindeer herding difficult and stressful. The annual success of the livelihood of the reindeer is to a large extent dependent on the amount and quality of snow during winter (Turunen et al., 2016). Reindeer typically feed by digging into the snow and grazing on lichen during the wintertime. Climate change is creating a crust of hard snow, which makes the reindeer unable to reach the lichen they eat during wintertime. Lichen not only provides a good source of nutrition to the reindeer during the winter, but it also plays a beneficial role in the digestive system of these animals (Storeheier et al., 2002). 
Within this context, one of the respondents explained the following: I lost almost half of my reindeers. They starved to death. It was snowing unusually much, and we were in an area that did not have much lichen on the trees. One of the participants (Respondent Z-23) explained that they could see on the reindeer in the autumn if it has been a very hot summer. During a hot summer, the reindeer do not get enough to eat in order to create the necessary fat that is needed for facing the cold wintertime. This makes the animals less likely to survive during harsh winters. It is not only the warmer winters that are affecting the reindeer but also restricted access to pasture areas. During hot summers, reindeer are herded towards pasture areas located at higher altitudes. Land usage by others (non-Sámi) in Sápmi (the traditional Sámi land) for other purposes, such as mining and forestry, is causing a gradual decline in and access to pasture areas and food sources for the reindeer. The participants are worried and anxious about the impact of the mining being planned in the region on their livelihood and lifestyle. As one of the respondents states, [a]ll the transport vehicles for the mining industries will have to go through our Sámi community. This will affect us a lot . . . the reindeer herders have been in this area for more than thousands of years. Suddenly, other industries come and take over this place, and displace the Sámi people. The municipality in the region wants to have mining activities in the area because they believe it will generate work opportunities. Similar to corporations, local municipalities often carry forward the business discourses driven by a profit-focused worldview (Persson et al., 2017). However, the Sámi communities are arguing that they need support for their traditional livelihood and occupation, and not employment in multinational companies that are going to exploit them and most likely damage the ecosystem resources of the region. One of the participants talked about the protest against the mining being planned in the area: In particular, the respondents demand that both the forestry and mining industries must have consultation with the Sámi communities before any environmental action in their areas. However, they feel that their wishes (or rather their needs) are not taken into consideration in the decisionmaking processes concerning their working and living environments. Most of the time, the Sámi communities are just being informed. In this connection, respondents report the following: It feels like there is someone above my head who decides about my life. It feels like I get run over, no one listens to us. They pretend to listen, but they do as they want anyway. An increasingly warm climate and changing landscapes due to mining can damage the cultural heritage of indigenous people (Weaver, 2014). Non-indigenous people might not be affected by human intrusions to the same extent as Sámi people who are dependent on their land, and the distribution of access to safe working and living conditions is unequal compared to other groups in the society. Persson et al. (2017) opine that the perceived environmental conflict can be viewed as part of a larger struggle over social status and recognition of a minority indigenous group. 'For many years indigenous peoples, their needs, rights, culture and identity have either been neglected or eliminated altogether' (Szpak, 2017). Persson et al. 
(2017) therefore argue that the promotion of the mining business by state actors is yet another way of marginalising Sámi people, as it serves to introduce competing land uses at an increasingly rapid pace in the era of neo-liberalism. Discussion In all three cases -Mauritius, Peru and Sweden -there are indications of environmental injustices against local/indigenous communities. In Mauritius and Peru, some social workers are directly involved with local community movements fighting against environmental injustices. However, in Sweden, social workers play almost no active role in supporting EJ for the Sámi community. The role of structural social work (including community-based social work) has been marginalised and neglected in Sweden since the end of the 1980s (Sjöberg and Turunen, 2018). From the analysis of the gathered data, it could be said that all three countries could benefit from ISW intervention and support. Before embarking on the implications of the findings for ISW, a summary of the environmental injustices identified in the three cases is presented in Table 1. The case studies highlight that the ecosystem resources and services are gradually being centralised in the hands of powerful industries. As a result, there are growing concerns and emerging conflicts between local communities and industrial groups -such as the resorts industry in Mauritius, the cement industry in Peru and the mining and forestry industries in Sweden. Industrial expansion is commonly seen as a necessity for job creation and economic growth; however, it appears to contribute to environmental injustices often driven by the requirements of maximisation of profits (Bell, 2014). In a similar manner, authors from different parts of the world, such as Shajahan and Sharma (2018), Philip and Reisch (2015), Dominelli and Ku (2017) and Coates and Gray (2012), have all argued that the roots of environmental injustices in various parts of the world can often be traced to global economic systems. Moreover, the burden of environmental risks and degradation is gradually being shifted onto, or is already borne by, ordinary local people/communities whose livelihoods are already being threatened by the effects of climate change (such as fishermen in Mauritius, micro-scale business owners in Peru and reindeer herders in Sweden). In this sense, ISW needs GSW strategies on community mobilisation for demanding distributive EJ from government authorities and corporate bodies (as outlined by Dominelli, 2012). Among others, there is an urgent need for GSW interventions in the case from Lima in order to establish/improve access to basic facilities such as water and sanitary services, for the preservation of the natural flora and fauna and the creation of more green spaces, and for reversing the migration of people towards the cities by improving rural living conditions. GSW interventions can be used to focus on people's substantive environmental rights, such as freedom from environmental degradation that is threatening livelihoods and wellbeing. Resource mobilisation and advocacy strategies are also essential in such tasks. The voices of the local communities are often ignored or marginalised. Within the decision-making process, the say of the more powerful corporate actors and public authorities dominates. Vulnerable communities seldom have a say in how development initiatives are executed, even though they bear the consequences of environmental impact on their immediate working and living environment (Dominelli, 2013).
The authorities, who are supposed to look after the interests of all, are often guided by the hope and promise of economic growth and employment. In all three case studies, interviewees agree on the need for economic investment, but they all demand a fair and just form of industrialisation. Often, local communities have to protest against unfairness and their exclusion from environmental decision-making processes. In this sense, there should be respect for procedural EJ. Addressing environmental injustices also requires ISW to engage with multinational corporations and bring the voices of the vulnerable, marginalised and disenfranchised communities to them. In particular, sustainability and corporate social responsibility hold a prominent place within the agenda of most multinational corporations. At the global level, the United Nations' Global Compact, the ISO 26000 and the Global Reporting Initiative are some of the frameworks that can be used by international social workers to check the reliability and validity of monitoring and reporting of fair, ethical, honest and transparent approaches to doing sustainable business. International social workers need to become acquainted with such frameworks in their advocacy work for sustainable development. For instance, with reference to the global sustainability reporting frameworks, international social workers can highlight the lack of accountability/misreporting among multinational corporations in national and international fora (Dominelli, 2012, 2013). In this endeavour, international social workers can liaise with global social work and social development organisations -such as the International Federation of Social Workers, the International Association of Schools of Social Work and the International Council of Social Welfare -to create pressure on multinational corporations to respect and work towards sustainable global business practices. However, it is to be noted that often global pressure increases only ceremonial commitment from companies, suggesting a pattern of organised hypocrisy, whereby discursive commitments are not followed by subsequent actions (Lim and Tsutsui, 2012). ISW, as a practice-based profession, can therefore make valuable contributions in partnership with multinational corporations and local communities through genuine, honest and concrete actions for sustainable development around the world. As Ross (2013) puts it, EJ cannot be achieved without open dialogue between the 'oppressed' and 'the oppressors' (p. 201). Similarly, Dominelli (2011) suggests that social workers need to establish dialogue with policy makers and use the media to change unsustainable policies at local, national and international levels. In this sense, international social workers can make valuable contributions by bringing their skills and competencies as mediators/negotiators into practice for achieving global sustainable development. Before moving to the conclusion, it is important to consider and discuss certain limitations of this study. In terms of methodology, the study is confined only to certain particular target groups from the purposively chosen communities. For instance, the community in Mauritius also includes members who, for various reasons, are supporting the PDS. The findings and discussion therefore need to be generalised with a certain degree of caution.
In addition, GSW can be one of the suitable frameworks for ISW intervention; however, the possibilities for local social workers to get involved in GSW differ to a large extent. For instance, in Sweden, the state's support regarding community work within public bodies has been reduced, and private market-based mechanisms are arranged to deal with the needs and well-being of local communities (Sjöberg and Turunen, 2018). Conclusion This study highlights how poor, disenfranchised and marginalised communities are being pushed towards more environmental vulnerabilities, risks and disasters. Their basic rights to have a say in the process of national and local levels of decision-making are being ignored or marginalised. Throughout the world, the dominant global model of economic development based on neoliberal capitalism that prioritises profit-making for a few over the fulfilment of the basic needs and rights of the many is being questioned (Dominelli, 2013). In this sense, GSW provides a suitable framework for ISW interventions such as through advocacy, lobbying, pressure groups, resource mobilisation, building community resilience and collective empowerment. International social workers through GSW interventions need to act as guardians of people, the planet and prosperity as part of their overall 'responsibilities' towards global sustainable development. All over the world there are cases where the fundamental rights of local communities to enjoy a healthy environment are being violated (Bell, 2014). Meeting the goals of the 2030 Agenda therefore requires transformative societal changes where the input of international social workers is necessary. Through the Global Agenda, the global communities of social workers, as well as social work educators and researchers, have therefore rightly committed themselves to support, influence and promote global initiatives aimed at achieving environmental justice and sustainable societies (IASSW, ICSW and IFSW, 2012).
2020-03-12T10:16:13.694Z
2020-03-05T00:00:00.000
{ "year": 2020, "sha1": "a1d6b49ce4045c102b45f4cbb7a88553d52e165f", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0020872819889391", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "703beb533448b5871d230952a1b32107942f41d2", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
219421639
pes2o/s2orc
v3-fos-license
Flexural Exanthema From Enfortumab Vedotin Urothelial malignancies are commonly treated with platinum-based therapies. Newer trials have tested antimitotic therapies such as enfortumab vedotin as a viable treatment for refractory malignancy. Enfortumab vedotin targets nectin-4, a member of a family of calcium-dependent, immunoglobulin-like adhesion molecules found in adherens junctions and expressed in various epithelial malignancies, including bladder, breast, lung, ovarian, head/neck, and esophageal cancers. We present a case of a patient with symmetrical drug-related intertriginous and flexural exanthema secondary to enfortumab. He was successfully treated with topical corticosteroids. Cutaneous toxicity appears to be a common adverse reaction in this growing class of antibody-drug conjugates. Introduction Enfortumab vedotin is an antimitotic antibody-drug conjugate that inhibits microtubule assembly [1]. It is currently approved to be utilized in urothelial carcinomas, ovarian cancers, and non-small cell lung cancers [2]. Common toxicities that have been attributed to enfortumab include fatigue, peripheral neuropathy, skin rashes, gastrointestinal issues, and hematological suppression [3]. We present a case of a patient with symmetrical drug-related intertriginous and flexural exanthema secondary to enfortumab. Case Presentation A 64-year-old male with metastatic urothelial cancer presented to the emergency department with complaints of multiple areas of swollen, erythematous patches in the bilateral armpits, groin regions, elbow folds, and dorsal aspects of the feet. The patient had been started on a new treatment with enfortumab vedotin approximately one month earlier. He had received a total of five doses, with the last treatment received five days earlier. He denied any fevers, chills, nausea, vomiting, or diarrhea. He stated that the erythematous patches had started two days earlier, sudden in onset, in his right axillary region, and by the end of the day they had appeared at all the other sites (Figure 1). FIGURE 1: Bilateral flexural exanthema of feet The erythematous patches started swelling and caused him burning pain. The patient has baseline peripheral neuropathy from previous carboplatin-induced toxicity. The patient's labs showed a white count of 9,820 cells/uL, a platelet count of 203 K/uL, and a normal comprehensive metabolic panel. Procalcitonin and lactic acid levels were obtained and were negative for active infection. The patient was started on diphenhydramine and triamcinolone 0.1% cream. Over the span of seven days, he started feeling relief and the rash dissipated. His oncologist noted significant improvement of the urothelial cancer with enfortumab treatment. Since the patient had resolution of the rash, enfortumab was resumed at a 20% dose reduction for a span of three weeks. Over the three-week period, he did not have recurrence of the flexural exanthemas. Discussion Nectin-4 belongs to a family of calcium-dependent, immunoglobulin-like adhesion molecules located in adherens junctions. These molecules are expressed in various epithelial cancers such as bladder, breast, lung, ovarian, oropharyngeal, and esophageal cancers. Enfortumab vedotin is designed to act on nectin-4 to disrupt the mitotic process [1]. Phase 1 data for enfortumab vedotin in the treatment of metastatic urothelial carcinoma have shown promising results, but have noted treatment-related adverse events such as rash, nausea, and decreased appetite [4].
Skin reactions, such as symmetrical drug-related intertriginous and flexural exanthemas, constitute a grade 3 or grade 4 reaction. Data have shown that these reactions can occur in 52%-54% of patients on the medication, but the data do not delineate the duration prior to reaction. Some can progress to bullous dermatitis, exfoliative dermatitis, and/or palmar-plantar erythrodysesthesia. The median time to onset of skin reactions has been estimated to be one month. Of patients who experienced rash, nearly two-thirds experienced complete resolution and approximately one-fifth experienced partial improvement [3]. As per guidelines, the use of topical corticosteroids and antihistamines is warranted. Withholding the medication until symptom resolution is also recommended. Conclusions Enfortumab vedotin is a newer antimitotic agent being used to treat urothelial malignancies. As is the case with other chemotherapeutic agents, dermatological side effects can arise. This case elucidates the potential flexural exanthemas that can result from the medication. These reactions should be treated with steroids and withholding of enfortumab vedotin. Re-initiation of treatment should be done with careful monitoring, as these benign exanthemas can progress to more complex issues such as Stevens-Johnson syndrome. Additional Information Disclosures Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2020-05-21T00:09:13.356Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "7ead15cba3459d011262e786f15fc4545d83db78", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/30758-flexural-exanthema-from-enfortumab-vedotin.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "75685528dc330c7f1d83cd4c060a21b12b6e7e59", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253831502
pes2o/s2orc
v3-fos-license
LIFRNet: A Novel Lightweight Individual Fish Recognition Method Based on Deformable Convolution and Edge Feature Learning: With the continuous development of industrial aquaculture and artificial intelligence technology, the trend towards automation and intelligence in aquaculture is becoming more and more obvious, and the speed of the related technical development is becoming faster and faster. Individual fish recognition could provide key technical support for fish growth monitoring, bait feeding and density estimation, and also provide strong data support for fish precision farming. However, individual fish recognition faces significant hurdles due to the complexity of the underwater environment, the high visual similarity of individual fish and the real-time aspect of the process. In particular, the complex and changeable underwater environment makes it extremely difficult to detect individual fish and to extract biological features. In view of the above problems, this paper proposes an individual fish recognition method based on a lightweight convolutional neural network (LIFRNet). This proposed method could extract the visual features of underwater moving fish accurately and efficiently and give each fish unique identity recognition information. The method proposed in this paper consists of three parts: the underwater fish detection module, the underwater individual fish recognition module and the result visualization module. In order to improve the accuracy and real-time availability of recognition, this paper proposes a lightweight backbone network for fish visual feature extraction. This research constructed a dataset for individual fish recognition (DlouFish), and the fish in the dataset were manually sorted and labeled. The dataset contains 6,950 images of 384 individual fish. In this research, simulation experiments were carried out on the DlouFish dataset. Compared with YOLOV4-Tiny and YOLOV4, the accuracy of the proposed method in fish detection was increased by 5.12% and 3.65%, respectively. Additionally, the accuracy of individual fish recognition reached 97.8%. Introduction With the continuous development of aquaculture and the expansion of the farming scale, the old farming model of relying on experience, great effort and the weather has become increasingly inappropriate for the needs of current agricultural production and management. The aquaculture industry is gradually transforming from extensive production to factory, large-scale and intelligent production. Because of the continuous expansion of the aquaculture scale and aquaculture categories, it is of great significance to effectively obtain and analyze some important information generated in the production process, which is also very important for reducing the risk of aquaculture, improving the economic benefits of enterprises and reducing the labor intensity of employees.
In the process of fishery production, an accurate and real-time grasp of the body length, weight information, health status and behavior status of farmed fish could provide key analysis data for bait feeding, water quality management, disease control, feed formula management, etc., and then provide important data support for production management decision making. However, the acquisition of these data requires the realization of the individual recognition of fish. This means that, on the basis of the identified fish species, it is necessary to further determine the individual fish in order to bind the growth characteristics of the individual to the specific individual. To achieve this goal, people have tried a variety of different methods, including placing tags on fish bodies, using RFID technology and implanting tracking devices. However, these methods are highly invasive, which could cause great damage to individual fish and have high implementation costs, so these methods are difficult to popularize widely. There is no doubt that non-contact biometric feature extraction is an ideal way to solve individual fish recognition. Due to its poor robustness and generalization, traditional computer vision technology could not meet the actual production needs of individual fish recognition. However, deep learning and artificial intelligence technology have developed rapidly in recent years, and their combination with many fields has shown great technical advantages and application value, which fully proves their effectiveness and practicality. The use of artificial intelligence and deep learning technology has provided good applications in the fields of attitude estimation, object detection, autopilot and target recognition, which makes it possible to use it to solve the individual recognition problem of fish. In particular, large-scale deep learning algorithms have achieved high accuracy in face recognition and have been widely used. With the deepening of research, we find that there are many challenges in solving the problem of individual fish recognition by using computer vision technology. Unlike other fields, accurate and real-time individual fish recognition has its own characteristics. It is not feasible to apply the mature technology for individual fish recognition directly. Underwater real-time individual fish recognition faces many new situations and challenges (Figure 1):
Figure 1. Individual fish images taken from underwater. They contain extreme occlusion between individuals, chromatic aberration variation, a complex undersea environment and a high degree of visual similarity among fish species. 1. Complex underwater environment: The primary difficulty of the underwater recognition task is that individual fish recognition needs to deal with the complex and changeable underwater environment. Due to poor underwater lighting conditions, compared to the data obtained under normal conditions, the quality of underwater video and image data is not high. In addition, some water bodies have large chromatic aberration changes, turbid water quality and the interference of many non-fish targets such as algae. This brings difficulties to effectively obtaining the visual characteristics of fish, so it is a great challenge to accurately recognize individual fish. 2. Serious occlusion between fish: The majority of underwater fish activity occurs in groups. Many fish swim fast and are small in size, and the individual fish shield each other. However, individual recognition needs to accurately separate individual fish from other individual fish and the surrounding environment, and then extract the visual feature information of the torso. Therefore, it is very difficult to effectively extract the visual feature information of a fish torso between severely occluded individual fish. 3. There is a significant visual similarity between each fish: Different fish species have unique visual characteristics. However, the visual differences between individual fish are very small and the visual similarities are strong. Some individual fish are challenging to distinguish directly with the naked eye, so the recognition algorithm must precisely capture the tiny visual variations between the individuals. Therefore, this paper proposes an individual fish recognition method (LIFRNet) based on a lightweight convolution neural network. The main contributions of this paper are as follows: 1. in the fish detection part, the CBAM attention mechanism module is added to greatly improve the accuracy of object detection at the cost of a small number of parameters by focusing on the two dimensions of channel and space; 2.
in the fish recognition part, we use the combination of a 1 × 1 convolution and BN layer to learn the edge features of fish, use deformable convolution, which is more suitable for the swimming posture of fish, and use the Mish activation function instead of ReLU to obtain a smaller intra-class distance and a larger inter-class distance; 3. we collect and label the individual fish recognition dataset (DlouFish), which contains a total of 6,950 images of 384 individual fish numbered by individual, to facilitate the feature extraction, training and prediction of underwater individual fish using deep neural networks. The structure of this paper is as follows: the second part of the paper mainly introduces the related work on individual fish recognition, and the third part mainly introduces our proposed lightweight convolutional neural network for individual fish recognition. The fourth part mainly introduces the results of the simulation experiments, and the fifth part summarizes the proposed work and prospects for future work. Related Work With the wider application of animal individual recognition, animal individual recognition has also attracted more and more attention from experts and scholars, and has become a hot topic of research in academia and industry. The characteristics of different animals are analyzed and studied in a targeted manner, different methods are designed to realize the individual recognition of animals and different datasets are also constructed to train and test the individual recognition algorithms. In order to fill the gap in the northeast tiger dataset, Li et al. [1] published a dataset of over 8000 video clips of 92 northeast tigers in 2019. Based on this dataset, Liu et al. [2] used the pedestrian re-recognition method, extracted features from various body parts of tigers, and used a partial pose-guided global learning approach to complete the re-recognition of the northeast tiger. In order to increase the precision and speed of cow detection in actual production scenarios, Xu et al. [3] used a facial recognition framework combining Retinaface and ArcFace. The automatic cow recognition technique proposed by Li et al. [4] used the Zernike matrix as a cow feature extractor, followed by linear discriminant analysis on the collected features and a support vector integration class approach for individual cow recognition. Ghosh et al. [5] analyzed the performances of various deep CNN-based models using an identical set of hyperparameters trained end-to-end on a pig breed dataset and a goat breed dataset, respectively. The experimental results showed that MobileNetV2 was the best deep CNN model for goat breed classification and InceptionV3 was the best model for pig breed classification. In 2016, Villon et al. [6] analyzed and experimented with two different methods of deep learning and SVM classifiers to detect and identify fish, and discussed their advantages and disadvantages. Facts have proved that with the improvement of computer computing power and the arrival of the age of big data, the use of deep learning for fish detection and classification has become a major trend. Tamou et al. [7] used the pre-trained AlexNet network to extract features from the foreground fish images of an available underwater dataset and then used an SVM classifier to classify. The convolutional neural network AlexNet is combined with transfer learning to realize the automatic classification of fish species. An accuracy of 99.45% was obtained on the Fish Recognition Ground Truth dataset. Blount et al.
[8] proposed the Flukebook platform in the field of underwater species recognition, which combines photo-identification algorithms with data management and infrastructure for whales and dolphins. Flukebook was trained on 15 different species to form 37 species-specific recognition processes, and was then applied to cetacean photo recognition through ongoing collaboration between computer vision researchers, software engineers and biologists. By enhancing ResNet, enhancing the feature information output of identified objects and improving the utilization of feature information, Zhao et al. [9] proposed a composite fish detection framework based on composite backbone networks and enhanced path aggregation networks. Nixon [10] proposed a neural network capable of identifying, categorizing and counting 11 fish species to track the reproductive activity of fish populations, using YOLOv4 and Darknet as the infrastructure and architecture. Based on the spot features on shark skin, Arzoumanian et al. [11] developed a shark feature recognition library and employed feature matching for individual shark recognition.

In contrast to terrestrial animal recognition, underwater species recognition makes it more challenging to train a high-performance recognition model because of the noisy nature of underwater imagery. To address these issues, Kaur et al. [12] proposed the Atmospheric-Light-Enhancement (ALE) algorithm, which includes a preprocessing step for underwater images that acts on the intensity, contrast and sharpness of the object to improve visualization quality. In order to train precise deep neural network fish recognition models from noisy large-scale underwater photos using adaptive perturbation methods on adversarially perturbed images, Zhang et al. [13] introduced AdvFish, a deep adversarial learning framework. Deep et al. [14] employed single-image super-resolution approaches to tackle the issue of limited discriminative information in low-resolution images and used deep learning methods to explicitly learn discriminative features from relatively low-resolution images. Syreen et al. [15] proposed the Iterative Grouped Evolution Network (IGCN) to divide all candidate areas into fish and non-fish entities, with a hybrid fusion of optical flow and VGG16 at level one; on the LifeCLEF 2015 fish dataset, a detection accuracy of 94.05% was achieved. Villon et al. [16] used GoogLeNet to extract features and adopted the Softmax classification method to detect reef fish. Rauf et al. [17] proposed a CNN architecture containing 32 depth layers, which can better obtain valuable features from images to complete fish species recognition; a dataset named Fish Pak was also provided, including 915 images of six different kinds of fish. Xu et al. [18] applied the YOLO deep learning model to three different datasets recorded at real-world waterpower sites to identify fish in underwater videos, with a mAP score of 0.5392. Petrellis [19] proposed fish morphological feature recognition based on deep learning technology: first, the fish and background were separated by object detection and image segmentation, and then the size of the fish and the positions of key points were measured by aligning landmarks; the accuracy of fish size estimation was 80-91%. Rosales et al. [20] created a fish detector using Faster R-CNN to locate fish; the model achieved a mini-batch accuracy of 99.95 percent with an RPN mini-batch accuracy of 100 percent. Jalal et al.
[21] combined optical flow and a Gaussian mixture model with a deep neural network to obtain temporal information and identify fish moving freely against the background, thus proposing a method to classify fish in unconstrained underwater video. Classification accuracies of 91.64% and 78.8% were achieved for the LifeCLEF 2015 and UWA datasets, respectively. Hossain et al. [22] proposed an automatic monitoring system for marine organisms, which uses GMM background subtraction for detection and uses the Pyramid Histogram Of Visual Words (PHOW) feature with an SVM classifier for classification; on the CLEF 2015 dataset, still images can be classified with an accuracy of 91.7%. Ben Tamou et al. [23] used a transfer learning framework to propose a training loss curve method for targeted data enhancement. Additionally, Tamou proposed a hierarchical CNN classification method to classify fish first into family levels and then into species categories; on the LifeClef 2015 Fish dataset, 81.53% accuracy was achieved. Ben Tamou et al. [24] achieved 81.83% accuracy on the LifeClef 2015 Fish dataset again. They used a new strategy of incremental learning to train the network: first, they learned difficult species and then learned new species using knowledge distillation to complete the classification task of live fish species in underwater images.

However, the purpose of most existing underwater fish recognition work is to classify fish by species, not to accurately recognize different individual fish of the same species. At the same time, there are still some problems in the field of individual fish recognition, such as the lack of data resources and the large number of model parameters. In addition, the complexity and uncertainty of the underwater environment lead to a decrease in recognition accuracy.

In response to the above issues, a lightweight convolution neural network based individual fish recognition method (LIFRNet) is proposed. The schematic diagram of individual fish recognition is shown in Figure 2.
The Proposed Work

With the continuous development of computer vision technology, biometric technology has been widely used; in particular, face recognition technology has been widely applied in public security, cash payment and identity recognition, among other fields. However, research on the individual recognition of underwater fish is still emerging. Individual fish recognition differs greatly from other biometric recognition tasks. First of all, individual fish recognition is primarily conducted using underwater equipment. Because of the complexity of the underwater environment, the water quality is frequently poor, which results in low-quality image data and obscure fish biometric features and severely hampers the extraction of those features. Secondly, the biological characteristics of some species of fish show only very small differences, and the similarity between individuals is very high. The feature extraction method must capture the small differences between individuals to accurately distinguish them. Therefore, in view of the various problems in individual fish recognition, we must design individual recognition algorithms in a targeted manner.

Committed to solving the problems of difficulty in extracting biological features, high visual similarity between individual fish and high real-time requirements in individual fish recognition, this paper designs a lightweight underwater fish real-time individual recognition method: LIFRNet (Figure 3). LIFRNet consists of three parts, namely, the underwater fish object detection module, the individual recognition module and the visualization module. The underwater fish object detection module detects the fish in the data stream in real time and separates individual fish from the surrounding environment. The individual recognition module extracts the biological features of each detected fish and obtains a feature map with fish body features. The final visualization module uses the optimal weights obtained by multiple iterations of training to visually identify the fish school and output the results.
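To make the three-stage design concrete, the following is a minimal sketch of how detection, recognition and matching could be chained for one video frame. All of the names here (detect_fish, embed_fish, identify, the gallery dictionary) are illustrative placeholders and not the paper's actual API; only the cut-off distance of 1 and the 512-dimensional embedding follow the description given later in this section.

```python
import numpy as np

MATCH_THRESHOLD = 1.0  # the paper uses a distance of 1 as the same/different cut-off


def detect_fish(frame: np.ndarray) -> list:
    """Placeholder detector: returns bounding boxes (x1, y1, x2, y2)."""
    h, w = frame.shape[:2]
    return [(0, 0, w // 2, h // 2)]  # pretend one fish was found


def embed_fish(crop: np.ndarray) -> np.ndarray:
    """Placeholder recognizer: returns an L2-normalized 512-d embedding."""
    rng = np.random.default_rng(int(crop.mean()))  # deterministic per crop, purely illustrative
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)


def identify(frame: np.ndarray, gallery: dict) -> list:
    """Detect every fish in the frame and assign the closest enrolled identity."""
    labels = []
    for (x1, y1, x2, y2) in detect_fish(frame):
        emb = embed_fish(frame[y1:y2, x1:x2])
        # Euclidean distance to every enrolled identity
        dists = {fid: np.linalg.norm(emb - ref) for fid, ref in gallery.items()}
        fid, d = min(dists.items(), key=lambda kv: kv[1])
        # unrelated embeddings typically land around 1.4-1.7, well above the threshold
        labels.append(fid if d < MATCH_THRESHOLD else "unknown")
    return labels


frame = np.random.randint(0, 255, (416, 416, 3), dtype=np.uint8)
gallery = {"Fish_5_1": np.random.randn(512)}
gallery = {fid: v / np.linalg.norm(v) for fid, v in gallery.items()}
print(identify(frame, gallery))
```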
Underwater Fish Detection Module

After extensive experiments, we found that existing object detection methods are mainly designed to detect objects in normal environments. However, their results are not satisfactory for fuzzy, low-frame-rate and small objects, so these methods cannot be directly applied to the recognition of underwater individual fish. This paper therefore proposes the YOLO-CBAM method for underwater fish object detection. The proposed method uses YOLOV4-Tiny as the main framework and makes targeted improvements according to the characteristics of individual fish recognition. After many repeated experiments comparing it with the ordinary YOLO algorithm, we found that, although YOLOV4-Tiny has a lighter network structure, its detection accuracy drops significantly. In order to retain the advantages of the YOLOV4-Tiny network, which has a small number of parameters and a fast detection speed, and to further improve the detection ability of the network while keeping it lightweight, this paper further optimizes the backbone network of YOLOV4-Tiny according to the characteristics of individual fish recognition (Figure 4). The algorithm in this paper integrates the convolutional block attention module (CBAM) [25] into the object detection backbone network, so that the network can adaptively focus on the more important parts of the image and on fuzzy image data and thus learn the visual characteristics of underwater fish objects. The structure schematic of the CBAM module is shown in Figure 5.
The visual attention mechanism module used in this paper consists of two sub-modules, the channel attention module and the spatial attention module. The spatial attention module obtains weights by locating objects and performing some transformations, so it can find the most important parts of the image for learning. The channel attention module obtains the importance of each feature channel through modeling and can assign different weights to features according to different tasks. In this paper, the attention mechanism and YOLOV4-Tiny are organically integrated, which not only preserves the lightweight quality of the network model but also improves its accuracy to a certain extent. In the feature extraction network, CBAM is added to the two feature layers extracted by the backbone network and to the up-sampled result.

The implementation of the channel attention mechanism is divided into two parts, as shown in Figure 6. Firstly, global max pooling and global average pooling are performed on the feature layer, respectively, and the results are then processed by a shared fully connected layer. The two results are added and passed through the Sigmoid function to obtain the weight of each channel in the feature layer. Finally, the weights are multiplied by the original input feature layer to generate the input features required by the spatial attention module. The spatial attention mechanism takes the maximum value and average value across channels at each feature point of the input feature layer, stacks the two results and uses a 7 × 7 convolution to adjust the number of channels to 1. After processing by the Sigmoid function, the weights of the feature layer are obtained, and finally the weights are multiplied by the original input feature layer to obtain the final generated features (Figure 7).
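The channel and spatial attention described above can be written compactly in PyTorch. The sketch below is a minimal re-implementation based on the description in this section and on the original CBAM paper [25]; the reduction ratio of 16 and the 1 × 1 convolutions standing in for the shared fully connected layer are assumptions, not values reported by the authors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP (as 1x1 convolutions) applied to both pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))   # global average pooling branch
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))    # global max pooling branch
        return x * torch.sigmoid(avg + mx)            # per-channel weights


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)             # channel-wise average at each location
        mx, _ = x.max(dim=1, keepdim=True)            # channel-wise maximum at each location
        weights = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * weights                            # per-location weights


class CBAM(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))


# Example: refine one of the feature maps coming out of the YOLOV4-Tiny backbone
feat = torch.randn(1, 256, 26, 26)
print(CBAM(256)(feat).shape)  # torch.Size([1, 256, 26, 26])
```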
Figure 8 shows the results of the fish object detection of the method proposed in this paper. In Figure 8a, we found that the algorithm detected only two individual fish with YOLO-tiny, while all individual fish are detected by the proposed method.

Individual Fish Recognition Module

The effective detection of individual fish in images is the premise of individual fish recognition. After accurately detecting the individual fish in the images, effectively extracting the visual feature information of each fish is the core problem that needs to be resolved in order to achieve accurate individual fish recognition. The visual differences between individuals of the same species of fish are small, and the recognition network needs to accurately capture the subtle feature differences between individuals. In order to adapt to the characteristics of individual fish recognition, this paper designs a lightweight, deformable-convolution-based individual fish recognition network structure, as shown in Figure 9.
Among them, the function of the distance calculation is to compare the features of the input pictures to calculate the similarity of individual fish. The smaller the value obtained after the distance calculation, the higher the similarity between individual fish; the larger the value, the lower the similarity. After repeated tests with different input images, we found that the values between images of the same fish were much less than 1, while the values between different fish remained at 1.4-1.7, which is greater than 1. Therefore, we use 1 as the cut-off point to determine whether two images show the same individual fish. If the resulting value is less than 1, the model considers the input pictures to show the same individual fish; if it is greater than 1, it considers them different individuals. In this way, the task of recognizing underwater individual fish is completed.

Backbone Network

The discussion in this work is focused on making the model more lightweight while improving its capacity to identify fish bodies. We made three improvements to the MobileNetV1 [26] backbone network for the individual recognition module: the convolution kernel, the activation function and the average pooling layer. The improved backbone network is shown in Figure 10. A picture of size 112 × 112 × 3 is used as input, and a 1 × 1 × 512 feature vector is obtained after the neural network. The network has 29 layers: 14 layers of 3 × 3 deformable convolutions, 13 layers of 3 × 3 depthwise separable deformable convolutions, 1 layer of 1 × 1 standard convolution and 1 fully connected layer. In Figure 10, dark blue represents the normal deformable convolution and light purple represents the depthwise separable deformable convolution.
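For readers unfamiliar with the MobileNetV1-style building block referred to above, the following is a minimal PyTorch sketch of a depthwise separable convolution block using BN and Mish; the channel widths are illustrative assumptions, and the paper additionally replaces these 3 × 3 convolutions with deformable ones, a variant sketched in the Deformable Convolution subsection below.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableBlock(nn.Module):
    """MobileNetV1-style block: 3x3 depthwise convolution followed by 1x1 pointwise convolution."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.Mish(inplace=True),
        )
        self.pointwise = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.Mish(inplace=True),
        )

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


x = torch.randn(1, 64, 56, 56)
print(DepthwiseSeparableBlock(64, 128, stride=2)(x).shape)  # torch.Size([1, 128, 28, 28])
```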
Deformable Convolution

The trajectory and pose of a fish body swimming in the water vary depending on the characteristics of fish activity underwater, but the standard convolution kernel can only sample the input feature map at fixed positions, so it has weak generalization ability and adapts poorly to unknown changes.

This paper uses deformable convolution in place of standard convolution to address this issue. Deformable convolution differs from standard convolution by adding a direction parameter to each element, allowing the convolution kernel to be extended over a wider range during training. Instead of a regular convolution, a deformable convolution of size 3 × 3 is employed in this study [27]. This allows the convolution kernel to change its shape according to the actual situation and extract input information more effectively without increasing the number of parameters. The comparison of the sampling positions on fish body images after the addition of deformable convolution is given in Figure 11.
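As an illustration of how such a layer can be assembled, the sketch below uses torchvision's DeformConv2d, where a small auxiliary convolution predicts the per-position sampling offsets. This is a generic deformable-convolution block rather than the authors' exact implementation, and the channel sizes and zero-initialization of the offsets are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableConvBlock(nn.Module):
    """3x3 deformable convolution: sampling offsets are predicted from the input itself."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # 2 offsets (dx, dy) for each of the 3x3 = 9 sampling locations
        self.offset = nn.Conv2d(in_ch, 2 * 3 * 3, 3, stride=stride, padding=1)
        self.deform = DeformConv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.Mish()
        nn.init.zeros_(self.offset.weight)  # start out behaving like an ordinary 3x3 convolution
        nn.init.zeros_(self.offset.bias)

    def forward(self, x):
        offsets = self.offset(x)
        return self.act(self.bn(self.deform(x, offsets)))


x = torch.randn(1, 32, 56, 56)
print(DeformableConvBlock(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```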
Edge Feature Learning

Image edges are the sets of pixels around which the gray level changes discontinuously. Edges widely exist between objects and backgrounds and between objects, so they are important features for image segmentation, image understanding and image recognition. In the task of underwater individual fish recognition, changes in lighting and background environment often lead to occlusion and blurring of the body features of individual fish. In such cases, it is difficult to accurately complete the recognition task using only the main features of the fish body. At the same time, individuals of the same species usually have similar trunk characteristics, which also brings challenges to the recognition task.

By comparing a great number of fish pictures, we found that the mouth, tail, fins and other parts of the fish have distinctive characteristics. As shown in Figure 12, the trunk features of the two fish are similar, and the texture features of the mouth and tail play a key role in distinguishing the different individuals. Therefore, we believe that when body features cannot be used for effective recognition, learning the edge features of the fish body is particularly important.
In order to improve the network's ability to learn edge features, we added a 1 × 1 standard convolution and discarded the pooling layer commonly used in convolutional neural networks. Although the pooling layer has the advantages of preventing overfitting and downsampling, it reduces the network's ability to learn edge features. The reason is that, in a feature map, although the receptive fields of the center point and a corner point have the same size, the receptive area of the center point covers the complete information of the whole picture, while the receptive area of a corner point covers only part of the picture. In this case, the weight of each point should be different, but the pooling layer treats them with the same weight [28]. Therefore, when identifying fish bodies with fuzzy trunk features and similar texture features that can only be distinguished by detailed information, the disadvantages of the pooling layer are further amplified.

The 1 × 1 convolution was first used in the Network in Network technique [29], and its calculation method is the same as that of other convolution kernels; the only difference is the size. The authors concluded that the operation of 1 × 1 convolution + Relu can increase the nonlinearity of the network, thus improving the nonlinear fitting ability and the classification effect of the network without increasing the number of network parameters.

In this paper, a 1 × 1 convolution kernel with 512 output channels is added after the 7 × 7 convolutional kernel to replace the average pooling layer. Since the 1 × 1 convolution is performed across the channels, the correlation information between channels can be extracted. The 1024 channels are linearly combined into 512 channels, thus increasing cross-channel information interaction and reducing the computational load.

In addition, we added a BN layer and the Mish activation function. Using this combination, the advantages of the pooling layer in preventing overfitting are retained to the greatest extent, the model learning speed can be accelerated, and the nonlinearity can be greatly increased without losing the resolution of the feature map.
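A minimal sketch of such a head, which replaces global average pooling with a 1 × 1 convolution from 1024 to 512 channels followed by BN and Mish, is shown below. The 7 × 7 spatial size of the last feature map and the use of a 7 × 7 grouped convolution to collapse it to 1 × 1 are assumptions based on the description above, not details reported by the authors.

```python
import torch
import torch.nn as nn

# Head of the recognition backbone: instead of global average pooling, a 1x1
# convolution mixes the 1024 backbone channels down to a 512-d embedding.
head = nn.Sequential(
    nn.Conv2d(1024, 1024, kernel_size=7, groups=1024, bias=False),  # 7x7 -> 1x1 (assumed)
    nn.BatchNorm2d(1024),
    nn.Conv2d(1024, 512, kernel_size=1, bias=False),                # cross-channel mixing
    nn.BatchNorm2d(512),
    nn.Mish(),
    nn.Flatten(),                                                   # -> (N, 512) embedding
)

features = torch.randn(2, 1024, 7, 7)  # output of the convolutional backbone
print(head(features).shape)            # torch.Size([2, 512])
```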
Mish Activation Function

The generalization ability and adaptability of the network can be significantly enhanced with an appropriate activation function. Relu6 serves as the activation function in the conventional MobileNetV2 network to ensure good numerical resolution, even at low precision [30]. However, the issue of neuron death is not resolved by Relu or Relu6: the gradient of the function becomes zero when the input is negative, making it unable to learn using backpropagation.

In order to avoid such issues in the individual fish recognition network, the Mish function (a self-regularizing, non-monotonic function whose smoothness allows better penetration of information into the neural network, resulting in higher accuracy and stronger generalization) was adopted as the activation function in this paper instead of Relu6 [31]. The Mish function is given in Equation (1) and its curve is shown in Figure 13:

y = x · tanh(ln(1 + exp(x))) (1)

As seen in Figure 14, when the input value is negative, it is not truncated as with Relu and Relu6; instead, a small gradient is allowed to flow in order to ensure the flow of information, successfully resolving the issue of neuron death. The Mish function also avoids the gradient saturation issue because it is unbounded above. It was found that the Mish activation function is 0.494% better than Swish and 1.671% better than Relu.
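Equation (1) can be implemented and checked against PyTorch's built-in version in a few lines; this check is ours and is not part of the paper.

```python
import torch
import torch.nn.functional as F


def mish(x: torch.Tensor) -> torch.Tensor:
    # Equation (1): y = x * tanh(ln(1 + exp(x))) = x * tanh(softplus(x))
    return x * torch.tanh(F.softplus(x))


x = torch.linspace(-5.0, 5.0, steps=11)
print(torch.allclose(mish(x), F.mish(x)))  # True: matches torch's built-in Mish
```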
Loss Function

Traditional target recognition is usually treated as a classification problem: the samples are labeled with categories and the results are given by Softmax, with the Softmax loss [32] shown in Equation (2). However, as the dataset expands and the categories change, the model must be retrained. For this type of problem, Deng et al. [33] proposed the Arcface loss based on the Softmax loss to improve the inter-class separability while reducing the intra-class distance, as shown in Equation (3).

Specifically, the Arcface loss fixes the bias b in the Softmax loss to 0 and, by the dot product transformation, rewrites W_j^T f_i as ||W_j|| · ||f_i|| cos θ_j, where θ_j represents the angle between the weights W_j and the features f_i. After normalization makes ||W_j|| = ||f_i|| = 1, the normalized prediction depends only on the angle θ_j between the features f_i and the weights W_j. The features are then multiplied by a constant S, so that the learned features are distributed on a hypersphere of radius S. Finally, an angle penalty m is added to θ_j in order to increase the inter-class distance and reduce the intra-class distance.

In this paper, the Arcface loss is used as the loss function of LIFRNet, and the loss converges to 0.001 after 300 epochs.
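Because Equations (2) and (3) are not reproduced in the extracted text, the sketch below shows the standard ArcFace formulation from [33] as it is commonly implemented. The scale s = 64 and margin m = 0.5 are typical defaults and are assumptions, not values reported by the authors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ArcFaceLoss(nn.Module):
    """Standard ArcFace: add an angular margin m to the target-class angle, then scale by s."""

    def __init__(self, embedding_dim: int, num_classes: int, s: float = 64.0, m: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim))
        self.s, self.m = s, m

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # cos(theta_j) between each L2-normalized feature and each class weight
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the margin m only to the angle of the ground-truth class
        one_hot = F.one_hot(labels, num_classes=self.weight.shape[0]).float()
        logits = self.s * torch.cos(theta + self.m * one_hot)
        return F.cross_entropy(logits, labels)


criterion = ArcFaceLoss(embedding_dim=512, num_classes=384)  # 384 individuals in DlouFish
emb = torch.randn(8, 512)
labels = torch.randint(0, 384, (8,))
print(criterion(emb, labels).item())
```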
Real-Time Visualization Module

In the real-time visualization module, LIFRNet integrates the optimal weights of the detection and recognition modules to display individual fish information in real time. In fact, different sides of a fish body have different texture features. Therefore, we treat the two sides of the same fish body as two different fish when training the recognition module. In the visualization module, however, we want the same fish to have unique identity information. Therefore, we propose two solutions: (1) Cameras are installed on different sides of the water; each camera only recognizes fish swimming in the same direction, and the information is then summarized to obtain accurate fish information. (2) Recode the individual fish information, using the numbers 1 and 2 to distinguish the different sides of the same fish, so that the individual fish information is obtained through only one camera.

In actual underwater fish body recognition, we found that the fish swim over a wide range, their swimming posture is irregular and their swimming direction often changes, so we chose the recoding method to complete the visualization of fish body information. The rendering of the visualization module is shown in Figure 14, where a fish whose head points toward the right is coded 1, such as Fish_5_1, and a fish whose head points toward the left is coded 2, for example, Fish_13_2. The purpose of this is to prevent texture features from different sides from affecting the training of the model, and when the same fish swims past the underwater camera in different directions, we can still intuitively and accurately grasp the fish body information through the real-time visualization module.

In the process of aquaculture, the real-time visualization module helps aquaculture personnel pay attention to the individual information of fish at any time and adjust the aquaculture strategy according to the actual situation to achieve the goal of precision aquaculture.

After the detection and recognition stages, LIFRNet integrates the functions of the two modules to form a real-time visualization module, which can detect and identify individual fish in real time through underwater cameras. This enables aquaculture personnel to pay attention to individual fish information at any time and adjust their aquaculture strategies according to the actual situation.

The Dataset

In the field of underwater fish recognition, there is no publicly accessible dataset for individual fish recognition. Therefore, we developed a fish recognition dataset (DlouFish), as shown in Figure 15, through extensive collection and collation. The dataset consists of 6950 labeled individual fish photographs numbered according to the individuals. It contains 2100 images of koi, 1850 images of puffer fish, 1800 images of clown fish and 1200 images of grass carp. These images come from the internet and from our own photography. Considering that underwater reference objects are fuzzy, it was difficult to identify each fish body from the background environment alone. Therefore, we extracted frames from the videos, manually labeled the identity information of each fish body according to the continuity of the video, and built the dataset after shuffling.
We divided the dataset into a training set and a test set at a ratio of 9:1, which included different kinds of fish, such as koi with obvious patterns and puffer fish with high similarity. At the same time, the lighting conditions were quite different. The purpose of this was to improve the learning ability of the model during training and to verify the generalization of the model during testing.

In order to facilitate the analysis of the experimental results, we formulated naming rules for the data in the dataset. The numbering rule is individual fish number + image number. For example, the picture named "000101" represents the first image of the individual fish with ID number 1, and the image named "001111" represents the 11th image of the individual fish with ID number 11. The advantage of this rule is that we can intuitively judge whether the predicted individual fish is the same one by using the assigned number.

Experimental Setup

In this research, the experiments were conducted using the Pytorch framework under Ubuntu 20.04, and the computer GPU configuration was a GeForce RTX 3090Ti. The loss function was Arcface, the optimizer was Adam, the momentum was 0.9, the batch size was 64, the initial learning rate was 0.001 and the minimum learning rate was 0.0001. The algorithm evaluation metric was mAP, and the learning rate decay method was step, with 300 epochs of training.
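A minimal PyTorch sketch of this training configuration is given below. The step size and decay factor of the scheduler, and the mapping of the reported momentum of 0.9 onto Adam's first beta, are assumptions, since the paper only reports the initial and minimum learning rates; the model and criterion are stand-ins for the recognition backbone and Arcface loss defined earlier.

```python
import torch

model = torch.nn.Linear(512, 512)        # stand-in for the recognition backbone
criterion = torch.nn.CrossEntropyLoss()  # stand-in for the Arcface loss

params = list(model.parameters()) + list(criterion.parameters())
# Adam with the reported settings: initial lr 0.001, momentum 0.9 used as beta1, batch size 64
optimizer = torch.optim.Adam(params, lr=1e-3, betas=(0.9, 0.999))
# Assumed step decay: drop to the reported minimum of 1e-4 partway through the 300 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.1)

for epoch in range(300):
    # ... iterate over the DlouFish training loader, compute the loss, call loss.backward() ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```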
Performance Comparison of YOLOV4-Tiny Incorporating Different Attention Mechanisms

In this research, mainstream attention mechanism modules of recent years, including SE [34], ECA [35] and CBAM, were added to YOLOv4-tiny and compared with the traditional YOLOv4-tiny and YOLOv4 [36]. The experimental results (Table 1) show that, by incorporating an attention mechanism module, the accuracy of the model can be significantly increased. On our DlouFish dataset, compared with the traditional YOLOv4, the accuracy of YOLOV4-Tiny after CBAM fusion was improved by 3.65%, and the number of parameters was nearly 10 times smaller than that of YOLOv4. At the same time, among models with a similar number of parameters, the model we used performed best and achieved an accuracy of 88.6%.

This research used deep convolutional networks to learn the differences between fish visual features for individual fish recognition. Therefore, when the network predicts the same individual fish, the distance between the pictures should be as small as possible, that is, the similarity is high; when the network predicts different individual fish, the distance should be as large as possible, that is, the similarity is low. The performance of LIFRNet in recognizing different individual fish is shown in Table 2.

We adopted two different methods when using deformable convolution. The first one (Method 1) was to add a deformable convolution of size 3 × 3 after the 1 × 1 convolution, plus a BN layer with the Mish activation function, without changing the standard convolutions in the backbone network. The significance of this was to increase the number of network layers by adding a convolution and, at the same time, to let the activation function play a bigger role. The other method (Method 2) was to replace all 3 × 3 standard convolutions with 3 × 3 deformable convolutions. The experimental results of these two methods are shown in Table 3. The results demonstrate that the distance between distinct fish bodies can be enlarged by adding deformable convolution, and that this has a better overall effect than the other method, while the two are essentially identical when recognizing the same fish body. However, adding the convolution has the unintended consequence of dramatically increasing the number of parameters: the original network had 4,231,976 parameters, and the number of parameters obtained with this method was 55.75 percent higher.

In this research, we finally chose the method of using deformable convolution instead of standard convolution, which does not increase the number of parameters, for the following reasons:

1. YOLOV4-Tiny with a fused CBAM attention mechanism is utilized instead of YOLOV4 for object detection, as our goal is to create a lightweight solution for individual fish recognition.

2. While adding deformable convolution improves the effect, it is not particularly helpful for actual fish recognition. The reason is that we artificially set a threshold value when the network returns a prediction result, and when the distance is larger than the threshold value, the predicted image is deleted from the list of alternatives, which does not affect the recognition accuracy.

3. When recognizing the same fish, there is almost no difference in the effects of the two methods, which means that when the distance is less than the threshold value, the effect of the two methods on the recognition accuracy is equal.
Analysis of Experimental Results in Different Background Environments

In this experiment, we performed background elimination on different pictures of individual fish. In this way, we tested the effect of the background environment on the similarity distance when the model recognizes different individual fish of the same category. The experimental results are shown in Figure 16.

The experimental results showed that the similarity distance of individual fish changed only slightly when we performed background elimination on one of the pictures. The similarity distance became smaller, but the change in value was extremely small, which means that the difference in background environment has only a slight effect on the recognition ability of the model. When we eliminated the backgrounds of both pictures at the same time, the similarity distance was almost the same as the value without background elimination. This indicates that the difference in background environment color had only a minimal effect on the model. The model focuses more on extracting texture features from the individual fish than on learning features of the background environment.

Analysis of Experimental Results under Different Backbone Networks

In the following part of our research, we used Resnet50 [37], Iresnet50 [38], Mobilefacenet, MobilenetV2 and our proposed method for experiments. The experimental results are shown in Tables 4 and 5. It can be seen that the distance of our method is smaller when testing the same fish; the average distance decreased by 0.284. Additionally, the distance is larger when recognizing different individuals. In addition, the Resnet50 network, with six times more parameters than ours, has 5.86% lower accuracy than our method. On the DlouFish dataset, our Acc 1 reached 97.8%.
Conclusions In this paper, we proposed a lightweight algorithm for individual fish recognition that can lessen the negative effects of fish swimming irregularly and the complex underwater environment.We also constructed and labeled a fish recognition dataset (DlouFish), which contains 6950 images of 384 fish and is numbered by the individual, to fill the dataset gap in the field of underwater live fish recognition.The experimental results demonstrate that the algorithm suggested in this study performs both fish detection and fish recognition tasks with considerably higher accuracy and is capable of handling the underwater fish recognition challenge.We will keep working on underwater object detection in our upcoming studies and enhance the performance of the model in more difficult environments. Figure 2 . Figure 2. Schematic diagram of individual fish recognition. Figure 2 . Figure 2. Schematic diagram of individual fish recognition. Figure 6 . Figure 6.Schematic diagram of the structure of the channel attention model. Figure 7 .Figure 8 . Figure 7. Schematic diagram of the structure of the spatial attention model. Figure 9 . Figure 9. Schematic diagram of the network structure of the individual fish recognition module. Figure 6 .Figure 6 . Figure 6.Schematic diagram of the structure of the channel attention model. Figure 7 .Figure 8 . Figure 7. Schematic diagram of the structure of the spatial attention model. Figure 7 . Figure 7. Schematic diagram of the structure of the spatial attention model. Figure 6 . Figure 6.Schematic diagram of the structure of the channel attention model. Figure 7 .Figure 8 . Figure 7. Schematic diagram of the structure of the spatial attention model. Figure 9 . Figure 9. Schematic diagram of the network structure of the individual fish recognition module. Figure 8 . Figure 8. Performance of individual fish detection with CBAM: (a) is the performance of normal detection method; (b) is the performance of detection method with CBAM. Figure 6 . Figure 6.Schematic diagram of the structure of the channel attention model. Figure 7 .Figure 8 . Figure 7. Schematic diagram of the structure of the spatial attention model. Figure 9 . Figure 9. Schematic diagram of the network structure of the individual fish recognition module.Figure 9. Schematic diagram of the network structure of the individual fish recognition module. Figure 9 . Figure 9. Schematic diagram of the network structure of the individual fish recognition module.Figure 9. Schematic diagram of the network structure of the individual fish recognition module. Figure 10 . Figure 10.The network architecture is used for the recognition network.Each intermediate tensor is labeled filter size, channels and stride.Activation layers and batch normalization layers are inserted after each convolution but are not pictured here. Figure 10 . Figure 10.The network architecture is used for the recognition network.Each intermediate tensor is labeled filter size, channels and stride.Activation layers and batch normalization layers are inserted after each convolution but are not pictured here. Figure 11 . 
3.2.3. Edge Feature Learning

An image edge is a collection of pixels around which the gray level changes discontinuously. Edges widely exist between objects and backgrounds, and between objects; they are therefore important features for image segmentation, image understanding and image recognition. In the task of underwater individual fish recognition, changes in light and background environment often lead to occlusion and blurring of the body features of individual fish. In such cases it is difficult to complete the recognition task accurately using only the main features of the fish body. At the same time, individuals of the same species usually have similar trunk characteristics, which also brings challenges to the recognition task. By comparing a great number of fish pictures, we found that the mouth, tail, fins and other parts of the fish have distinctive characteristics. As shown in Figure 12, the trunk features of the two fish are similar, and the texture features of the mouth and tail play a key role in distinguishing different individuals. Therefore, we believe that when body features cannot be used for effective recognition, the learning of fish body edge features is particularly important.

Figure 11. Standard convolution and deformable convolution: (a) is standard convolution and (b) is deformable convolution. The circle in the figure represents the change of the convolution range.
Figure 12. The fish with similar body features and distinct edge point features.
Figure 15. Example images of the DlouFish dataset: (a) different individual fish; (b) the percentage of the number of various classes in the dataset.
Figure 16. Experimental results of different background environments.
Table 1. Accuracy comparison of the incorporation of different attention mechanisms.
Table 2. The effects of different stages of improvement on the recognition effect.
Table 3. The influence of deformable convolution on individual fish recognition.
Table 4. Comparison of fish distance performance in different networks.
Table 5. Performance comparison of different networks.
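Figure 11 and Table 3 refer to deformable convolution, which lets the sampling grid of a convolution shift to follow irregular contours such as fish edges. The following is a minimal, hedged sketch of how such a layer can be wired up with torchvision; the channel sizes and the pairing with a plain offset-prediction convolution are illustrative assumptions, not the authors' exact module.

```python
# Minimal sketch of a deformable convolution block (cf. Figure 11 and Table 3).
# A small conv predicts per-position sampling offsets; DeformConv2d then samples
# the input at those shifted locations, which can help follow object edges.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position.
        self.offset_conv = nn.Conv2d(in_ch, 2 * kernel_size * kernel_size,
                                     kernel_size=kernel_size, padding=kernel_size // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=kernel_size,
                                        padding=kernel_size // 2)

    def forward(self, x):
        offsets = self.offset_conv(x)          # (N, 2*k*k, H, W)
        return self.deform_conv(x, offsets)    # (N, out_ch, H, W)

# feat = torch.randn(1, 64, 56, 56)
# print(DeformableBlock(64, 64)(feat).shape)   # torch.Size([1, 64, 56, 56])
```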
2022-11-24T16:06:01.293Z
2022-11-22T00:00:00.000
{ "year": 2022, "sha1": "eb7a71db92dc1dd95ecc8a3e5eca3f2f39047230", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0472/12/12/1972/pdf?version=1669274191", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4c0e5a4f2bafa644eb8eeb1c1325521d446a6e01", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
259778770
pes2o/s2orc
v3-fos-license
Randomized evaluation of an online single-session intervention for minority stress in LGBTQ+ adolescents Background LGBTQ+ youth face myriad adverse health outcomes due to minority stress, creating a need for accessible, mechanism-targeted interventions to mitigate these minority stress-related risk factors. We tested the effectiveness and acceptability of Project RISE, an online single-session intervention designed to ameliorate internalized stigma and improve other outcomes among LGBTQ+ youth. We hypothesized that youth assigned to RISE (versus a control) would report significantly reduced internalized stigma and increased identity pride at post-intervention and at two-week follow-up and would find RISE acceptable. Methods We recruited adolescents nationally through Instagram advertisements in May 2022 (N = 538; M age = 15.06, SD age = 0.97). Participants were randomly assigned to RISE or an information-only control and completed questionnaires pre-intervention, immediately post-intervention, and two weeks post-intervention. Inclusion criteria included endorsing: (1) LGBTQ+ identity, (2) age 13–16, (3) English fluency (4) Internet access, and (5) subjective negative impact of LGBTQ+ stigma. Results Relative to participants in the control condition, participants who completed RISE reported significant decreases in internalized stigma (d = −0.49) and increases in identity pride (d = 0.25) from pre- to immediately post-intervention, along with decreased internalized stigma (d = −0.26) from baseline to two-week follow-up. Participants rated both RISE and the information-only control as highly, equivalently acceptable. Conclusions RISE appears to be an acceptable and useful online SSI for LGBTQ+ adolescents, with potential to reduce internalized stigma in both the short- and longer-term. Future directions include evaluating effects of Project RISE over longer follow-ups and in conjunction with other mental health supports. Introduction LGBTQ+ youth routinely face discrimination and stigma related to LGBTQ+ identity (e.g., violence, victimization), which increases their risk for experiencing mental health problems compared to their cisgender, heterosexual peers (Chodzen et al., 2019;Guz et al., 2021;Marshal et al., 2011;Plöderl and Tremblay, 2015). Minority stress theory (MST) serves as a framework for understanding how cisheterosexism-related experiences exacerbate these problems, such as depression, hopelessness, and self-hate, over time (Brooks, 1981;Chodzen et al., 2019;Fulginiti et al., 2021;Meyer, 1995;Nappa et al., 2022). MST posits several pathways through which chronic, identitybased stressors undermine LGBTQ+ health, including discrimination and stigma because of one's identity, internalization of stigmatizing attitudes towards one's own identity (i.e., internalized stigma), expectations of interpersonal rejection due to one's identity, and LGBTQ+ identity concealment (Hatzenbuehler, 2009). Growing literature has also suggested identity pride as a pathway to resilience in the face of minority stress, and the importance of increasing pride (Delozier et al., 2020). Potential targets of intervention in minority stress theory Although structural or systemic interventions (e.g., via legislation or policies) are needed to address the myriad mechanisms by which minority stress can undermine LGBTQ+ health, there are other mechanisms, including intrapersonal factors, which may be targeted via individual-level support. 
For instance, intrapersonal stressors such as internalized stigma may be more immediately modifiable than those related to structural or interpersonal levels, such as anti-LGBTQ+ policies or cultural attitudes. At the intrapersonal level, interactive interventions beyond didactic educational interventions remain largely understudied (Chaudoir et al., 2017;Mandel, 2014;Proujansky and Pachankis, 2014). Internalized stigma has been minimally addressed in evidence-based therapies for LGBTQ+ youth, with most LGBTQ+tailored interventions being designed for sexual minority adult men and young adults (Israel et al., 2019;Layland et al., 2020;Lin and Israel, 2012;Pachankis et al., 2020). As such, it remains critical to gauge whether targeting modifiable intrapersonal factors, such as internalized stigma, can help improve mental health and well-being for LGBTQ+ youth. Accessibility of interventions and single-session interventions There has been a longstanding dearth of accessible, evidence-based mental health supports for LGBTQ+ youth (Salerno et al., 2020). In addition to traditional barriers to healthcare (e.g., logistical, financial), LGBTQ+ youth experience additional barriers (e.g., familial rejection, invalidation; Roe, 2017) as well as a lack of affirming care options (Salerno et al., 2020). Affirming, evidence-based mental health supports for LGBTQ+ youth must be made broadly, easily accessible. Digital single-session interventions (SSIs) are well-positioned to provide scalable, low-cost, effective tools for improving short-term risk factors for adverse health outcomes such as hopelessness, perceived control, and agency, in addition to longer-term well-being, as evidenced through reductions in symptoms related to depression, anxiety, and trauma (Schleider and Weisz, 2018). SSIs targeting effective coping strategies have demonstrated acceptability and effectiveness in LGBTQ+ adolescents nationwide . Additionally, online SSIs are particularly well-suited to increase access to affirming, evidence-based care for LGBTQ+ youth, as they are self-guided, not location-dependent, relatively brief (e.g., 15-25 min), and eliminate premature treatment dropout problems. Previous research suggests that these SSIs are equally acceptable and helpful for LGBTQ+ and cisgender/heterosexual youth (McDanal et al., 2022). Building on existing online SSIs for LGBTQ+ adults which address factors related to understanding and coping with minority stress and its sequelae (e.g., Israel et al., 2019;Israel et al., 2021a;Israel et al., 2021b), the current study aimed to establish the efficacy of an online SSI designed specifically for LGBTQ+ youth. The current study We tested the immediate and 2-week effects of a novel online, selfguided SSI, 'Project RISE.' Drawing from best-practices in affirming treatment for LGBTQ+ populations (Pachankis et al., 2022), SSI design , and co-creation models of intervention development, we designed Project RISE to systematically reduce internalized stigma, an intraindividual, and potentially modifiable, facet of minority stress. 
We hypothesized that, compared to participants assigned to a minority stress information-only control condition, participants assigned to RISE would show: significantly reduced internalized stigma (co-primary outcome), increased identity pride (co-primary outcome), and hopelessness (secondary outcome) both immediately post-intervention and at two-week follow-up; as well as reduced self-hate at post-intervention and reduced depression symptoms at two-week follow-up (both secondary outcomes). We also hypothesized that participants assigned to RISE would rate the intervention as acceptable based on established user-feedback metrics for online SSIs. Method Study procedures were approved by the University of Denver Institutional Review Board (IRB) and were pre-registered on Open Science Framework (OSF; https://osf.io/es3zb). 3 Participants were recruited online through advertisements on social media (i.e., Instagram; see Supplementary Fig. 1) within a one-week period in May 2022. Inclusion criteria included: (1) LGBTQ+ identity, (2) Age 13-16, (3) English fluency, (4) consistent Internet access, and (5) positive endorsement (i. e., ≥1 on a zero to ten scale) to the question, "Has LGBTQ+ stigma had a negative impact on your life?" Individuals who did not meet inclusion criteria, who exited the study prior to condition randomization, or who did not meet quality-check criteria (i.e., participants marked as providing duplicate responses per IP address, marked at bots by Qualtrics' internal bot-detection software, and who failed an attention-check item in the screening survey; see CONSORT diagram in Fig. 1) were excluded from analysis. Given the minimal risk posed by this study, as well as the barriers associated with requiring parental permission (particularly for LGBTQ+ minors) to participate in research, a waiver of parental permission was obtained from the University of Denver IRB. At baseline (after completion of pre-intervention questionnaires), each participant was randomly assigned to RISE or to an informationonly control condition. Participants also completed questionnaires immediately post-intervention and two weeks later in an optional follow-up assessment. Participants earned $10 for completing each of two assessment surveys. Project RISE is a 20-30 min self-guided SSI explicitly targeting internalized stigma and minority stress reactions. RISE includes five general content sections: (1) an introduction to minority stress, privilege, and marginalization; (2) psychoeducation on the effects of minority stress; (3) stories from other youths about their experiences with minority stress; (4) interactive components wherein participants reflect on their identities and experiences with minority stress, identify related emotions and cognitions, and determine actionable, values-based needs; and (5) an exercise in which participants identify a coping statement to help them get through minority stress. Finally, participants receive an action card including their coping statement; their emotions, cognitions, and needs when minority stress arises; and strategies to act on their needs (see Supplementary Fig. 2). The SSI also includes an optional exercise where youths can share advice based on what they learned from the SSI. The full intervention is viewable here: https://osf.io/ktcd9. In the information-only control condition, participants received an illustrated document discussing the concept of minority stress. 
The document included an age-appropriate explanation of MST, open-ended questions to guide participants in reflecting on how the theory applies to their own lives, and links to additional online educational resources on minority stress. Regardless of condition, participants were provided with a comprehensive resource guide, and participants assigned to the control condition were informed that they would receive delayed access to RISE. Sexual orientation We assessed sexual orientation using the question, "How do you 3 Because the primary outcome in this study (internalized stigma) was not health-related, and because we were not focused on a high-symptom sample of youth, we did not register this study as a "clinical trial" on clinicaltrials.gov. All methods were pre-registered on OSF prior to enrollment of the first participant in the study. identify your sexual orientation? … Please choose which one best fits how you identify." Mutually exclusive response options were: heterosexual/straight, gay/lesbian/homosexual, bisexual, pansexual, queer, asexual, other, unsure/questioning, and "I do not use a label." Gender identity We assessed gender using the question, "What is your current gender? Check all that apply." Response options were: man/boy, woman/girl, transgender, female to male transgender/FTM, male to female transgender/MTF, trans male/transmasculine, trans female/ trans feminine, genderqueer, gender expansive, androgynous, nonbinary, two-spirited, third gender, agender, not sure, and other. Sex assigned at birth We measured sex assigned at birth using the question, "What sex were you assigned at birth?" Mutually exclusive response options were: female, male, intersex, other, and "prefer not to say." Racial/ethnic identity We measured racial/ethnic identity using the question, "How do you identify your race/ethnicity? Check all that apply." Response options were: American Indian or Alaska Native, Asian (including Asian Desi), Black/African American, Hispanic/Latinx, Native Hawaiian or Other Pacific Islander, White/Caucasian (non-Hispanic; includes Middle Eastern), and Other. Participants who indicated more than one response were coded as Multi-racial/Multi-ethnic. Internalized stigma Internalized stigma was assessed at pre-intervention, immediately post-intervention, and two-week follow-up using a modified version of the Lesbian, Gay, and Bisexual Identity Scale (LGBIS; Mohr and Kendra, 2011). The LGBIS is a 27-item self-report measure that assesses dimensions of lesbian, gay, and bisexual identity, yielding scores on eight subscales. We used the Internalized Homonegativity subscale, which comprises three items; participants rate their endorsement of these items on a 6-point Likert scale, ranging from 1 ("disagree strongly") to 6 ("agree strongly"), and an average score is calculated, ranging from 1 to 6, with higher scores indicating greater internalized stigma. For the purposes of this study, items were updated to include additional LGBTQ+ identities. Internal consistency was α = 0.85 at baseline, α = 0.89 at post-intervention, and α = 0.87 at two-week follow-up. Identity pride Identity pride was assessed at pre-intervention, immediately postintervention, and two-week follow-up using a modified version of the LGBIS (see above). 
Specifically, we used the Identity Affirmation subscale of the LGBIS, which comprises three items; participants rate their endorsement of these items on a 6-point Likert scale, ranging from 1 ("disagree strongly") to 6 ("agree strongly"), and an average score is calculated from 1 to 6, with higher scores indicating greater identity pride. Internal consistency was α = 0.91 at baseline, α = 0.92 at postintervention, and α = 0.92 at two-week follow-up. Hopelessness Hopelessness was assessed at pre-intervention, immediately postintervention, and two-week follow-up using the Beck Hopelessness Scale-4 (BHS-4;Perczel Forintos et al., 2013). The BHS-4 is a validated, abbreviated version of the original 20-item measure used to assess hopelessness in youth and has been included as an outcome in previous studies of SSI efficacy (e.g., Schleider et al., 2020Schleider et al., , 2022. Respondents indicate their current level of hopelessness through rating their agreement with four statements (e.g., "My future seems dark to me") on a 4point Likert scale (from 0 = "absolutely agree" to 3 = "absolutely agree"). An average score is calculated from 0 to 3, with a higher score indicating greater levels of hopelessness. Internal consistency was α = 0.84 at baseline, α = 0.87 at post-intervention, and α = 0.88 at two-week follow-up. Depression symptoms Past-week depression symptoms were assessed at pre-intervention and two-week follow-up using the Children's Depression Inventory, Second Edition: Self-Report Short version (CDI 2:SR[S]; Kovacs, 2015). The CDI 2:SR[S] is a 12-item reliable and valid measure for depression severity in youth which queries a range of depression symptoms, such as irritability and fatigue. Questions are phrased as "Pick the sentence that best describes the way you have been feeling for the past two weeks," and include a range of depression symptoms, such as irritability and fatigue. Response items for each symptom range in severity on a 3-point Likert scale ranging from 0 (e.g., "I am almost never cranky"), 1 (e.g., "I feel cranky many times"), and 2 (e.g., "I feel cranky all the time"). The CDI 2:SR[S] is measured continuously, and average scores range from 0 to 2, with higher average scores indicating more severe depression symptomatology. Internal consistency was α = 0.83 at baseline and α = 0.84 at follow-up. Anxiety symptoms Anxiety symptoms were assessed at pre-intervention using the Generalized Anxiety Disorder -7 (GAD-7; Spitzer et al., 2006). The GAD-7 comprises 7 items and asks participants to rate how frequently they were bothered by each item over the last two weeks on a Likert scale from 0 ("not at all") to 3 ("nearly every day"). Response options are on a Likert scale from 0 ("not at all") to 3 ("nearly every day"). The GAD-7 is measured continuously, and average scores range from 0 to 3; higher average scores reflect greater anxiety symptom severity. Internal consistency was α = 0.87 at baseline and α = 0.89 at follow-up. Self-hatred Self-hatred was assessed at pre-and post-intervention using an adapted version of the Self-Hate Scale (SHS; Turnell et al., 2019). The full SHS assesses an individual's level of self-hate over the past year by indicating agreement with seven statements (e.g., "I hate myself"). Our adapted version of the SHS comprised 3 of these 7 items (Schleider et al., 2020). Response options are on a 7-point Likert scale, ranging from 1 ("not at all true for me") to 7 ("very true for me"). 
Total scores are calculated using the average of all items, and range from 1 to 3 for this adapted version. The SHS is measured continuously, and higher scores indicate higher levels of self-hate. Internal consistency was α = 0.91 at baseline and α = 0.93 at post-intervention. Program feedback All participants were administered the Program Feedback Scale (PFS) at post-intervention (Schleider et al., 2019). The PFS assesses perceived acceptability and feasibility of the SSI by indicating agreement with 7 statements (e.g., "I enjoyed the activity") on a 5-point Likert scale from 1 ("really disagree") to 5 ("really agree"). The PFS is measured continuously, and average scores range from 1 to 5; higher average scores reflect greater perceived acceptability. Statistical analysis The RStudio Statistical Program was used for analyses (Allaire, 2012). All data and analytic code are available at https://osf.io/kxv4w. To gauge randomization success, we ran ANOVAs to test conditionbased differences in continuous variables (age and pre-randomization levels of identity pride, internalized stigma, hopelessness, self-hatred, depression, and anxiety) and Chi-Square Tests to test condition-based differences in categorical variables (gender, sex assigned at birth, sexual orientation, primary language, race, and grade in school). To examine the acceptability of Project RISE, we calculated itemlevel mean post-SSI scores on the Program Feedback Scale. Per previously-employed benchmarks in online SSI trials , a pre-registered cutoff of >3.5 on any given program feedback scale item was indicated to reflect endorsement of that item (i.e., positive feedback/adequate acceptability). To test our main hypotheses (between-group intervention effects), we conducted a series of regressions that examined whether Project RISE, relative to the information-only condition, led to differential changes in identity pride and internalized stigma from baseline to twoweek follow up (primary outcomes); hopelessness and depression symptoms from baseline to two-week follow up (secondary outcomes); and identity pride, internalized stigma, hopelessness, and self-hatred from baseline to immediate post-intervention (secondary outcomes). To examine the robustness of intervention impacts at two-week follow-up, we analyzed and compared regression results using three different approaches to handling missing data. We employed multiple approaches to missing data analysis in order to ensure robustness of results, given the unique strengths and weaknesses inherent to each approach. First, we imputed participant-level missing data using the expectation-maximization and bootstrapping algorithm implemented with Amelia II in R. We imputed as many datasets as there were percentage points of missing data, rounding to the next-highest percentage (for example, if 10.5 % of data was missing, we created 11 imputed datasets), which allowed us to retain high power despite missing data. Next, we ran two sets of completers-only analyses (these were not preregistered and were included post-hoc to gauge robustness of results). One set of analyses used listwise deletion for participants lacking followup data and the other used multiple imputation to estimate follow-up outcomes for all participants, but this latter method excluded those who did not complete their assigned intervention condition (either Project RISE or information-only control). 
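The between-group analyses were run in R, but the core model is easy to sketch. The snippet below is an illustrative Python analogue rather than the study's actual code: it regresses the follow-up score on condition while adjusting for the baseline score and converts the condition coefficient into a standardized effect size. The column names, condition labels, and data file are hypothetical.

```python
# Illustrative analogue of the between-group analysis (not the study's R code):
# regress the follow-up outcome on condition, adjusting for baseline, then
# express the adjusted group difference as a standardized effect size (d).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data, one row per participant, with condition coded
# as "control" or "rise" (so "control" is the reference level alphabetically).
df = pd.read_csv("rise_outcomes.csv")   # columns: condition, stigma_baseline, stigma_followup

model = smf.ols("stigma_followup ~ C(condition) + stigma_baseline", data=df).fit()
print(model.summary())

# Adjusted mean difference for RISE vs. control, scaled by an approximate
# pooled baseline SD (equal group sizes assumed for simplicity).
coef = model.params["C(condition)[T.rise]"]
pooled_sd = df.groupby("condition")["stigma_baseline"].std().pow(2).mean() ** 0.5
print(f"Approximate adjusted Cohen's d: {coef / pooled_sd:.2f}")
```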
In addition to between-group effects, we also computed within-group intervention effects via paired t-tests (these were added as exploratory, non-pre-registered tests to further characterize each condition's individual-specific impacts on outcomes of interest). We implemented several best-practices to prevent and exclude fraudulent participants (e.g., bots). These included several layers of external validation (checking for consistent email addresses, IP addresses, and/or false addresses or phone numbers; permitting only one survey per IP address; limiting survey access from non-US IP addresses); embedding CAPTCHAs into surveys (which bots cannot complete); including a screener to ascertain study eligibility prior to allowing participants to consent; employing bot-detection software built into Qualtrics; and excluding participants who provided nonsensical free-text responses within the single-session interventions.

Sample characteristics

A total of 538 participants were randomized to a condition. Fig. 1 demonstrates the flow of participants in the study, and Table 1 summarizes sample characteristics. Three participants preferred not to identify a race/ethnicity (0.56 %). No group differences emerged on demographic factors or baseline psychological outcomes, indicating successful randomization.

Intervention completion rates and acceptability

Among adolescents randomized to Project RISE, 87.02 % (N = 228) fully completed the intervention, 8.78 % (N = 23) dropped out prior to completing the psychoeducation, 3.05 % (N = 8) completed psychoeducation but dropped out prior to completing a personalized coping plan, and 1.15 % (N = 3) completed a personalized coping plan but dropped out before finishing the intervention in full. Both participants who completed the intervention and those who completed the active control found their assigned condition acceptable across all Program Feedback Scale (PFS) items, as indicated by item-level means above 3.5/5 for both conditions (Table 2).

Differential SSI and follow-up survey completion

Of the participants who were randomly assigned to an experimental condition, 59.85 % fully completed the baseline outcomes and the two-week follow-up survey. Participants in Project RISE completed the follow-up at a significantly lower rate (χ2(1) = 3.96, p = 0.04; 55.34 %) than participants in the information-only control (64.13 %), potentially because participants in the comparison group were told they would receive access to Project RISE after the completion of the follow-up assessment (though we could not test this prospect directly).

Hopelessness, self-hate, depression, and anxiety outcomes

Participants in Project RISE reported greater decreases in hopelessness from baseline to immediately post-intervention compared to participants in the control condition (t(466) = −3.92, p < 0.001, d = −0.36). However, in analyses using multiple imputation for missing data from completers-only, condition showed no effect on changes in hopelessness scores from baseline to follow-up (t(330) = 1.75, p = 0.08, d = 0.19; 95 % CI: −0.02, 0.41), and within-group effects (described below) suggested numerical decreases in hopelessness scores from baseline to two-week follow-up in both conditions. Analyses using multiple imputation for missing data from the entire sample (including individuals who did not finish the intervention) also suggested no differences in two-week hopelessness outcomes by condition (t(337) = 1.74, p = 0.08, d = 0.19; 95 % CI: −0.03, 0.40).
Using listwise deletion data from completers-only, there did not appear to be a significant effect of condition on anxiety scores at follow up (t(326) = 0.12, p = 0.91, d = 0.01; 95 % CI: − 0.20, 0.23). Analyses using multiple imputation for missing data from completers-only suggested similar findings (t(326) = 0.13, p = 0.09, d = 0.01; 95 % CI: − 0.20, 0.23). Analyses using multiple imputation for missing data from the entire sample (including individuals who did not finish the intervention) also suggested similar findings (t(333) = 0.01, p = 0.09, d = 0.001; 95 % CI: − 0.21, 0.22). However, within-group effects suggested decreases in anxiety for participants in both experimental conditions. There were statistically significant within-group reductions in anxiety among participants in the Project RISE condition (t (144) Discussion We evaluated a novel online SSI ('Project RISE') designed to combat minority stress by reducing internalized stigma in LGBTQ+ young people. Overall, RISE successfully improved post-intervention (internalized stigma, identity pride, hopelessness) and two-week (internalized stigma) outcomes, relative to a psychoeducational control. RISE was also acceptable to, and completed at a high rate (89 %) by, participants. RISE did not outperform the control in reducing secondary outcomes of interest at 2-week follow-up (hopelessness, self-hate, depression, and anxiety symptoms); instead, levels of secondary 2-week outcomes tended to improve in participants regardless of condition assignment. Overall, results suggest the promise of Project RISE to help mitigate internalized stigma in LGBTQ+ young people, at least in the short-term. Project RISE outperformed the control condition across all outcomes at immediate post-intervention and was rated as highly acceptable. At two-week follow-up, in both sets of analyses using completers-only data (but not in full-sample analyses, which included intervention noncompleters), youth assigned to Project RISE reported greater decreases in internalized stigma from baseline to two-week follow up. Betweengroup differences were attenuated at two-week follow-up for other outcomes, with no significant effects of condition (i.e., RISE vs. information-only control) on identity pride, self-hatred, depression, or anxiety symptoms. However, within-group effect sizes suggested significant improvements in identity pride, depression, anxiety, and selfhate across both conditions at two-week follow-up. The discrepancy between these non-significant between-group effects and these significant within-group effects across conditions may reflect the strength of our control condition (which included psychoeducation on minority stress theory). Notably, using listwise deletion data from completersonly, participants in Project RISE reported greater hopelessness scores at two-week follow-up compared to participants in the control condition, but this effect did not hold across the two other missing data approaches, and within-group effects suggested numerical reductions in hopelessness across both conditions. Reductions in hopelessness in this study were numerically larger than those observed in a recent study involving a high-symptom youth sample and non-internalized stigma focused online SSIs (d = 0.16-0.28; Schleider et al., 2022). While these findings do not necessarily index clinical significance, it is notable that improvements in hopelessness in this study were greater than for other SSIs with documented efficacy in reducing adolescent depression symptoms. 
Although the differences in outcomes between intention-to-treat and completers-only analyses were unexpected, our findings highlight the strength of Project RISE: participants who completed the intervention tended to benefit from it. Additionally, our findings also underscore potential processes related to attrition; namely, fewer participants who received the RISE than who received the control program completed the follow-up. This difference may be because these participants already received the intervention, and thus the immediate intervention-derived benefits may have reduced the perceived need to complete the followup. Conversely, participants in the control condition, who had not yet received the intervention, may have been more likely to complete the follow-up survey, as they were aware that the follow-up would include new resources and forms of support. Regardless, future research may examine ways to boost engagement with Project RISE among youths who access it, given benefits observed for full-completers. On the other hand, it is likely that individuals who only completed a portion of the SSI received an intervention that equated to higher similarity to the control condition, which comprised psychoeducation on minority stress, relative to those who completed the RISE SSI in full. Findings suggest that Project RISE may have utility in reducing internalized consequences of minority stress and increasing identity pride among LGBTQ+ young people. By mitigating many barriers that traditionally preclude LGBTQ+ young people from accessing mental health interventions, online SSIs have the potential to rapidly expand access to services for this population. Moreover, interventions providing psychoeducation on minority stress processes may help reduce the severity of minority stress consequences for LGBTQ+ young people. Immediate versus two-week outcomes Overall, analyses revealed within-group improvements across all outcome variables, both for participants who completed Project RISE and for those who completed the control condition. Both conditions were rated as highly acceptable by participants. Notably, both conditions included psychoeducation regarding minority stress theory, given the aforementioned ethical importance of sharing this information with participants and the methodological importance of using a strong control condition to test our intervention. These findings thus suggest that learning about minority stress theory, in and of itself, may be beneficial for LGBTQ+ young people. When outcomes were assessed immediately post-intervention, Project RISE outperformed the control condition on all measures. Some of these initial differences in outcomes may have been due to the interactivity of RISE (e.g., activities soliciting participant engagement and encouraging participants to tie content to examples from their own lives), compared to the more passive nature of the information-only control (although we could not test this possibility directly). However, at two-week follow-up, we found no significant differences by condition on measures of identity pride, self-hatred, or depression. Why might Project RISE's effects on internalized stigma, in particular, have persisted at two-week follow-up relative to the information-only control? One interpretation of these findings is that RISE targeted internalized stigma more directly than did the control condition. 
Education on the effects of minority stress may have challenged longer-term self-stigmatizing cognitions among participants who completed Project RISE by teaching participants that adverse consequences stemming from minority stress experiences are not their fault, but are to some extent within their power to mitigate (i.e., restoring a degree of agency, and doing so through narratives, which have been shown to be an impactful means of messaging, Schleider et al., 2020). Given this study's design as a randomized controlled trial with a robust control condition, the fact that differences still emerged speaks to the strength of the intervention and the efficacy of psychoeducation on minority stress processes for LGBTQ+ young people. Importance of waiving parental permission Adolescents self-referred into the SSI. There is existing legislation across many states in the US that enable adolescents aged ≥12 to consent to mental healthcare without a parent or guardian, such as to increase the likelihood of seeking and obtaining care among adolescents. Allowing adolescents to participate in minimal-risk online SSIs without requiring parental consent may be critical in increasing treatment access and use. This study demonstrates that adolescents can safely participate in online SSIs and that SSIs may improve both short-term and longerterm mental health-related outcomes. Requiring parent or guardian approval for adolescents to try online interventions could prevent thousands of youth from receiving minimal-risk, free, and evidencebased support (Samargia et al., 2006;Wilson and Deane, 2012) and may disproportionately impact LGBTQ+ youth and youth of color, for whom concerns about caregiver stigma or dismissal (e.g., as related to the rejection of one's identity, beliefs about seeking mental health treatment, or both) often prevent youth from disclosing treatment needs to family (Brown et al., 2016). The SSI tested in this study posed minimal risk to safety and may offer an effective path to reducing the impact of minority stress and increasing access to mental health support for adolescents. Limitations and future directions Results should be considered in context of this study's limitations. Participation in the study relied on adolescents having the interest and time to complete the study, their comfort with English, and their Internet access. Additionally, adolescents assigned female at birth represented the majority of the sample. Our findings thus may not apply uniformly across adolescents. However, previous trials of psychosocial interventions for adolescents have largely required parent or guardian consent; by eliminating this barrier, this study may have included adolescents underrepresented in such prior work, including adolescents who are not open with caregivers about mental health difficulties or LGBTQ+ identities. Moreover, 273 adolescents in our recruitment sample completed the assent but did not complete the baseline survey and were thus not randomized to a condition, which may reflect sampling biases. In addition, given the apparent purpose of the intervention (i.e., mitigating effects of LGBTQ+ minority stress), it is possible that treatment effects may reflect study demand characteristics, where adolescents may have overreported positive outcomes and underreported negative outcomes due to being aware of the study's purpose. Further, we did not formally evaluate adolescents' use of other mental health supports during the study period. 
Some research suggests external mental health supports may not impact SSI utility: for instance, a previous randomized trial including 96 high-symptom adolescents demonstrated no effect of external mental health treatment (e.g., psychotherapy and/or medication) on SSI response across nine-month follow-up (Schleider and Weisz, 2018). Future trials may consider assessing whether benefits of SSIs for LGBTQ+ youth shift in the context of other mental health supports they use. Finally, this study employed a relatively brief follow-up period (i.e., two weeks). This study was intended as a pilot study to examine initial efficacy of the tested SSI; thus, we were interested in whether this SSI could modify proximal, short-term mechanisms underlying mental health trajectories. Future work will include a larger-scale examination of the efficacy of this intervention, using a longer follow-up period, to examine changes in outcomes over a longer period of time. Strengths Our study has some notable strengths. In particular, results demonstrate the utility of an easily accessible intervention targeting the effects of minority stress for LGBTQ+ youth, which is critically needed in light of the minority stressors and resultant health disparities this population faces, and particularly so for transgender youth. Additionally, the online nature of our intervention increases its accessibility for LGBTQ+ youth, a population which literature suggests may particularly benefit from online delivery of mental health supports (Perry et al., 2018;Schrager et al., 2019). Finally, throughout our study, we incorporated methodologically rigorous and open-access strategies, including pre-registration of our study design and open access to our data from this study. Conclusions Our study demonstrates that Project RISE, a novel online SSI, immediately decreases internalized stigma, hopelessness, self-hatred, and depression; significantly increases identity pride in LGBTQ+ adolescents; and yields sustained reductions in internalized stigma two weeks later. Overall, results indicate that future interventions should incorporate developmentally-appropriate psychoeducation on the impacts of minority stress in conjunction with engaging activities to enhance participant learning. Future work may assess this SSI among adolescents using other languages and those who may not have stable Internet access, and other work may examine the implementation of this SSI in conjunction with other mental health supports for LGBTQ+ youth. Declaration of competing interest JLS serves on the Scientific Advisory Board for Walden Wise and the Clinical Advisory Board for Koko; is Co-Founder and Co-Director of Single Session Support Solutions. Inc.; and receives book royalties from New Harbinger, Oxford University Press, and Little Brown Book Group.
2023-07-12T16:15:43.047Z
2023-06-07T00:00:00.000
{ "year": 2023, "sha1": "99da1b11a15e300302bbc6eb78820f43f344975a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "38af7a7449c18145bab7a3bfaed1244792aee934", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
266735271
pes2o/s2orc
v3-fos-license
Minimally Invasive Medial Patellofemoral Ligament Reconstruction With Patellar-Sided Tensioning Using All-Suture Anchors

Medial patellofemoral ligament (MPFL) reconstruction is a commonly performed procedure to reestablish the checkrein to lateral patellar translation in patients with recurrent patellofemoral instability. Graft tensioning is one of the most critical aspects of the procedure. Most surgical methods for MPFL reconstruction involve tensioning and securing the graft on the femoral side. In this article, we describe a technique for patellar-sided tensioning of the graft using all-suture anchors, which provides the surgeon with the ability to finely control graft tension with two independent graft limbs, while preserving patellar bone stock.

Patellofemoral instability is a common injury among young athletes, accounting for 3% to 7% of all acute knee injuries.1 The medial patellofemoral ligament (MPFL) is a biomechanically important restraint to lateral patellar dislocation and provides 50% to 60% of the patellar stability during the first 30° of knee flexion.2 MPFL rupture occurs with almost all patellar dislocations, and the resultant patholaxity is a frequent contributor to recurrent dislocations, chondral injuries, and long-term functional limitations.3 Although nonoperative treatment of a first-time patellar dislocation is often successful, surgery is recommended for patients with recurrent instability or patients with persistent symptoms despite conservative treatment.4,5 As such, there is currently no consensus on the optimal method for MPFL reconstruction. This article describes a minimally invasive approach to MPFL reconstruction using all-suture anchor fixation and tensioning on the patellar side. This technique allows fine adjustments to the intraoperative tension of the MPFL and minimizes the risk of patellar fracture.

Surgical Technique

Video 1 demonstrates our technique for MPFL reconstruction. The procedure begins by placing the patient supine on a regular operating table. A foot and lateral thigh post is used to help position the leg at 90° of knee flexion (Fig 1). A large fluoroscopic C-arm machine is positioned on the contralateral side. A standard anterolateral portal is made to assess chondral damage and loose bodies. During this time, an assistant can prepare a semitendinosus allograft by whipstitching both ends of the graft with a #2 nonabsorbable suture. The graft is doubled over and sized to a diameter of 7 mm. A minimum doubled graft length of 9 cm is recommended to ensure that the graft can reach from the proximal superomedial patella to Schöttle's point on the femur. The graft is then covered in a wet gauze sponge to prevent desiccation.
After the graft has been prepared, a 3-cm incision is made along the superomedial border of the patella. Dissection is carried along the bone down to the level of the capsule (Fig 2). The proximal half of the patella is prepared with a rongeur to remove any remaining soft tissue. A 1.6-mm Kirschner wire is inserted into the patella under lateral fluoroscopic guidance to ensure that the wire is located at the midpoint of the patella in the anteroposterior direction (Fig 3A). The drill guide for a 2.3-mm suture anchor is then placed over the wire. The wire is subsequently removed while holding the drill guide steady. A 2.3-mm, double-loaded, all-suture anchor (Iconix; Stryker, Kalamazoo, MI) is then inserted after drilling through the drill guide. A second all-suture anchor is then placed approximately 10 mm proximal to the first anchor (Fig 3B). The suture limbs from each anchor are clamped with a separate hemostat to keep the sutures from becoming intertwined. Next, a 2.4-mm Beath pin is positioned at Schöttle's point on the medial femur with lateral fluoroscopic guidance (Fig 4). It is crucial that the pin is positioned on a true lateral fluoroscopic view where the posterior femoral condyles overlap, as slight malrotation will lead to graft malposition. The pin is then aimed slightly proximal and anterior and drilled out through the lateral femoral cortex and skin. A 3-cm incision is then made centered around the Beath pin and carried down to bone. An acorn reamer with a diameter equal to the diameter of the graft (7 mm) is then inserted over the pin and drilled down to a depth of 25 mm. The graft is then folded in half over a no. 2 nonabsorbable passing suture. The free ends of this passing suture are placed in the Beath pin until the graft is ~1 cm away from the pin (Fig 5A). This ensures that the passing suture ends do not get lost in the femoral tunnel. The pin is subsequently pulled out of the lateral thigh along with the passing suture. The passing suture ends are then pulled tight to bring the doubled-over graft into the blind femoral tunnel. With maximal tension placed on both suture ends, a polypropylene sheath followed by a 7-mm screw (Intrafix Advance PP Sheath and PEEK Screw; DePuy Mitek, Raynham, MA) is inserted into the anterior aspect of the femoral tunnel to fix the graft to the femur (Fig 5B). The passing suture is removed by pulling on one limb or by cutting it flush with the skin. At this point, a Kelly clamp is passed just superficial to the capsule from the patellar incision to the femoral incision (Fig 6). The clamp is opened to create a soft tissue tunnel. The sutures whipstitched to the ends of the graft are grasped, and the graft is passed through the soft tissue tunnel and out the patellar incision. The knee is then placed in 30° of flexion with a towel bump. An assistant can then hold the ends of the graft taut along the medial patellar border (Fig 7A). The patella should be held firmly in the center of the trochlear groove with neutral tilt. A free needle with a suture limb from the inferior anchor is used to pass 3 Krakow stitches through the graft at the level where the graft meets the anchor. It is critical not to pass these stitches too far from the anchor, as it could overtension or undertension the graft. The suture post of the corresponding color is then passed once through the graft. The other 2 suture limbs from the anchor are passed in a simple fashion through the same end of the graft. This process is repeated for the superior anchor suture limbs and the other end of the graft (Fig 7B). After all the sutures are passed, the posts of the sutures used for the running Krakow from each anchor are pulled to reduce the graft to the patella at the desired tension. The 4 sets of sutures are then tied (Fig 8). Patellar stability is confirmed by having a firm endpoint to lateral translation of the patella at 30° of flexion and normal tracking through the entire range of motion. Excess graft is then trimmed. Advantages and disadvantages (Table 1), as well as pearls and potential pitfalls (Table 2), of this technique are summarized.
Postoperative Rehabilitation

After surgery, the patient is allowed to weight bear as tolerated with a hinged knee brace locked in extension. The brace can be unlocked from 0° to 40° of flexion for range of motion exercises during the first 2 weeks. Subsequently, knee flexion can be increased by 10° each week with a goal of reaching 90° of flexion by 6 weeks after surgery. At this phase, the knee brace and crutches are weaned. Range of motion is further increased to full extension and flexion between 6 and 12 weeks. After 3 months, the patient can advance to strengthening exercises and low-impact exercise. After 6 months, the patient can return to sport.

Discussion

Over the last decade, there have been multiple techniques described to reconstruct the MPFL. Most techniques utilize femoral-sided tensioning, in which the graft is secured on the femoral side with the use of an interference screw, button, or anchor.9,10,15 Determining the appropriate graft tension at the time of femoral fixation is difficult, as insertion of an interference screw or anchor can advance the graft into the tunnel and increase tension.16 Other techniques with adjustable loop button fixation allow for sequential tensioning of the femoral side of the graft, but this is not reversible once the tension is applied.17 Femoral-sided tensioning is further complicated by the fact that both graft limbs are secured at the same time, limiting the surgeon's ability to make fine adjustments to graft tension with each individual limb. In contrast to these described techniques, our current method of patellar-sided tensioning allows the surgeon to finely calibrate the amount of tension by passing sutures into the graft at the desired level. This allows both limbs of the graft to be tensioned independently and adjusted on the basis of patellar tracking and lateral translation intraoperatively. We believe this method minimizes the technical errors of undertightening and overtightening the graft, which can lead to recurrent lateral patellar instability and the development of stiffness and patellofemoral arthrosis, respectively.18,19 In our review of the literature, there were previously described MPFL reconstruction techniques utilizing a patellar-sided tensioning method.11,20 However, the strategies for surgical exposure and graft fixation varied substantially from the technique presented here. In our current technique, the MPFL origin and insertion are exposed using small incisions, which improves cosmesis and limits the amount of soft tissue dissection. Moreover, femoral fixation was obtained using a sheath-and-screw construct, which has been demonstrated to have higher yield strength and lower cyclical displacement for soft tissue graft fixation compared to the interference screws used in the other techniques.21 Finally, our use of 2.3-mm all-suture anchors obtains strong initial fixation and preserves bone stock, which may potentially minimize the risk of patellar fractures. Satalich et al. described a patellar-sided tensioning technique that utilizes 4.75-mm knotless SwiveLock anchors (Arthrex, Naples, FL) for patellar fixation.20 In a recent systematic review of MPFL patellar fixation techniques, no patellar fractures were observed when sockets were less than 4 mm.22
In conclusion, we present a minimally invasive MPFL reconstruction technique with patellar-sided tensioning. This simplified technique improves surgeon flexibility by allowing for fine adjustments to the tension of each graft limb. Further biomechanical and clinical studies are warranted to evaluate the outcomes of MPFL reconstruction with patellar-sided tensioning compared to femoral-sided tensioning.

Disclosures

The authors report the following potential conflicts of interest or sources of funding: A.Z. reports consulting fees from Stryker Corporation and DePuy Mitek, outside the submitted work. Full ICMJE author disclosure forms are available for this article online, as supplementary material.

Pearls and Pitfalls (Table 2)
Pearls: Use fluoroscopy to ensure that the anchors are in the superior half of the patella and at the midpoint of the patella in the anteroposterior direction to minimize the risk of fracture. Pass the sutures from the anchors into the graft at the level of the anchor with the patella held in the center of the trochlear groove and in neutral tilt. Tension the inferior graft limb first and assess patellar translation and tracking; fine adjustments to the tension of the superior graft limb can be made based on this assessment.
Pitfalls: Avoid passing the sutures from the anchors into the graft too far from the anchor, to avoid undertightening and overtightening the graft. Asymmetric graft limb length at the time of femoral fixation may result in one limb being too short to reach the MPFL attachment on the patella; a graft length of 18 cm is recommended. Drilling the two suture anchors in a convergent fashion may result in breakage of the bone bridge between the two anchors; fluoroscopic guidance is recommended to ensure the anchors are parallel.

Fig 1. Positioning of the right knee in the supine position on a regular operating room table with a foot (*) and lateral thigh (triangle) post (underneath drapes). The C-arm fluoroscope is placed on the patient's contralateral side.
Fig 2. A 3-cm incision over the superomedial aspect of the patella (*) is sharply made down to the knee capsule (triangle). A forceps is holding onto the medial patellofemoral ligament remnant and medial retinaculum (arrow).
Fig 3. Patellar fixation of the 2.3-mm, all-suture anchors. (A) Anchor placement in the patella is confirmed under lateral fluoroscopic guidance. The first anchor (*) should be placed in the superior half of the patella and in the center of the patella in the anteroposterior direction. (B) A second anchor (triangle) is deployed ~1 cm proximal to the first anchor.
Fig 4. Confirmation of the femoral insertion of the MPFL. A 2.4-mm Beath pin is placed on Schöttle's point using a true lateral fluoroscopic view of the femur.
Fig 5. After a 7-mm cannulated reamer is used to drill a 25-mm femoral tunnel over the Beath pin: (A) the passing suture (triangle) of a doubled-over semitendinosus graft is placed into the Beath pin eyelet until the graft is ~1 cm away from the pin; (B) tension is applied to the passing suture (arrow), while an Intrafix Advance PP Sheath (*) and PEEK screw (not shown) are inserted to secure the graft (DePuy Mitek, Raynham, MA).
Fig 6. A large Kelly clamp is passed from the patellar incision (*) between the capsule and medial retinaculum layers to grasp the semitendinosus graft from the femoral incision (triangle).
Fig 7. (A) The graft limbs are held taut by an assistant next to the suture anchors. The knee is held in 30° of flexion using a bump. (B) Sutures from the proximal anchor (*) are passed through the superior graft limb using a free needle, while sutures from the distal anchor (triangle) are passed through the inferior graft limb.
Fig 8. The graft is brought down to the patella and secured. Patellar stability was assessed by confirming a firm endpoint to lateral translation at 30° of flexion and normal tracking through the entire range of motion. Excess graft was then trimmed (not shown).
Table 1. Advantages and Disadvantages.
Table 2. Pearls and Pitfalls.
2024-01-03T16:06:31.927Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "73f704afe00d651420ec3abe96e60d8c0feddee1", "oa_license": "CCBYNCND", "oa_url": "http://www.arthroscopytechniques.org/article/S2212628723003353/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "97ccf2b1154d48782586557981ba1a80b5965fcb", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
235716988
pes2o/s2orc
v3-fos-license
Predicting Mortality Risk in Older Hospitalized Persons With COVID-19: A Comparison of the COVID-19 Mortality Risk Score with Frailty and Disability

Objectives: To assess the association of pre-morbid functional status [Barthel Index (BI)] and frailty [modified Frailty Index (mFI)] with in-hospital mortality and a risk scoring system developed for COVID-19 in patients ≥75 years diagnosed with COVID-19. Design: Retrospective bicentric observational study. Setting and Participants: Data on consecutive patients aged ≥75 years admitted with COVID-19 at 2 Italian tertiary care centers were collected from February 22 to May 30, 2020. Methods: Overall, 221 consecutive patients with COVID-19 aged ≥75 years were admitted to 2 hospitals in the study period and were included in the analysis. Clinical, functional (BI), frailty (mFI), laboratory, and imaging data were collected. Mortality risk on admission was assessed with the COVID-19 Mortality Risk Score (COVID-19 MRS), a dedicated score developed for hospital triage. Results: Ninety-seven (43.9%) patients died. BI, frailty, age, dementia, respiratory rate, PaO2/FiO2 ratio, creatinine, and platelet count were associated with mortality. Analysis of the area under the receiver operating characteristic curve (AUC) indicated that the predictivity of age was modest and the combination of BI, mFI, and COVID-19 MRS yielded the highest prediction accuracy (AUC for COVID-19 MRS+BI+mFI vs AUC for age: 0.87 vs 0.59; difference: +0.28, lower bound-upper bound: 0.17-0.34, P < .001). Conclusions and Implications: Premorbid BI and mFI are associated with mortality and improved the accuracy of the COVID-19 MRS. Functional status may prove useful to guide clinical management of older individuals.

population is limited. 5,6 Italy was the first country outside Asia to be heavily plagued by the virus, with more than 1 million confirmed cases since January 31, 2020, 7 with many older individuals involved. The aim of this study was to assess the association of functional profile with mortality in patients ≥75 years admitted for COVID-19 to 2 tertiary care centers located in Lombardy and Tuscany, and to analyze whether it may help stratify prognosis according to the COVID-19 Mortality Risk Score (COVID-19 MRS), a scoring system developed for rapid triage evaluation. 2

Study Design

This is a retrospective observational study. The clinical history, laboratory, and imaging variables of patients consecutively admitted with proven COVID-19 8 to 2 Italian tertiary hospitals located respectively in Northern and Central Italy from February 22 to May 30, 2020, were collected on admission and reviewed. Only patients aged ≥75 years were included in the present analysis. Overall, 616 patients with COVID-19 were admitted to the 2 hospitals over the selected period, and the 221 aged ≥75 years constituted our study population.

Patient Characteristics

Hospital characteristics and organization during the pandemic wave, as well as methods used to collect clinical, laboratory, and imaging variables for each patient into a unique database, have been previously described. 2 Variables assessed on hospital admission for each patient were collected from electronic charts and included demographics, number of drugs prescribed prior to admission, cardiovascular risk factors (smoking history, hypertension, diabetes), and data on comorbidities (including information on active and nonactive cancer and cardiovascular and pulmonary diseases).
Functional status 2 weeks prior to hospitalization was routinely assessed with the Barthel Index by interviewing the patient and relatives by phone call; lower values correspond to poorer functional status 9 and to poorer prognosis in the general older population. 10 Briefly, the Barthel Index summarizes functional independence in feeding, bathing, grooming, dressing, bowels, bladder, toilet use, transfers, mobility, and stairs. Frailty was assessed based on the modified Frailty Index (mFI) created by Saxton and Velanovich by mapping 11 variables (nonindependent functional status, history of diabetes mellitus, chronic obstructive pulmonary disease or pneumonia, heart failure, myocardial infarction, angina or coronary revascularization, hypertension, peripheral vascular disease, presence of impaired sensorium, TIA or cerebrovascular event without or with deficit) present in the Canadian Study of Health and Aging Frailty Index. 11 Frailty was defined as an mFI score >0.36, the score being the ratio of conditions present to the total number of conditions assessed. 11 Information on respiratory support and drugs prescribed during the hospital stay was collected as well. Six medical doctors collected the data into a unique database and independently reviewed their consistency. Data were last updated on May 30, 2020. In accordance with the Ethics Committees' indications at both hospitals, which approved data collection and granted a waiver of informed consent from study participants, patients' identity was anonymized and information was protected by password. Clinical Severity on Admission Baseline clinical severity was assessed with the COVID-19 Mortality Risk Score (COVID-19 MRS), a rapid, operator-independent clinical tool developed to stratify mortality risk at triage. 2 The 6 items of the score are age, number of comorbidities, respiratory rate, PaO2/FiO2, serum creatinine, and platelet count; each item is scored from 1 to 3 according to tertiles of phenotype severity. As previously described, mortality risk is classified as low (≤10), intermediate (11-13), and high (≥14). 2 Study Outcomes Predictive accuracy of the COVID-19 MRS and the association of disability (defined as a Barthel Index <75) and frailty with in-hospital mortality, and their impact on the COVID-19 MRS risk stratification capability, were the primary outcomes. Statistical Analysis Continuous variables, reported as mean ± standard deviation or as median with interquartile range, for normal and nonnormal distributions respectively, were compared between groups ("survivor" vs "nonsurvivor" status) with the t test or nonparametric tests, as appropriate. Categorical variables, reported as counts and percentages, were compared between groups with the χ2 test, or the Fisher exact test when the expected cell count was less than 5. Cox multivariable regression analysis (with backward stepwise deletion) was used to assess determinants of mortality. All variables with P < .10 were entered into the multivariable models, and a 2-sided P < .05 was considered statistically significant. Receiver operating characteristic analysis was used to compare the prediction performance of the COVID-19 MRS with and without disability (as expressed by the Barthel Index) and frailty. Statistical analysis was performed using the SPSS statistical package, version 27.0, for Macintosh (IBM, Armonk, NY). The demographic and clinical characteristics of nonsurvivors and survivors are reported in Table 1. Nonsurvivors were significantly older, with no differences between men and women.
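As an illustration of how these thresholds combine, the following minimal Python sketch (not the authors' code; function names and the example values are purely illustrative) encodes the cutoffs just described: the 11-item mFI with its 0.36 frailty threshold, the Barthel Index disability cutoff of 75, and the three COVID-19 MRS risk bands.

```python
# Illustrative sketch of the thresholds described above; the item-level
# tertile scoring is study-specific, so the total COVID-19 MRS is taken
# here as an already-computed input.

def modified_frailty_index(conditions_present: int, conditions_assessed: int = 11) -> float:
    """mFI = conditions present / conditions assessed (11 items)."""
    return conditions_present / conditions_assessed

def is_frail(mfi: float) -> bool:
    """Frailty defined in this study as mFI > 0.36."""
    return mfi > 0.36

def has_disability(barthel_index: int) -> bool:
    """Disability defined in this study as Barthel Index < 75."""
    return barthel_index < 75

def mrs_risk_category(total_score: int) -> str:
    """COVID-19 MRS risk bands: low <= 10, intermediate 11-13, high >= 14."""
    if total_score <= 10:
        return "low"
    elif total_score <= 13:
        return "intermediate"
    return "high"

# Hypothetical patient: 5 of 11 mFI conditions, Barthel Index 60, COVID-19 MRS 12
mfi = modified_frailty_index(5)
print(round(mfi, 3), is_frail(mfi))   # 0.455 True
print(has_disability(60))             # True
print(mrs_risk_category(12))          # intermediate
```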
Cardiovascular risk factors and comorbidities were similarly distributed in the 2 study groups. Nonsurvivors presented a higher degree of functional impairment (lower Barthel Index), frailty (as mFI), and dementia. Previous use of angiotensin-converting enzyme inhibitors or angiotensin receptor blockers was similar in both groups. At triage, nonsurvivors presented a higher COVID-19 MRS and more frequently reported preadmission insomnia. Other symptoms before admission were similarly prevalent in the 2 groups. Laboratory and Imaging Findings Laboratory findings are presented in Supplementary Table 1. In the population as a whole, the median PaO2/FiO2 ratio was 260 (interquartile range 204-406), and values <200 were associated with higher mortality. Lymphocytopenia was present in 69% of the population. Nonsurvivors had a lower platelet count and higher levels of serum creatinine, lactate dehydrogenase, and C-reactive protein. Furthermore, nonsurvivors presented with a worse baseline inflammatory response. Chest radiograph was abnormal in 92.5% of cases. Medical Management and Clinical Outcomes Overall, 79.6% of patients received liberal oxygen and only 11.8% and 5.5% received, respectively, noninvasive and invasive ventilation, more frequently among nonsurvivors (Supplementary Table 1). Although antibiotics had been prescribed more frequently to nonsurvivors, heparin, hydroxychloroquine, and antiviral agents (the combination of lopinavir/ritonavir) were all more frequently prescribed to survivors. Notably, there was no association of the Barthel Index with treatment strategies (Supplementary Table 2). Determinants of Mortality and Outcome Prediction by the COVID-19 MRS Cox multivariable regression analysis (Table 2, Model 1) indicated that absence of disability (higher Barthel Index), PaO2/FiO2 ratio, and platelet count were positively associated with survival, whereas age, presence of dementia, and higher respiratory rates and serum creatinine levels were negatively associated with survival. Similarly, a higher Barthel Index and lack of frailty were associated with a better outcome after adjusting for COVID-19 MRS risk category (Table 2, Model 2). Analysis of the area under the receiver operating characteristic (AUC) indicated that the predictive power for mortality of age alone was modest. Comparison of AUCs (Figure 1A) revealed that the overall prediction quality increased by using the COVID-19 MRS and was highest when the Barthel Index and mFI were added to the score (Figure 1B). Discussion In this study, almost 50% of patients aged ≥75 years admitted for COVID-19 died during hospitalization. Case fatality rates have been reported variably and are approximately 0.1% in children, but as high as 15% in older Chinese patients and even higher in older Italians or US citizens. 12-14 Viral shedding, atypical symptoms, lower cardiorespiratory reserve, and a proinflammatory status have all been postulated as potential causes of such an age-associated poor prognosis. 15,16 In our study, a worse functional profile (moderate to severe disability as expressed by the Barthel Index), age, dementia, respiratory rate, platelet count, serum creatinine, and PaO2/FiO2 ratio, but not the number of comorbidities, were associated with in-hospital mortality. Furthermore, although age had a modest predictive role, with an AUC of 0.59, frailty (as expressed by the mFI) and functional profile were closely associated with the outcome and added to the predictive power of the COVID-19 MRS, with a final AUC of 0.87.
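To make the AUC comparison concrete, here is a minimal, hedged sketch of the kind of analysis described above, written with scikit-learn on synthetic data; the study itself used SPSS, so none of the variable names, coefficients, or simulated values below correspond to the real dataset.

```python
# Sketch of an AUC comparison (age alone vs MRS vs MRS + BI + mFI) on
# synthetic data; purely illustrative, not the authors' analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 221
age = rng.normal(83, 5, n)
mrs = rng.integers(6, 18, n)
barthel = rng.integers(0, 101, n)
mfi = rng.uniform(0, 0.8, n)
# Synthetic outcome loosely tied to the predictors, only so the code runs
p = 1 / (1 + np.exp(-(0.2 * (mrs - 12) - 0.02 * (barthel - 70) + 2 * (mfi - 0.36))))
died = rng.binomial(1, p)

def auc_for(features):
    X = np.column_stack(features)
    model = LogisticRegression(max_iter=1000).fit(X, died)
    return roc_auc_score(died, model.predict_proba(X)[:, 1])

print("AUC age only:       ", round(auc_for([age]), 2))
print("AUC MRS only:       ", round(auc_for([mrs]), 2))
print("AUC MRS + BI + mFI: ", round(auc_for([mrs, barthel, mfi]), 2))
```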
This confirms the relevance of overall physical functioning, above and beyond disease severity and level of comorbidity, in determining the risk of death in older populations. 6,17,18 This message has direct clinical implications when choosing therapeutic strategies at hospital admission: older patients should be routinely assessed for frailty and disability in order to identify appropriate therapeutic strategies. The burden of the COVID-19 pandemic in Italy was unique and overwhelming, putting the healthcare system under strain and presenting difficult challenges. Overall, our results underscore the importance of an integrated assessment to avoid misplaced health priorities and ageism. 19 Compared with other series of patients with COVID-19 that included younger individuals, our patients presented with a greater average burden of chronic comorbidities and, accordingly, of prescribed drugs. 20-22 Advanced age per se and associated chronic comorbidities have been identified as the strongest predictors of mortality in patients diagnosed with COVID-19. 4 In our patients older than 75 years, the functional profile 2 weeks prior to hospitalization and the mFI predicted in-hospital mortality and increased the predictive power of the COVID-19 MRS, confirming the importance of comprehensive geriatric assessment as part of the admission evaluation. As a case in point, in older patients hospitalized for pneumonia, functional status and frailty were independently associated with short- and long-term mortality. 23 Frailty, although difficult to define and quantify objectively, is generally understood as an impairment in muscular function associated with reduced homeostatic capacity in the face of acute stressors 24 and is reported as an accurate predictor of adverse health outcomes, both in acute care settings 25 and in elective procedures. 26 More recently, a report from the COPE cohort study showed that in individuals with COVID-19, length of hospital stay and mortality were associated with frailty. 27 Our results extend this concept by showing that defining the functional profile prior to COVID-19 may refine the assessment of prognosis provided by a disease-specific prognostic score such as the COVID-19 MRS. Limitations Some limitations of our study have to be acknowledged. First, the observational nature of our analysis does not allow us to draw any firm conclusion about clinical determinants of mortality and associations with therapeutic strategies that, moreover, were clearly adapted over time. In addition, some laboratory parameters, which proved to be of prognostic relevance in other studies, 28,29 were not collected for all individuals in our sample, possibly as a consequence of the variable severity of some clinical pictures (ie, very mildly affected vs extremely critical patients at presentation). Last, there are 2 main operational definitions of frailty, the physical phenotype and the multidomain phenotype. The physical phenotype, described by Fried et al 30 as the presence of unintentional weight loss, exhaustion, weakness, slow walking speed, and low level of physical activity, was difficult to derive in our acute hospital patients. For this reason, we assessed frailty using the mFI. 11 Conclusions and Implications Almost 1 in 2 patients ≥75 years diagnosed with COVID-19 died during hospitalization.
Functional profile 2 weeks before disease onset and assessment of frailty appear to be important factors in determining the in-hospital prognosis, irrespective of age and comorbidities, and help to increase the accuracy of the COVID-19 MRS. Older patients diagnosed with COVID-19 should be reassessed in light of their personal history, fitness, frailty, and disability so that more focused and dedicated care can be provided.
2021-07-03T13:18:23.493Z
2021-07-02T00:00:00.000
{ "year": 2021, "sha1": "c74faaca7ec1c1e45bd426efdc71d86f702a73b8", "oa_license": null, "oa_url": "http://www.jamda.com/article/S1525861021005120/pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "3e95101a3acc716c7c33a929a25a89a49fa40d54", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237445156
pes2o/s2orc
v3-fos-license
Case Report: Lymphohistiocytic Myocarditis With Severe Cardiogenic Shock Requiring Mechanical Cardiocirculatory Support in Multisystem Inflammatory Syndrome Following SARS-CoV-2 Infection Multisystem Inflammatory Syndrome (MIS) is a novel hyperinflammatory syndrome associated with SARS-CoV-2 infection. It predominantly affects children (MIS-C) a few weeks after a usually asymptomatic SARS-CoV-2 infection and is only rarely seen in adults above 21 years (MIS-A). Only scarce data on histological findings in both pediatric and adult patients have been published so far. An 18-year-old male patient was admitted to hospital in a febrile state, which progressed to severe cardiogenic shock and multi-organ failure requiring extracorporeal life support. Myocardial biopsy revealed small vessel-associated immune cell infiltrates. The diagnosis of MIS-C was made after ruling out all potential differential diagnoses. Use of immunosuppressive treatment with steroids, interleukin-1 blockade, and high-dose intravenous immunoglobulins resulted in the patient's full recovery. Multisystem Inflammatory Syndrome (MIS) is a new differential diagnosis of cardiac dysfunction in pediatric and adult patients. The lack of myocardial necrosis differentiates the disease from other forms of viral myocarditis and offers an explanation for the fast response to immunomodulatory therapy and the favorable prognosis. The preceding SARS-CoV-2 infection might only have been mildly symptomatic or even asymptomatic. INTRODUCTION Coronavirus disease 2019 (COVID-19) with respiratory failure is the primary complication of an infection with the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in adults. Here, diagnosis and treatment are progressively better understood (1). In pediatric patients, however, a novel hyperinflammation syndrome called Multisystem Inflammatory Syndrome in Children (MIS-C) is a serious pathology caused by SARS-CoV-2 infection (2). Awareness and knowledge of this hyperinflammation syndrome are steadily growing among pediatricians, but the more uncommon adult variant of this syndrome, Multisystem Inflammatory Syndrome in Adults (MIS-A), is widely unknown in adult medicine. The age threshold between the pediatric and the adult variant is 21 years, as defined by the CDC (3). Only scarce data on histological findings in both pediatric and adult patients have been published so far. Here, we report the case of a young adult with severe cardiogenic shock who was diagnosed with severe MIS-C, supported by myocardial biopsy, and who recovered rapidly following initiation of immunosuppressive treatment. CASE DESCRIPTION An 18-year-old male patient presented to the emergency department with hyperpyrexia (42 °C), chills, and tachycardia. Physical examination and chest X-ray revealed no pathological findings. Laboratory tests showed elevated C-reactive protein (CRP; 105.9 mg/l, reference range <5 mg/l) as well as interleukin 6 serum levels (IL-6; 128 pg/ml, reference range <7 pg/ml), but only modestly elevated procalcitonin (PCT; 0.12 ng/ml, reference range <0.05 ng/ml) (Figure 1). The patient was admitted to a standard care ward and empiric antibiotic therapy was initiated. The patient's medical history was unremarkable. Approximately 2 months prior to admission, the patient was exposed to Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) and went into quarantine. A few days after this exposure, he complained that he had lost his sense of smell, but he experienced no other symptoms.
Neither during his quarantine nor after his initial admission to the hospital was an active SARS-CoV-2 infection ever proven, despite repeated nasopharyngeal swabs. Following admission, the patient's condition steadily deteriorated. After 3 days he was transferred to the intensive care unit (ICU) due to arterial hypotension with suspected septic shock. Initially, intravenous fluid resuscitation and a low rate of noradrenaline (0.01 µg/kg/min) were sufficient to stabilize the patient's blood pressure. A generalized rash affecting the abdomen and all limbs occurred. On day 4 following hospital admission, transthoracic echocardiography revealed a severely impaired left ventricular cardiac function (left ventricular ejection fraction, LVEF, 25%; Supplementary Video 1). No relevant ECG pathologies were seen besides sinus tachycardia. At that time, Pulse Contour Cardiac Output (PiCCO; Getinge, Rastatt, Germany) measurement confirmed a marginal cardiac output of 4.4 l/min (reference range: 4-8 l/min). Computed tomography showed enlarged abdominal lymph nodes, wall thickening of the colon, and polyserositis with pericardial and pleural effusions and ascites. Respiratory failure due to pulmonary edema required non-invasive ventilation (NIV). Four days after initial hospital admission, the patient was transferred to our hospital, a tertiary care center in Freiburg, Germany. Upon admission, levosimendan infusion was started. The following day, endomyocardial biopsy (EMB) was performed. Signs of hypoperfusion and end-organ failure persisted (elevated lactate, deteriorating renal function; Figure 1). For this reason, a percutaneous ventricular assist device (Impella®, Abiomed, Danvers, NJ, United States of America) was implanted. Subsequently, cardiac output improved from 3.6 to 4.9 l/min and pulmonary capillary wedge pressure decreased from 26 to 21 mmHg. However, due to both progressive hypoxemia under NIV, with a required fraction of inspired oxygen (FiO2) of up to 100%, and worsening neurological symptoms (sopor), invasive mechanical ventilation was indicated. Within just a few hours after endotracheal intubation, additional venoarterial extracorporeal membrane oxygenation (V-A ECMO) support was required because of worsening hypoperfusion and severe end-organ failure (Figure 1). EMB showed a significant infiltration of immune cells into the heart. CD68+ macrophages in particular, but also CD3+ T cells, were found to be located primarily around small vessels within the myocardium, as shown by immunohistochemical stainings (see circle). Masson Trichrome and HE stainings further demonstrated the presence of perivascular fibrosis in serial tissue sections, but no myocyte necrosis (Figures 2A-D). In addition, qRT-PCR did not detect SARS-CoV-2 RNA in the myocardium. Following interdisciplinary discussion (pediatrics, rheumatology, cardiology, and infectious disease), Multisystem Inflammatory Syndrome in Children (MIS-C) following a preceding SARS-CoV-2 infection was diagnosed, and immunosuppressive therapy including high-dose intravenous immunoglobulin (IVIG), dexamethasone, and IL-1 blockade (anakinra) was initiated (Figure 1). Clinical and laboratory parameters improved within 3 days and 1 day, respectively (Figure 1). As cardiac function recovered, extracorporeal cardiocirculatory support (V-A ECMO, Impella®) could be discontinued on day seven after initiation. Cardiac necrosis parameters were only moderately elevated during the shock phase (max.
Troponin T 341 ng/L, ref <14 ng/L; CK-MB max 54 U/L, ref <24 U/L), indicating that only minor myocardial damage had occurred. Cardiac function did indeed fully recover (Supplementary Video 2). Renal function recovered fully only after 30 days. The patient was discharged 32 days after initial hospital admission. DISCUSSION Early during the SARS-CoV-2 pandemic, a novel hyperinflammatory syndrome was described. Initially, only pediatric cases were identified, with symptoms and clinical findings which in many respects resembled features of Kawasaki disease and Toxic Shock Syndrome (4,5). Two synonymous terms, Multisystem Inflammatory Syndrome in Children (MIS-C) and Pediatric Inflammatory Multisystem Syndrome temporally associated with SARS-CoV-2 (PIMS-TS), were established (3,6,7). Later, a similar syndrome was reported in adults (MIS-A) (8,9). Our patient fulfilled the diagnostic criteria of MIS-C with fever, rash, lymphadenopathy, shock, myocardial injury, colitis, and positive SARS-CoV-2 serology, as well as a severe inflammatory response. For this reason, immunomodulatory therapy based on the clinical recommendations of the American College of Rheumatology (ACR) was initiated (3). The ACR recommends steroid treatment with methylprednisolone (20-30 mg/kg a day for 1-3 days, up to 1 g per day, followed by tapering doses of 2 mg/kg a day, maximum 60 mg a day); high-dose intravenous immune globulin therapy (2 g/kg per dose) in moderate to severe cases; and cytokine receptor (IL-1 or IL-6) blockade (3). As with myocardial involvement following other viral infections, cardiac injury in MIS-C may occur either due to direct cardiac invasion by the virus (10-13) or following an accompanying cytokine storm (2). Since EMB is only rarely performed, the reported cases of myocardial injury in the context of SARS-CoV-2 infection are largely based upon clinical symptoms, laboratory results, and imaging findings (e.g., electro- and echocardiography, magnetic resonance imaging). Arrhythmias, decreased LVEF, and a high prevalence of cardiogenic shock have been reported (14). Histopathological investigations of EMB in patients with COVID-19 revealed multi-focal lymphocytic and interstitial macrophage infiltrates (15) without substantial myocyte necrosis. Despite the fact that SARS-CoV-2 can infect macrophages as well as myocytes (16), this virus is evidently not cytolytic in the way that, for example, coxsackievirus B3 is (17). So far, the exact molecular mechanisms by which the infiltration of many macrophages and fewer T cells is induced in MIS patients are not known. It is likely that SARS-CoV-2 rather induces an inflammatory response through cytokine release, thus resulting in a kind of indirect myocardial injury (18). Further investigations are required to clarify why the inflammation is vessel-associated in MIS patients but not in other patients with myocardial SARS-CoV-2 infection (19). It has to be discussed whether the presence of extensive perivascular lympho-histiocytic infiltrates without myocyte necrosis may explain the rapid response to immunosuppressive therapy in our patient. This is similar to other reported cases (9,14,20). CONCLUSION Even following asymptomatic SARS-CoV-2 infection, children and young adults may develop severe Multisystem Inflammatory Syndrome in Children or Adults (MIS-C/A). In our case report, myocardial involvement (verified by endomyocardial biopsy) caused severe cardiogenic shock requiring medical as well as mechanical cardiocirculatory support.
Early immunomodulatory treatment with glucocorticoids, intravenous immunoglobulin, and cytokine receptor blockade helped control symptoms and interrupt the uncontrolled inflammatory response. The patient's cardiac function recovered after 7 days on mechanical cardiocirculatory support with Impella® and V-A ECMO. Prompt diagnosis of MIS-C is critical, as swift use of intense immunosuppressive therapy may lead to a better prognosis for the patient. Therefore, we advise critical care clinicians to consider this differential diagnosis early on when confronted with patients suffering from a severe inflammatory response and impaired cardiac function. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. AUTHOR CONTRIBUTIONS XB, AS, and IJ conceived and designed the case report, collected the data, and wrote the manuscript. KK contributed to the pathology diagnosis. IJ, MH, DS, and AJ contributed to the clinical diagnosis. CB supervised the conception, analysis, design of the work, and manuscript drafting. All authors critically revised the manuscript for important intellectual content and provided approval of the final version. ACKNOWLEDGMENTS A sincere thank you to Natalie Diffloth for her diligent proofreading of this paper.
2021-09-09T13:20:26.802Z
2021-09-09T00:00:00.000
{ "year": 2021, "sha1": "522f17926370137bc99afa775a0a0836e737a6b2", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2021.716198/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "522f17926370137bc99afa775a0a0836e737a6b2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55351885
pes2o/s2orc
v3-fos-license
Demands for Short-Run Assets and Liabilities in Brazil: A Portfolio Approach
The study explores the behavior of the short-run demands for assets and liabilities in the Brazilian industry during the 1990-1998 period based on portfolio theory. The implied set of demand equations is estimated for different sub-periods, indicating that short-term demand patterns do not substantially differ when one compares the overall sample for 1990-98 and the sample pertaining to a more stable period during 1994-98. Good support was found for explanatory factors referring to the stock of long-term (non-choice) items and variables approximating the activity level of the firm and the economy. The responsiveness of short-run assets and liabilities to relative returns (costs) of those items was weaker than in previous studies for the U.K. (Hay and Louri (1989, 1991, 1996)).
INTRODUCTION Mean-variance modeling is a still influential framework in contemporary Corporate Finance [see e.g. Elton and Gruber (1995) and Brealey and Myers (1996) for comprehensive accounts]. In particular, the portfolio approach has led to a vast, predominantly normative literature pertaining to firms' short-run assets and liabilities [see e.g. Gentry (1988)]. 1 The application of portfolio theory to industrial firms was pursued in a handful of papers applied to quoted and unquoted firms in the U.K. [see e.g. Hay and Louri (1989, 1991, 1996)]. Those studies also considered segmentations in terms of firm size and growth rate and obtained moderate support for the portfolio theory in terms of coefficient significance, expected signs, and satisfaction of theoretical restrictions. The emphasis of the aforementioned literature on developed and stable economies provides a significant motivation for undertaking similar studies in the context of developing countries. In particular, it is interesting to assess to what extent significant regime shifts can induce different patterns in the short-run demands for assets and liabilities. In fact, the Brazilian economy was subjected to strong regime shifts in the 1990s, most notably trade liberalization and price level stabilization [see e.g. Mendonça de Barros and Goldenstein (1997) and Moreira and Correa (1997), among others].
The present paper estimates a system of demand equations for short-run assets (liabilities) as a function of the returns (costs) of choice items and the stock of non-choice items, as implied by a portfolio model. It is worth investigating whether the expected theoretical implications are tenable in the Brazilian case and whether the patterns of demands for short-run assets and liabilities possess distinctive features in a more stable environment. Furthermore, it is relevant to verify whether one can discern results comparable to those obtained for developed countries. The paper is organized as follows. The second section presents a brief digression on portfolio theory in the context of balance sheet analysis. The third section discusses data construction and describes the empirical model to be implemented. The fourth section presents the empirical results pertaining to the econometric demand analysis. The fifth and last section brings some final comments.
PORTFOLIO SELECTION MODELS: A BRIEF DIGRESSION Portfolio selection analysis can provide a useful framework for assessing interdependencies among components of a firm's balance sheet. 2 Previous applications considered by Hay and Louri (1989, 1991, 1996) followed a setup similar to the one adopted here. Essentially, a system of interdependent demands for aggregate classes of balance sheet components, choice and non-choice items, is formulated and estimated. The negative exponential utility function in profits initially proposed by Freund (1956) depicts a conservative agent and has become the most usual standard in the previous studies. The implied (equivalent) expression for the expected utility in the case of normally distributed profits is given by expression (1), 3 where b and c denote positive parameters, π stands for profits, and µπ and σ²π denote the mean and variance of profits, respectively. 1 A different strand of the literature, however, emphasizes the relevance of positive models for the portrayal of firms' behavior and recognizes the interdependence among the different items of the firm's balance sheet [see e.g. Parkin (1970) and Courakis (1974, 1975, 1988)]. 2 Courakis (1988, 1989) provide a critical overview of the relevant literature. 3 See e.g. Mood et al. (1987) for useful results on the log-normal distribution that justify expression (1). Next, one needs to establish the distinction between choice and non-choice components of a balance sheet, which basically reflects the decision horizon associated with a given balance sheet component. 4 We define the vector denoting the total value of the portfolio as V, which can be thought of as the concatenation of the sub-components V_Z and V_K, denoting the choice and non-choice terms, respectively. The portfolio decision can be summarized in terms of the maximization of expression (1) subject to the restriction that assets must be equal to liabilities, as stated in expression (2), where m_e represents the vector of expected returns, which can be partitioned into the choice sub-component m_Z and the non-choice sub-component m_K; m_e'V denotes expected profits; S denotes the variance-covariance matrix of returns; V'SV stands for the variance of profits; and I_Z and I_K represent vectors of ones.
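The displayed formulas for expressions (1)-(3) are not reproduced in the text above; the following block is a hedged reconstruction consistent with the verbal description (a Freund-type negative exponential utility, a balance sheet constraint, and a linear demand system), and its exact parameterization should be read as an assumption rather than the authors' own notation.

```latex
% Hedged reconstruction of expressions (1)-(3); symbols follow the verbal
% description in the text, but the precise form is assumed.
\begin{equation}
U(\pi) = b - e^{-c\,\pi},
\qquad
\mathbb{E}\,U(\pi) = b - \exp\!\Big(-c\,\mu_{\pi} + \tfrac{c^{2}}{2}\,\sigma_{\pi}^{2}\Big)
\tag{1}
\end{equation}
% Maximizing (1) under normally distributed profits is equivalent to maximizing
% the mean-variance objective mu_pi - (c/2) sigma_pi^2, which yields the program
\begin{equation}
\max_{V_Z}\;\; m_e' V \;-\; \tfrac{c}{2}\, V' S V
\qquad \text{s.t.} \qquad
I_Z' V_Z + I_K' V_K = 0
\tag{2}
\end{equation}
% whose solution is linear in expected returns and in the non-choice stocks:
\begin{equation}
V_Z \;=\; G\, m_Z \;+\; H\, V_K
\tag{3}
\end{equation}
```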
The solution of the previous program leads to expression (3). 5 It defines a set of linear demand equations for balance sheet components that comprise two blocks. First, the demand for a given choice asset depends on the expected returns on all choice assets (m_Z), with the G matrix indicating the sensitivity of the former with respect to the latter; G is a Z × Z matrix, where Z denotes the number of choice components. Second, the demand for each asset depends on the stock of non-choice assets, with the H matrix indicating the corresponding response patterns; H is a Z × K matrix, where K represents the number of non-choice components. 6 In addition to the basic structure implied by expression (3), a set of restrictions from demand theory is considered, specifically (see Hay and Louri (1989), p. 155, for details): (a) Symmetry of matrix G: this follows from the symmetry of the S matrix; (b) Homogeneity: the relative demands of choice components should not be affected by equal changes in the corresponding ratios of returns; (c) Cournot aggregation: all components' adjustments should be consistent with keeping the balance sheet in balance after any changes in returns; (d) Non-negative own rate coefficients: an increase in the expected return of a choice component will not cause a reduction in the stock of the referred component; (e) Engel aggregation: any change in the stock of a non-choice component implies changes in the demands for choice components which add up to the initial change. Cournot and Engel aggregation restrictions are direct consequences of the balance sheet constraint and are therefore not testable, whereas the remaining restrictions are associated with specific coefficient restrictions to be assessed in the econometric estimation. In the next section, we consider an empirical variant of the basic theoretical model just described, together with the restrictions listed above. The exact meaning of the aforementioned restrictions will be made clear in section 3.2, after specific notation is established.
Data Sources The main data source for the present study is the databank of the 1000 largest companies in Brazil [Center for Entrepreneurial Studies, Getulio Vargas Foundation], which includes annual balance sheet and results account data for both quoted and unquoted firms. 7 The chosen sample period was 1990-98, for which an unbalanced panel of firms was generated for the transformation industry. The overall sample included a maximum of 260 firms in a given year, with a total of 2248 observations.
Table 1 indicates the structure of an aggregate balance sheet. We mainly follow the account specificities of the Brazilian system of accounts. A basic difference from the Hay and Louri studies concerns the treatment of investment in physical capital as a choice component. 8
Empirical Model The set of equations to be estimated is given by expression (3), to which a set of additive stochastic disturbances can be added. The set of explanatory variables is listed next. The classification in terms of choice and non-choice items directly relates to the notion of a planning period that, given annual data availability, is set at 1 year. Items that have a broader decision horizon are denominated non-choice. The specification of the system below closely follows the previous literature. Hay and Louri (1996), however, contrasts with Hay and Louri (1989, 1991) by considering investment as a non-choice item, given the planning period rationale just mentioned. In the present study, we also exclude investment as a choice item. 9 The estimated system is given by expression (4), in which a set of other economic control variables, collected in the vector x (with corresponding coefficient matrix J), augments the theoretical equation system of expression (3); see the sketch further below. The coordinates of the vectors V, m_Z, V_K, and x follow the ordering indicated in the list of variables presented below. Moreover, the imposed theoretical restrictions previously discussed assume the particular forms described below, where ι denotes a unit vector:
• Cournot and Engel aggregation: ι'G = 0', ι'H = ι', and ι'J = 0'. These restrictions indicate that, for each column, the coefficients add up to 0 and 1 in the G and H matrices, respectively.
• Symmetry and homogeneity: g_ij = g_ji, and the coefficients add up to zero for every row of matrix G.
• Non-negative own rate coefficients: the restriction requires that the elements on the diagonal of the G matrix be non-negative.
Next, the variables used in expression (4) are described.
Balance sheet choice items (dependent variables):
• DST: change in inventories;
• LC: short-run loans contracted through the financial system;
• TD: trade debt, essentially referring to credit extended to customers;
• TC: trade credit;
• WK: working capital.
Balance sheet non-choice items:
• SL: inventories balance at the beginning of the period;
• KL: long-run assets;
• LL: long-run liabilities;
• SH: shareholders' funds.
Returns (costs) of choice items:
• DIF: return on inventories, measured as the difference between inflation in terms of the general price index (IGP-DI) and that associated with the wholesale price index (both obtained from the Getulio Vargas Foundation);
• CL: cost of loans, measured by the real interest rate charged by banks for loans with horizons between 7 and 30 days; the rates were obtained from several issues of Jornal do Commercio and Gazeta Mercantil (annual rates generated from end-of-month averages);
• CD: return on net trade credit, as measured by the real return on 60-day pre-fixed certificates of deposit, obtained from Boletim do Banco Central do Brasil (several issues) and Conjuntura Econômica, 53 (10), 1999;
• RTB: buying rate for trade bills averaged over the year less inflation; the rates were obtained from several issues of Jornal do Commercio and Gazeta Mercantil (annual rates generated from end-of-month averages);
• HM: return on working capital, measured in terms of the real interest rate for short-term bank loans ("hot money"); the rates were obtained from several issues of Jornal do Commercio and Gazeta Mercantil (annual rates generated from end-of-month averages).
Other Control Variables:
• BC: business cycle variable capturing the capacity utilization of the firm, defined as the ratio between gross revenue and total assets;
• GDP: dummy variable relating to the behavior of the gross domestic product of the economy, which assumes the value 1 if positive growth was observed in comparison with the previous year and 0 otherwise;
• EP: dummy variable that assumes the value 1 if any stabilization plan took place in that year and 0 otherwise; this variable captures uncertainties associated with sudden changes of rules in the economy;
• NPD: number of non-performing debts, as provided by SERASA (a private rating agency in Brazil); this variable may approximate an environment that is favorable to credit rationing.
Finally, it is worth mentioning that all balance sheet components were expressed as ratios of total assets. Furthermore, assets are entered with a positive sign and liabilities with a negative sign.
ECONOMETRIC ESTIMATION AND RESULTS The system of equations just described is characterized by potentially interrelated errors in the different equations, and therefore it is important to consider a system estimator. Engel and Cournot aggregation restrictions follow directly from the equilibrium in the balance sheets. Those restrictions imply that the equations are not independent and that one equation must be dropped from the estimation; in the present estimation the WK equation was excluded. We consider the full information maximum likelihood estimator (FIML), which possesses the convenient property that the estimates are invariant to the choice of the excluded equation. The coefficients of the omitted equation can be recovered from the aforementioned aggregation restrictions, and the linearity of the restrictions facilitates the calculation of the standard errors from the relevant elements of the variance-covariance matrix of the parameters. The system of 4 equations is estimated with 10 fewer parameters than would prevail under an unrestricted estimation, in terms of 6 restrictions related to symmetry and 4 restrictions referring to homogeneity. Given the former restrictions, the related coefficients are recovered by means of the homogeneity restriction for the particular equation. 10 The estimation was implemented for two sample periods. First, we considered the overall sample with 1935 observations during the 1990-98 period. This estimation takes advantage of a larger sample but includes periods of economic instability and stability. Second, we consider a smaller sample with 802 observations during the 1994-98 period. This period was characterized by the price level stability brought by the Real stabilization plan, the continuation of a more liberal environment following previous trade liberalization movements, and the beginning of the privatization processes and the related regulatory institutional setup. This presents a sharp contrast with the previous inflationary scenario and therefore provides an interesting opportunity for examining the optimal portfolio allocation of industrial firms under different degrees of uncertainty. All estimations were implemented with the software EViews 5.0. Next, we present the empirical results for those two sample periods.
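Before turning to the results, the block below restates, as a hedged sketch, the estimated system of expression (4) and the coefficient restrictions described above; the intercept vector a and the error term ε are written out explicitly following the verbal description, and the exact notation is an assumption rather than a reproduction of the original display.

```latex
% Hedged sketch of the estimated system and its restrictions; iota denotes a
% conformable vector of ones. Notation is assumed, not copied from the source.
\begin{equation}
V_Z \;=\; a \;+\; G\, m_Z \;+\; H\, V_K \;+\; J\, x \;+\; \varepsilon
\tag{4}
\end{equation}
\[
\iota' G = 0', \qquad \iota' H = \iota', \qquad \iota' J = 0'
\qquad \text{(Cournot and Engel aggregation)}
\]
\[
g_{ij} = g_{ji}, \qquad \sum_{j} g_{ij} = 0 \ \ \forall i, \qquad g_{ii} \ge 0
\qquad \text{(symmetry, homogeneity, non-negative own rates)}
\]
```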
Overall sample It is worth mentioning that the system of equations was estimated with equation-specific intercepts and sectoral dummies. The intercepts were all highly significant, but the sectoral dummies were, as a rule, non-significant. This point is interesting, as sector-specific portfolio allocation patterns could be a source of concern; such concerns led authors of previous studies to pursue sector-specific estimations despite the limited number of observations. The evidence in this study, however, does not seem to ascribe any evident role to sector-specific factors in determining portfolio choice. 11 The first preliminary verification refers to the validity of the symmetry and homogeneity restrictions. For that purpose, we consider a likelihood ratio test. The test statistic was χ²(10) = 14.158 (p-value: 0.1659). One cannot therefore reject the validity of the symmetry and homogeneity restrictions. In order to obtain a first crude approximation of the goodness of fit of the model, it is worth mentioning the values of the adjusted R². The values were respectively 0.99, 0.41, 0.54, and 0.30 for the DST, LC, TD, and TC equations. The overall adjustment, however, does not preclude the prevalence of a large number of non-significant coefficients, as can be noted from inspection of Table 2. The results are presented in Table 2. Starting with the matrix G, one observes weak support for portfolio theory. In fact, 5 out of 15 coefficients are significant at the 5% level. The main expected result accruing from the theoretical model is that the demand for a choice asset should move in the same direction as the corresponding return and that the demand for a choice liability should vary in the opposite direction to the associated cost. The coefficients in the main diagonal of matrix G should therefore be non-negative. In the forthcoming comments we will consider the 5% level as the reference for the analysis. Only one diagonal coefficient, referring to the return on inventories (DIF), was significant, but it displayed a negative sign.
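As a quick numerical cross-check of the likelihood ratio test just reported (a sketch using SciPy rather than the EViews output used in the study):

```python
# Tail probability of the reported LR statistic: chi-square with 10 degrees
# of freedom evaluated at 14.158.
from scipy.stats import chi2

lr_stat, df = 14.158, 10
p_value = chi2.sf(lr_stat, df)   # survival function = 1 - CDF
print(round(p_value, 4))         # ~0.166, consistent with the reported 0.1659
```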
The off-diagonal coefficients of matrix G represent the interaction between assets and liabilities. A negative coefficient for the rate of return of asset j in the equation of asset i indicates that both are substitutes in terms of risk and return. The results in that respect were mixed in terms of the significant coefficients, as two of those are positive and two are negative. Let us examine some specific coefficients that proved to be statistically significant (G12 = G21, G13 = G31, G24 = G42, and G25 = G52). The first coefficient indicates that the cost of loans (CL) negatively impacts changes in inventories (DST) and that the return on inventories (DIF) negatively impacts short-run loans (LC). The second coefficient indicates that the return on net trade credit (CD) exerts a positive impact on changes in inventories and that the return on inventories positively affects the willingness to extend credit to customers (TD). The third coefficient shows a negative effect of the trade credit cost (RTB) on the amount of short-run loans and a negative impact of the cost of loans on the willingness to obtain credit from suppliers (TC). The fourth coefficient indicates a positive influence of the return on working capital (HM) on the willingness to rely on loans and a positive effect of the cost of loans on the willingness to increase working capital. Working capital refers to current assets minus current liabilities, and once more the effects are sensible. The results for the significant coefficients make sense, but the links between sales and marketing behavior (especially in terms of trade credit and debt) and funding sources like loans and working capital were not as clear as in the U.K. case. In fact, even for that developed country, the G matrix results were indeed the weak point of the estimation, with several non-significant coefficients. The results for the Brazilian case were somewhat worse, indicating low responsiveness of short-run assets (liabilities) to relative movements in returns (costs). The coefficients of the H matrix capture the responsiveness to the stocks of non-choice assets. One needs to exercise extra care in interpreting the coefficients of that matrix, as negative values were attributed to the liabilities variables and therefore those should be interpreted in reverse form. The results are more satisfactory from a statistical point of view, as in that case 15 out of 20 coefficients are significant, with mixed signs. In conceptual terms, the expected signs of the coefficients are not as clear-cut as those from the G matrix. Some results are predictable, like the positive dependence of changes in inventories on the initial stock of inventories. Another interesting result is that the availability of shareholders' funds (SH) negatively affects credit extended to customers. No drastic contrasts emerge in comparison to the studies for the U.K.
The J matrix indicates the effects of the additional control variables. In that case, one obtained 10 out of 20 significant coefficients, which displayed a plausible sign on several occasions. The variables proxying the business cycle and the stabilization plan (BC, GDP, and EP) appear to be relevant in many cases, but the proxy for non-performing debts (NPD) does not seem to exert any influence on portfolio decisions. In particular, it is worth mentioning the positive effect of the activity level of the firm (BC) on the willingness to extend credit to customers. These results are consistent with previous empirical evidence, but the evidence for the economy activity level variable (GDP) exhibits some contrast with the studies for the U.K. The significance of 3 coefficients related to the stabilization plan variable (EP) indicates that macroeconomic uncertainty can have some effect on portfolio decisions. This point will motivate the consideration of the sub-sample mostly characterized by price-level stabilization and a more competitive environment. In general, we can observe a satisfactory performance of the portfolio model in terms of the stock of non-choice items (as reflected in the H matrix) and other control variables (as reflected in the J matrix). The results, however, are somewhat poor in the case of the returns (costs) of choice assets (liabilities), as indicated by the estimated parameters of the matrix G. In fact, even in earlier studies for the U.K. the results were only moderate with respect to that matrix. The authors advanced the possibility of measurement errors with reference to the relevant interest rates faced by firms. For example, the cost of loans (CL) might not reflect the true cost of obtaining loans, since collateral required by banks is not included in that measure; these additional requirements can become important in a credit rationing setting. Moreover, since the data set used in this study refers to large firms, one can suspect that the relevant interest rates are not perfectly observable. 12 Next, we consider the estimation of the portfolio model in the context of a more stable period; the corresponding results are presented in Table 3. 1994-98 Period In principle, we expected that a more stable environment should favor the performance of the portfolio model and eventually generate results that are closer to those obtained for developed countries. A first assessment of the portfolio framework relates to the test of the symmetry and homogeneity restrictions. The results are not encouraging in this case, as one obtains the test statistic χ²(10) = 234.244 (p-value: 0.000), which would recommend the rejection of those restrictions. It is worth mentioning that previous works only provided partial support for those restrictions, as indicated by the works of Hay and Louri (1989, 1991), and outright rejection in the study of Hay and Louri (1996). Even though the partial results did not discourage previous researchers, one must exercise caution in interpreting the results for this sub-sample, and the analysis will be more cursory. Once more, as a rough approximation, the adjusted R² were respectively 0.22, 0.37, 0.58, and 0.30 for the DST, LC, TD, and TC equations.
Starting with the G matrix, one observes 7 out of 15 coefficients that are statistically significant, but with regard to the diagonal coefficients only 3 are significant. Once more, the return on inventories (DIF) has an unexpected negative effect on the change in inventories (DST), whereas one obtains an expected positive effect of the return on net trade credit (CD) on trade debt (TD) and of the buying rate for trade bills (RTB) on trade credit (TC). One observes a larger number of significant off-diagonal coefficients; for example, it is worth mentioning the negative effect of the proxy for trade credit cost on trade debt. In other words, if it becomes more expensive to obtain credit from suppliers, one will be more reluctant to extend credit to customers. The effects of the stocks of non-choice items on portfolio allocation are captured by the H matrix. One can observe that 18 out of the 20 estimated coefficients were significant, with many plausible results. The results for this sub-sample are qualitatively similar to those from the overall sample in terms of the coefficients' signs. Finally, additional control variables are considered in the J matrix, in which 13 out of 20 coefficients were statistically significant. Once more, the variables referring to the activity level of the firm and the economy (BC and GDP) and to the stabilization plan (EP) appear to have relevant effects. In many cases, the signs were as expected. For example, the change in inventories and the level of trade debt positively responded to those variables. As a rule, however, non-performing debt does not seem to exert relevant effects on portfolio allocation. The results, in terms of the coefficients' signs, are partially coincident with the estimates from the overall sample. An important difference, however, is the effect of the activity level on trade debt and trade credit. The results in terms of statistical significance and sensible coefficient signs are slightly better for this sub-sample, but one cannot detect a markedly distinct pattern of portfolio allocation under a more stable environment, as would in principle be expected. FINAL COMMENTS The present paper aimed at approaching the balance sheet of Brazilian industrial firms in terms of a portfolio selection model. The investigation was motivated by the possibility that the patterns of portfolio allocation in a developing country like Brazil could differ from those in stable economies like the U.K. In that sense, we also considered the estimation during a more stable and competitive period, 1994-98. The evidence indicated good support for portfolio theory in terms of the dependence on the stocks of non-choice items and other control variables. The responsiveness of short-run assets to asset returns was even weaker than in the partial results obtained by Hay and Louri (1989, 1991, 1996). Relevant extensions to the present analysis relate to the possible relevance of additional control variables that refer to sector-specific characteristics, such as, for example, the import tariff level. The possible heterogeneity among different industrial sectors awaits further investigation in the context of the portfolio framework adopted in the present paper.
Another possible route for future research relates to verifying to what extent the empirical results are robust to the functional form assumption. In fact, the linearity of the demand system is implied by the negative exponential utility function. Additional exploration of alternative functional forms in the context of balance sheets is essentially absent from the literature. Such extensions would imply not only more complex estimation procedures but also more demanding data availability requirements, which are beyond the scope of the present work. Table 1 - Simplified Balance Sheet Structure
2018-12-12T11:15:38.849Z
2006-01-01T00:00:00.000
{ "year": 2006, "sha1": "cbf72e033e00dfce21e299e977922c97869e354d", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rbe/a/FfbjZPgXgPBjQBCy6yh7jrF/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "cbf72e033e00dfce21e299e977922c97869e354d", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
49385809
pes2o/s2orc
v3-fos-license
Impact of Vegetative Treatment Systems on Multiple Measures of Antibiotic Resistance in Agricultural Wastewater Wastewater is an important vector of antibiotic resistant bacteria and antibiotic resistance genes (ARB/G). While there is broad agreement that ARB/G from agricultural (ag) wastewaters can be transported through the environment and may contribute to untreatable infectious disease in humans and animals, there remain large knowledge gaps surrounding applied details on the types and amounts of ARB/G associated with different agricultural wastewater treatment options and different ag production systems. This study evaluates a vegetative treatment system (VTS) built to treat the wastewater from a beef cattle feedlot. Samples were collected for three years, and plated on multiple media types to enumerate tetracycline and cefotaxime-resistant bacteria. Enterobacteriaceae isolates (n = 822) were characterized for carriage of tetracycline resistance genes, and E. coli isolates (n = 673) were phenotyped to determine multi-drug resistance (MDR) profiles. Tetracycline resistance in feedlot runoff wastewater was 2-to-3 orders of magnitude higher compared to rainfall runoff from the VTS fields, indicating efficacy of the VTA for reducing ARB over time following wastewater application. Clear differences in MDR profiles were observed based on the specific media on which a sample was plated. This result highlights the importance of method, especially in the context of isolate-based surveillance and monitoring of ARB in agricultural wastewaters. Introduction Antibiotic resistance is a growing global health threat, with a projection of 10 million deaths per year attributable to previously-treatable infections by 2050 [1]. Although antibiotic resistant bacteria and antibiotic resistance genes (ARB/G) are found naturally in soils and water around the world [2][3][4][5], there is growing concern that input of large numbers of ARB/G into the environment via fecal wastes has adverse impacts for human, animal, and environmental health [3,4,6]. Sanitation and water quality are an important component of efforts to slow the spread of antibiotic resistant bacteria and antibiotic resistance genes (ARB/G) [7,8] and in the United States, public wastewater treatment plants have been shown to be effective for reducing ARB in the human waste stream [9]. In contrast to the standard primary, secondary, and tertiary treatments associated with human wastewater treatment, there is a great diversity of wastewater treatment options available in agricultural systems [10]. On-farm agricultural wastewaters include runoff from biosolids such as manure, litter, and compost, or liquid manure sludges and slurries [10]. The primary concerns surrounding agricultural wastewater are the release of nutrients and pathogens into surface waters [11,12]. In addition to these contaminants, there is growing awareness that manure-impacted wastewaters introduce additional antibiotic resistant bacteria and antibiotic resistance genes into the environment [4,10,13], and there is a growing concern that manure-associated ARB/G could indirectly impact human health, via transport in surface waters and soil [6,13,14]. Although there is great interest in the potential of ARB/G in food animal production to be transported through the environment and contribute to untreatable infectious disease in humans and animals, the data required to establish causal links, and for risk assessment in this area remain sparse [13,15,16]. 
There is a growing body of work characterizing and enumerating ARB/G in agricultural production settings [17][18][19][20][21][22][23][24][25], however multiple data gaps remain [16], including a specific need for information on number and type of ARB/G associated with land application of animal manures [13]. Complicating the efforts to inform environmental antibiotic risk assessment is the growing realization that specific answers depend, to a large extent, on the particular target that was measured [25,26], and the need to inform monitoring and surveillance efforts [8,15]. In order to provide information on ARGs associated with land application of agricultural wastewaters, this study examines ARB/G in a Vegetative Treatment System (VTS) used to treat beef cattle feedlot runoff [27]. A VTS works by collecting runoff, and then distributing it onto land with vegetation, where the plants will use the nutrients and water. It differs from a vegetative buffer strip in that a VTS is sized and graded specifically for the operation, and the release of the runoff is controlled. The relatively new design of VTS systems are seeing greater application in animal production operations, but their general function of repeated wastewater application to a defined treatment area could potentially enrich for ARB/G in their treatment areas. This study specifically investigates this concern. Site Description This study was conducted on a central Nebraska vegetative treatment system (VTS) demonstration project, used to treat beef cattle feedlot runoff [27]. The initial feedlot capacity was slightly over 1000 head of feeder calves, and the feedlot owners were interested in expanding the operation to 1200 head while maintaining the same area footprint. A north lot was constructed to handle additional animals, and a pump-based VTS was installed to manage feedlot runoff, and prevent manure nutrients from contaminating nearby surface waters. The VTS consisted of holding ponds designed to collect the runoff from a 25-year/24 h storm, a series of pipes and pumps that distributes runoff, and a set of eight vegetative fields to which the runoff is applied. The feedlot runoff was drained into a sump, conveyed uphill through 8-inch underground pipe to the eight vegetative treatment areas (VTAs), and applied at the top of the fields through irrigation pipe ( Figure 1). The eight VTA cells at this site were built on Hord Silty Loam and Wann Fine Sandy Loam soil, planted to Meadow Brome, Tall Fescue, Orchard Grass, Smooth Brome, Intermediate Wheat, and Pubescent Wheat [27]. Experiments were started after the grasses had had three years to become established. Each VTA was 244 m × 19.5 m (4.5 ha), and they were separated by earthen berms. An additional berm was built along the bottom of the VTA to collect any excess runoff. Pipes channeled excess runoff back to the sump, creating a closed system. Following a rainfall event, the runoff from the feedlot pens was collected in unlined settling basins at the bottom of each pen. Cattle were excluded from the basins by fencing. After allowing solids to settle for 24-48 h, the feedlot wastewater was channeled into the pumping station, for distribution onto the VTAs. Rain that fell on the grassy VTA cells, but that did not infiltrate into the soil, was collected at the bottom of the VTA cells. After collection of the rainfall runoff, valves at the bottom of the VTA were opened, allowing the VTA rainwater runoff to also drain into the pumping station. 
Experimental Design In order to evaluate the efficacy of the VTS and the dynamics of runoff-associated antibiotic resistant bacteria and antibiotic resistance genes (ARB/G), samples were collected following individual spring and summer rainfall events, for three consecutive years, for a total of six collection time points (Figure 2). The original design included fall rainfall collection as well, but for the three years of the study, no fall rainfall events occurred that were large enough to require VTA application. Three types of samples were collected: rainfall runoff, feedlot runoff, and excess runoff. Rainfall runoff consisted of rain that had fallen on the VTA cells, but that did not infiltrate into the soil. VTA cells were paired (see Figure 1), and rainfall runoff was pooled for sets of two cells. The ARB/G in the VTA rainfall runoff represent organisms and genes that were still available for transport since the previous feedlot runoff application. Feedlot wastewater consisted of material that collected in the settling basins following a rain event. It included cattle manure and feedlot surface material particles suspended in the rainwater. Excess wastewater consisted of feedlot runoff that had been applied at the top of the VTA, and that was collected at the bottom of the VTA. Under normal operation, there was no excess wastewater. The VTA was usually managed so that only the amount of feedlot wastewater that could infiltrate the soil was applied to each VTA cell. For these experiments, feedlot wastewater was deliberately overapplied, so that we could evaluate the potential of the VTA as a mitigation for ARB/G. Sample Collection and Processing All samples were collected in sterile 1 L screw-cap plastic Nalgene bottles, placed immediately in coolers for transport back to the laboratory, and processed the same day as collection. In the laboratory, samples were homogenized by shaking the bottles before aliquots were removed for analysis. Ten-fold dilutions were made using phosphate-buffered saline (ThermoFisher, Waltham, MA, USA). In order to enumerate antibiotic resistant bacteria, and obtain isolates for the multiple drug resistance testing commonly used in surveillance and monitoring programs [28], an Eddy Jet spiral plater (Neutec Group, Farmingdale, NY, USA) was used to plate all sample dilutions, in duplicate, onto MacConkey agar (MAC, Difco, Detroit, MI, USA), MacConkey with 16 µg mL−1 tetracycline (TMAC), and MacConkey with 4 µg mL−1 cefotaxime (CMAC). Tetracycline was chosen because it is commonly used as a target to monitor antibiotic resistance in environmental samples, and cefotaxime was chosen because it is related to drugs that are fed to cattle and that are associated with drug resistant infections in children [29]. All plates were incubated overnight at 37 °C, and counted by hand 18-24 h after plating, using a standard spiral count procedure. Up to three isolates per sample were picked from each media type for further characterization, streaked for isolation, and stored at −80 °C.
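To make the enumeration arithmetic concrete, the minimal sketch below converts plate counts from serial ten-fold dilutions into CFU per mL and into the log CFU/mL units used in Tables 1 and 2. The colony counts, dilution, and plated volume are hypothetical placeholders, not values from this study, and real spiral-plate counting uses segment-specific volume factors supplied with the plater.

```python
import math

def cfu_per_ml(colony_count, dilution, plated_volume_ml):
    """Convert a colony count on one plate back to CFU per mL of the
    original, undiluted sample."""
    return colony_count / (plated_volume_ml * dilution)

# Hypothetical duplicate plates of one feedlot-wastewater sample:
# 10^-3 dilution, ~0.05 mL deposited per plate (placeholder volume).
counts = [47, 52]
estimates = [cfu_per_ml(c, dilution=1e-3, plated_volume_ml=0.05) for c in counts]
mean_cfu = sum(estimates) / len(estimates)

print(f"mean: {mean_cfu:.2e} CFU/mL")                    # ~9.9e+05 CFU/mL
print(f"log10: {math.log10(mean_cfu):.2f} log CFU/mL")   # ~6.00
```

On this scale, the 2-to-3 order-of-magnitude difference reported between feedlot wastewater and rainfall runoff corresponds to a 100- to 1000-fold difference in culturable resistant organisms.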
All isolates were later grown overnight at 37 °C in tryptic soy broth (Difco), and then confirmed as Escherichia coli using EC-MUG broth (Oxoid, Basingstoke, Hampshire, UK). Isolates that did not display the fluorescence typical of E. coli were excluded from further analysis. Disk diffusion analysis was performed on all confirmed E. coli isolates against 12 drugs (n = 10,596 tests), according to Clinical Laboratory Standards Institute (CLSI) standard methods [30]. Briefly, Mueller-Hinton broth (Becton Dickinson, Franklin Lakes, NJ, USA) was used to grow isolates overnight, cultures were adjusted to a standard optical density, and swabbed onto Mueller-Hinton agar plates. The CLSI clinical breakpoints were used to assign isolates to "resistant", "intermediate", or "sensitive" categories, although it is acknowledged that these isolates were of environmental origin, and were not confirmed human pathogens. For analysis, "intermediate" isolates were grouped with "sensitive" isolates, since both groups are not resistant. The term "sensitive" is used in the results and discussion to include both the CLSI "sensitive" and CLSI "intermediate" isolates. The following drugs were used in the disk diffusion assays: amoxicillin/clavulanic acid (20 µg), ampicillin (10 µg), cefoxitin (30 µg), ceftriaxone (30 µg), chloramphenicol (30 µg), ciprofloxacin (5 µg), gentamicin (10 µg), kanamycin (30 µg), nalidixic acid (30 µg), streptomycin (10 µg), sulfamethoxazole-trimethoprim (25 µg), and tetracycline (30 µg). The tetracycline resistance gene profile of the isolates was probed using the polymerase chain reaction (PCR), following the protocols of Ng et al. [31] with Jumpstart RedTaq Master Mix (Sigma, St. Louis, MO, USA). Eleven targets were assayed, representing all three tetracycline resistance gene mechanisms. These included the efflux targets tet(B), tet(C), tet(D), tet(K), tet(L), and tetA(P); the ribosomal protection targets tet(M), tet(O), tet(Q), and tet(S); and the enzymatic target tet(X). Thermocycling conditions consisted of 1 cycle of 94 °C for 5 min; 35 cycles of 94 °C for 1 min, 55 °C for 1 min, and 72 °C for 90 s; and one cycle of 72 °C for 5 min [31]. Positive control strains were created by cloning [26], and are available upon request. Results A total of 178 wastewater and manure samples were collected over three years from the VTS following spring and summer rainfall events. Nebraska experienced drought over the course of the experiment, and no fall rainfall events were significant enough to allow for VTS operation. Also, there was no rainfall runoff available to be collected in the summers of year 2 and year 3. From the 178 samples, a total of 942 presumptive E. coli isolates were collected, and screened phenotypically and genotypically for selected antibiotic resistance targets (n = 444 from MAC, 467 from TMAC, and 31 from CMAC). Enumeration of Tetracycline and Cefotaxime Resistant Bacteria Tetracycline resistant Gram negative enteric bacteria (bacteria that grew on MacConkey plates supplemented with 16 µg mL−1 tetracycline) were detected in all sample types (rainfall, wastewater, and excess) and at all collection times. Mean counts ranged between 1.12 and 5.12 log CFU/mL of sample (Table 1). Three-year mean values for rainfall runoff, feedlot wastewater, and excess wastewater were 1.56 log CFU/mL, 4.75 log CFU/mL, and 4.09 log CFU/mL, respectively (Table 2).
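As a sketch of the isolate-level bookkeeping behind the disk diffusion assays described above, the snippet below assigns zone diameters to resistant or sensitive categories (with intermediate grouped with sensitive, as in this study) and tallies how many drugs each isolate resists. The breakpoint numbers and zone diameters are illustrative placeholders only; the real breakpoints are drug-specific and must be taken from the current CLSI tables.

```python
# Illustrative zone-diameter breakpoints (mm); NOT the published CLSI values.
BREAKPOINTS = {
    "tetracycline": {"resistant_max": 11},
    "streptomycin": {"resistant_max": 11},
    "ampicillin":   {"resistant_max": 13},
}

def classify(drug, zone_mm):
    """Return 'resistant' or 'sensitive'; intermediate zones are grouped
    with sensitive, mirroring the analysis choice described in the text."""
    if zone_mm <= BREAKPOINTS[drug]["resistant_max"]:
        return "resistant"
    return "sensitive"   # covers both CLSI 'intermediate' and 'sensitive'

# Hypothetical zone diameters (mm) for two isolates.
isolates = {
    "iso_001": {"tetracycline": 9,  "streptomycin": 13, "ampicillin": 20},
    "iso_002": {"tetracycline": 22, "streptomycin": 18, "ampicillin": 16},
}

for name, zones in isolates.items():
    calls = {drug: classify(drug, z) for drug, z in zones.items()}
    n_resistant = sum(call == "resistant" for call in calls.values())
    print(name, calls, f"-> resistant to {n_resistant} drug(s)")
```

Profiles built this way feed directly into the single-, two-, and multi-drug resistance counts reported below.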
Tetracycline Resistance Gene Assays Up to three isolates were randomly selected per sample, for each of the three media types (MAC, TMAC, CMAC), streaked for isolation, and stored for antibiotic resistance gene (ARG) screening. Isolates were screened for the carriage of eleven tetracycline resistance genes, representing three tetracycline resistance mechanisms. Tetracycline resistance gene prevalence, by year, season, and sample type, is displayed in Figure 3. The prevalence of individual tetracycline resistance genes varied for all parameters observed. The tet(B) gene was the most frequently detected gene in this sample set, followed by tet(L) and tet(C). Most targets (excluding tet(C) and tet(Q)) were more frequently detected in spring isolates, compared to summer isolates. When examining the results by sample type, individual ARGs were less frequently detected in the rainwater runoff samples than in the feedlot wastewater and excess wastewater. (Note that the x-axis letters in Figure 3 refer to tetracycline resistance genes.) Disk Diffusion Assays Following E. coli confirmatory tests, 673 confirmed E. coli isolates were screened for multiple drug resistances using CLSI standardized disk diffusion assays. Of these, 27% (n = 179) were pan-susceptible to the 12-drug panel, 36% (n = 241) displayed phenotypic resistance to a single drug, 22% (n = 146) displayed phenotypic resistance to two drugs, and 7% displayed phenotypic resistance to three. Of the remaining 62 isolates, 18 displayed resistance to seven of the assay targets, and one isolate displayed resistance to nine drugs. The majority of isolates displaying resistance to seven or more targets (n = 17/18) were picked from CMAC plates. The maximum number of resistances displayed by any one isolate at each of the six timepoints ranged from 3-9, and the maximum number of phenotypic resistances observed at any one time-point (all samples grouped together) ranged from 5-12 (Figure 4). (For Figure 4, isolate numbers were: MAC, n = 31, 293, and 120 for rainfall runoff, feedlot wastewater, and excess wastewater, respectively; TMAC, n = 33, 293, and 141; CMAC, n = 0, 26, and 5. Only samples that fluoresced in EC-MUG, i.e., confirmed as E. coli, were screened for resistance patterns, and up to three isolates were picked per sample and media type.) The relationships between the rainfall runoff, feedlot wastewater, and excess runoff for tetracycline and streptomycin are displayed in Figure 5. Tetracycline and streptomycin were the most frequently detected resistances in isolates. The results for the remaining targets are available in Supplementary Table S1 and Supplementary Figure S1 (Disk Diffusion Supp Data). The tetracycline results were typical of the majority of targets, with slightly fewer feedlot wastewater isolates, proportionately, displaying resistance to the drug, compared to excess wastewater.
Rainfall runoff samples, when present, tended to have a lower proportion of isolates displaying resistance for each target. Discussion Mammalian feces are a rich source of bacteria, including antibiotic resistant bacteria and their genes (ARB/G). As such, wastewater treatment has an important role as a critical control point for remediation or mitigation. Due to the wide diversity of wastewater treatment options for food animal production systems, many qualitative data gaps remain before comprehensive risk assessment studies can be completed. This study provides seasonal and temporal data (3 years) on ARB/G in a VTS designed to manage wastewater from a beef cattle feedlot operation. Beef cattle are raised in every state of the U.S. [32], and there are an estimated 26,586 U.S. cattle feedlot operations. Data on ARB/G from beef production wastewaters is therefore an important component of efforts to understand and control ARB/G from agricultural production systems.
In this study, tetracycline resistant bacteria (defined as the ability to grow on media with 16 µg mL−1 tetracycline) were cultured from all samples, with numbers from rainfall runoff generally 2-3 logs lower than feedlot wastewater. The rainfall runoff was a reflection of the persistence of bacteria and genes following application of wastewater to the soil, and the lower numbers indicate that the VTS was effective at reducing ARB over time. Manure-borne bacteria thrive in the lower gastrointestinal tracts of mammals, where the temperatures are warm and stable. Following growth in the GIT, enteric bacteria are excreted into the environment where nutrients are often limiting, temperatures are generally cool and exceptionally variable, and competition from environmental bacteria is strong. While the bacteria can grow in the environment [33], most die before being re-inoculated into an animal host [34]. In light of these dynamics, and given the rich bacterial communities of soil, plants, and the rhizosphere, it is not surprising to see the observed lower numbers of ARB in the rainfall runoff samples. Compared to tetracycline, cefotaxime resistance (growth on media containing 4 µg mL−1 cefotaxime) was very low (less than 200 CFU per mL) and was only detected during spring events in years two and three. An increase was noted between years two and three, but due to the small number of measurements, it is difficult to know if this is a long-term trend. Whether this is true seasonality is interesting to consider, but more study is needed to see if this is a consistent pattern in wastewater. It should be noted that no cefotaxime resistant microorganisms were detected in the rainfall runoff. Similar to the ARB results, tetracycline ARGs were consistently present in lower numbers in the rainfall runoff, compared to the wastewater samples. Unlike phenotypic resistance, which is characterized by a single measure (growth in the presence of a defined concentration of tetracycline), genotypically there are dozens of genes that code for tetracycline resistance [35,36]. Here we examined presence/absence of eleven genes, representing all three tetracycline resistance mechanisms. The most informative tetracycline resistance gene targets for this sample set were tet(B) and tet(L) (Figure 3), both coding for efflux proteins. When examined by season, both tet(B) and tet(L) occurred in a higher proportion of the spring isolates compared to the summer isolates. A third target, tet(C), was present in over 10% of isolates, but was isolated from a higher proportion of summer isolates. These particular results reinforce the idea that individual ARGs each have their own ecology [25,26], and the outcomes of data collection can vary depending on which target is chosen as a measure of resistance. In native Nebraska prairie soils, tet(L) and tet(B) were also informative, along with tet(D) and tet(O), which were detected in 54 and 38 percent of the samples, respectively. In the current study, tet(D) was only found in four isolates, and tet(O) was found in 38/822 isolates, with most of the tet(O) isolates (n = 30) coming from the feedlot runoff samples [26]. In contrast, the most frequently detected tetracycline resistance genes in Nebraska organic farm soils were tet(Q), tet(S) and tet(X) [25].
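Because the argument above turns on how strongly the apparent amount of resistance depends on which gene is counted, a small prevalence summary like the one sketched below makes that comparison explicit per gene and sample type. The presence/absence calls are made-up illustrations, not data from this study.

```python
from collections import defaultdict

# Hypothetical PCR presence/absence calls: (sample_type, gene, detected)
records = [
    ("feedlot",  "tet(B)", True),  ("feedlot",  "tet(B)", True),
    ("feedlot",  "tet(M)", False), ("rainfall", "tet(B)", True),
    ("rainfall", "tet(B)", False), ("rainfall", "tet(M)", False),
]

hits = defaultdict(int)
tested = defaultdict(int)
for sample_type, gene, detected in records:
    tested[(sample_type, gene)] += 1
    hits[(sample_type, gene)] += int(detected)

for key in sorted(tested):
    prevalence = 100.0 * hits[key] / tested[key]
    print(f"{key[0]:8s} {key[1]:7s} {prevalence:5.1f}% of {tested[key]} isolates")
```

Running such a summary separately for each candidate monitoring target shows directly how the choice of gene changes the apparent level of resistance in the same set of isolates.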
Examining results from all three sites as a way to determine which tetracycline resistance genes might be the most relevant for monitoring and screening, it is clear that the heterogeneity in tetracycline resistance gene prevalence makes the choice of a single target or set of targets difficult without prior knowledge of the tetracycline resistance gene profile of the source materials. Of note, the prairie and organic farm studies surveyed whole community samples, while the current study examined isolates. As with microbial source tracking, protocols will need to start by specifically defining the research question, and include identification of the target, identification of the method (gene-based or culture-based), and identification of the analytical approach [37]. As with microbial source tracking, baseline information on sites and targets will improve study outcomes. In addition to plating and ARG screening, E. coli VTS isolates were characterized by their antibiotic resistance patterns to better understand the diversity and distribution of resistance in the microbial community. The majority of isolates were considered clinically sensitive to the antibiotics we used for screening. Both single and multiple resistant E. coli strains were more likely to be found in the feedlot wastewater and excess wastewater, compared to rainfall runoff (Figure 3). The rainfall runoff provides information on the functioning of the VTS over time, since it provides insight into what organisms survive on and are released from the VTS after repeated feedlot wastewater applications. The lower numbers of ARGs found in rainfall runoff suggest that the VTA is effective in remediating the ARGs from feedlot runoff over time. As might be expected, isolates picked from plates containing antibiotics were more likely to display multiple resistances, likely due to the selective nature of the isolation, demonstrating the impact of isolation method on interpretation of results. Isolates from the TMAC plates were more likely to display clinical resistance to at least one antibiotic in our panel (94% of TMAC isolates categorized as resistant based on CLSI disk diffusion results), compared to isolates picked from the plain MAC plate (45% resistant). Of interest is that isolates from the CMAC plates were frequently found to be resistant to multiple drugs, with the majority of isolates (66%) displaying resistance to 5 of the 11 drugs used for screening in this study. So although the prevalence of the cefotaxime-resistant bacteria is low compared to the tetracycline resistance phenotype, their high degree of multi-drug resistance means that they are an important component to monitor in these systems. Isolate resistance expressed in terms of years or seasons showed no distinct patterns other than the aforementioned detection of cefotaxime resistant microorganisms only during early spring. VTS have been developed and built as an alternative to conventional full containment systems (holding pond systems) for managing manure, litter, and/or process wastewater discharges from animal feeding operations. VTS are used most often by small and medium-sized feedlots (less than 1000 head) [38]. An estimated 93% of feedlot operations fit this category (NCBA). In this study, a VTS was constructed to serve an operation with greater than 1000 animals. By the third year of operation, we noticed that the infiltration capacity of the soils had increased each year.
In general, land application of manure-impacted solids and liquids results in an initial spike in target levels [39], followed by a decline over time [20,40,41]. We observed similar trends in this beef cattle feedlot VTS. The VTS evaluated as part of this study was effective at preventing the release of ARB/G from agricultural wastewaters into nearby surface waters, and successful at reducing ARB/G concentrations over time. Conclusions In this study, we evaluated tetracycline and cefotaxime resistance in bacteria and genes from a demonstration VTS, including feedlot wastewater, and background resistance from rainfall on application fields. Tetracycline resistance in the wastewater was 2 to 3 orders of magnitude higher, compared to rainfall runoff from the vegetated fields, indicating decay of ARB/G over time on the VTA. In addition to collecting data to inform risk models and better understand the ecology of tetracycline and cefotaxime resistance in agricultural wastewaters, our data provide an insight into the influence of microbiological methods on the detection of ARB/G in agricultural systems. Our data clearly reveal that the conclusions one can draw from both culture-based and PCR-based methods depend on the targets measured and the methods used. In this study, for instance, the perception of "how much" tetracycline resistance was present in the system would vary greatly depending on which specific tetracycline resistance gene was chosen. It has been suggested that tet(M) be used for monitoring resistance in environmental samples [42]. In this instance, tet(M) prevalence in our E. coli isolates was less than 10%, compared to tet(B), which was 30-50%, depending on the year. In addition, the amount of multiple-drug resistance observed in E. coli isolates was strongly impacted by the type of media on which the sample was first plated. This media-based difference has important implications for antibiotic resistance monitoring and surveillance efforts, particularly in the context of wastewater and public health. It will be essential to standardize not only which targets are measured, but also how the isolates are first cultured from the environment. Based on results from this study, if detection of low-level targets is desired, use of selective media may prove useful.
2018-07-03T23:06:14.836Z
2018-06-21T00:00:00.000
{ "year": 2018, "sha1": "b176d425374420dab7706ea7f6f444172ba628e3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/15/7/1295/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b176d425374420dab7706ea7f6f444172ba628e3", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
214682366
pes2o/s2orc
v3-fos-license
Analytical Measurements to Elucidate Structural Behavior of 2,5-Dimethoxy-1,4-benzoquinone During Charge and Discharge Abstract Organic compounds as electrode materials can contribute to sustainability because they are nontoxic and environmentally abundant. The working mechanism during charge-discharge for reported organic compounds as electrode materials is yet to be completely understood. In this study, the structural behavior of 2,5-dimethoxy-1,4-benzoquinone (DMBQ) during charge-discharge is investigated by using NMR spectroscopy, energy-dispersive X-ray spectroscopy, magnetic measurements, operando Raman spectroscopy, and operando X-ray diffraction. For both lithium and sodium systems, DMBQ works as a cathode accompanied by the insertion and deinsertion of Li and Na ions during charge-discharge processes. The DMBQ sample is found to be in a two-phase coexistence state at the higher voltage plateau, and the radical monoanion and dianion phases have no long-distance ordering. These structures reversibly change into the original neutral phase with long-distance ordering. These techniques can show the charge-discharge mechanism and the factors that determine the deterioration of organic batteries, thus guiding the design of future high-performance organic batteries. Many organic compounds have been reported to fulfill the requirements of such a system. However, the reported data are primarily related to the electrochemical behavior of these compounds. In many cases, other physicochemical measurements were not fully undertaken. Previously, Yao and co-workers tested many types of low-molecular-weight benzoquinone analogs as active materials and reported that 2,5-dimethoxy-1,4-benzoquinone (DMBQ) was suitable as a high-capacity positive electrode material. [43,45,61-64] They explained that DMBQ exhibits a high capacity of around 300 mAh g−1 because of the two-electron transfer-type redox reaction (Scheme 1). They reported that X-ray diffraction (XRD) measurements of the DMBQ electrode gave certain peaks derived from crystalline DMBQ in a fully charged (delithiated) state. [61,62] As shown in Figure 1, the electrode at the fully discharged (lithiated) state did not show any peaks (the discharged data in Figure 1 were not published in any journal and are only mentioned in Refs. [61] and [62]). However, DMBQ's crystallographic information during the transition state, particularly at the intermediate radical monoanion state [DMBQ(·−)], was unclear. Furthermore, as a physicochemical measurement for the DMBQ-Li system, other techniques compared to the preliminary XRD were not sufficiently investigated.
Furthermore, although it would have been of considerable interest to researchers, whether the low-molecular-weight molecules coexisted in two phases during the reaction or rather in a homogeneous phase was not evident. Based on the assumed reaction formula shown in Scheme 1 and the plateau shown in the charge-discharge curve of the DMBQ electrode, [62] it would have been reasonable to conclude that a two-phase coexistence reaction had occurred. However, physicochemical evidence was still required to elucidate this reaction mechanism. In this study, the detailed crystallographic structure change of DMBQ during charge-discharge in a Li system was investigated by using an operando XRD technique. This technique, and the crystallographic change in an Na electrolyte system, were recently developed by one of our co-authors, Takeichi. [65] In addition to XRD measurements, to determine the charge carrier for the DMBQ cell, 7 Li NMR spectroscopy was carried out. Furthermore, to detect the intermediate radical compound, magnetic measurements were conducted by using a superconducting quantum interference device (SQUID). Raman spectroscopy, another common tool to examine the mechanism during charge-discharge, [66-71] was used, along with theoretical calculation results based on density functional theory (DFT), to investigate the DMBQ electrode change behavior. In this study, the techniques examined are extremely helpful for revealing the charge-discharge mechanism and for determining the factors affecting the deterioration of organic batteries, which could form the basis on which high-performance organic batteries could be designed. Results and Discussion Li system Elucidating the carrier ion Multiple types of organic active materials have been reported to be suitable for the Li system. Their reaction mechanisms are classified into two categories: in the first, active materials accept and release Li ions, [72] whereas the second uses an anion during the charge-discharge process rather than the Li ion. The difference in mechanism can have a considerable effect on cell design when fabricating a large-capacity cell. Therefore, analyzing and proving the mechanism becomes very important. To identify the charge carrier in the current DMBQ battery, the Li content concentration change in the electrode was examined ex situ by using 7 Li NMR spectroscopy. Figure 2 shows the change in the stoichiometric ratio of Li to DMBQ (R Li/DMBQ) in the extracted solutions from the electrodes in the first two cycles. While the initial DMBQ molecule did not contain any Li ions (R Li/DMBQ = 0), the ratio increased to 1.8 (ca. 2) after the first discharge. Subsequently, the ratio decreased to as low as 0.3 when the discharged electrode was recharged. A similar tendency in the ratio change was observed in the next cycle (second discharge: R Li/DMBQ = 1.7; second charge: R Li/DMBQ = 0.3). The observed stoichiometric ratio change indicated that the DMBQ molecule received and released two Li ions during the cathodic and anodic reactions and that the charge carrier in this system was considered to be a Li ion.
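The roughly two-Li-per-DMBQ stoichiometry observed here is consistent with the capacity of around 300 mAh g−1 quoted in the Introduction. The short check below recomputes the theoretical two-electron capacity from the Faraday constant and the molar mass of DMBQ; it uses only standard constants, not data from this work.

```python
F = 96485.0          # C per mol of electrons (Faraday constant)
M_DMBQ = 168.15      # g/mol for 2,5-dimethoxy-1,4-benzoquinone (C8H8O4)
n_electrons = 2      # two-electron quinone/dianion redox couple

# 1 mAh = 3.6 C, so specific capacity in mAh/g = n * F / (3.6 * M)
q_theoretical = n_electrons * F / (3.6 * M_DMBQ)
print(f"theoretical capacity: {q_theoretical:.0f} mAh/g")   # ~319 mAh/g

# The ~240 mAh/g obtained for the Na cell later in the paper is then
# 240/319 ~= 0.75, in line with the "roughly 80% of theory" statement.
```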
Magnetic measurement During the charge-discharge process, the DMBQ electrode demonstrated two-plateau potential regions. At the higher potential, the transition between the neutral DMBQ molecule and the radical anion species was considered to proceed (Scheme 1). Moreover, the lower potential region could reflect the transition between the radical anion state and the dianion state. To confirm the formation of radical species, a magnetic measurement was performed for the electrodes obtained from the cells, which were stopped at a given depth of discharge (DOD). Figure 1. Changes in the XRD patterns of DMBQ powder and the electrode after the first and second discharge (lithiation) and charge (delithiation). This sample underwent discharge first because it was initially in an oxidized state. The charged patterns were adapted from our previous report [62] with permission from Elsevier. The discharged ones were newly added. Whereas, in the organic battery field, electron spin resonance (ESR) spectroscopy has often been applied for such a purpose, we decided to use another technique, the SQUID apparatus, in this study. SQUID, which specializes in measuring solid-state samples, is an easy and powerful tool for obtaining a detailed temperature dependency and magnetic field dependency of the magnetic susceptibility. [73] Therefore, SQUID is commonly used in the molecular magnetism field to examine the magnetic interactions of solid samples. It is possible to amplify the magnetic responses from samples by increasing the applied magnetic field strength. Accordingly, this technique is suitable for detecting a weak signal. Figure 3a shows the temperature dependency of the magnetic response (magnetic susceptibility) at each discharge capacity. As expected, all samples showed a magnetic response that obeyed Curie's law, χ = C/T [Eq. (1)], where χ is the magnetic susceptibility, C is the Curie constant, and T is the absolute temperature, in the applied temperature range. The observed behaviors were paramagnetic. This result indicates that the magnetic interaction between each molecule in the electrode was very weak, implying the high stability of radical species in the electrode. Furthermore, the intensity change of the magnetic response during the discharge process was examined at 3.0 K (Figure 3b). The intensity of the magnetic response increased as the DOD increased to around 50%, i.e., 150 mAh g−1. After the maximum intensity at approximately 150 mAh g−1, the response started to decrease. This observation indicates that, during the former discharge process, the radical concentration increased (DOD = 0-50%). It decreased in the subsequent discharge process (DOD = 50-100%), confirming the reaction mechanism model depicted in Scheme 1. Operando Raman spectroscopy with DFT calculations We assumed that DMBQ changed its state, as shown in Scheme 1, during charge-discharge. The Raman spectra of each state should have been different from each other. Therefore, we performed Raman spectroscopy in operando while operating the cell. Figure 4a shows the second charge curve of the DMBQ-Li half-cell for the operando Raman spectroscopy. The spectra in black in Figure 4b (labeled A-F) correspond to timings marked with circles in Figure 4a. During the lower plateau, there seems to have been no change in the Raman spectra. Note that the Raman spectrum obtained at state A may not have reflected the very state of A.
The length of the lower plateau was lesser than that of the higher plateau, which is possibly because of the insufficient discharge during the last charging, which can be improved by applying a lower current density. During the upper plateau, there seems to have been changes at around 1334, 1584, and 1600 cm−1; the peaks at around 1334 and 1600 cm−1 increased, whereas that at around 1584 cm−1 decreased. These changes between the spectra indicate that a fraction of the main component in state B decreased to zero and a fraction of the main component in state F increased from zero during the upper plateau, which may indicate that there was a two-phase coexistence at the upper plateau. By using ex situ Raman spectroscopy, the differences in the Raman spectra during charge-discharge would have been difficult to analyze, largely because subtle differences in the conductive additives, binders, and impurities would have affected the spectra. However, operando Raman spectroscopy avoided such effects. Therefore, we were able to shed light on the DMBQ phase change analysis. Next, we considered that Raman spectroscopy alongside DFT calculations would help to examine the DMBQ phase change during charge-discharge. We performed DFT calculations to obtain the theoretical Raman shifts for certain states of DMBQ. The harmonic vibrational frequencies of the Raman spectra of [DMBQ(0)], [DMBQ(·−)], and [DMBQ(2−)] were calculated by using DFT at the B3LYP/6-31+G(d) level in the Gaussian 16 [74] program and the GaussView 6 [75] molecular visualization program package. For the [DMBQ(·−)] and [DMBQ(2−)] states, the DMBQ skeleton was positioned next to one and two Li ions, respectively. These structures were then optimized. Figure 5 shows the obtained spectra. The simulation results were consistent with the measured results for the states from [DMBQ(·−)] to [DMBQ(0)], which correspond to the states from B to F. The simulation results were not consistent with the measured results for [DMBQ(2−)], which corresponds to state A. As mentioned above, the Raman spectroscopy must be conducted at a lower rate to afford more detailed discussion. The simulated peaks at 1393 cm−1 for [DMBQ(0)], 1645 cm−1 for [DMBQ(·−)], and 1659 cm−1 for [DMBQ(0)], all of which are related to skeleton plane vibration modes, are considered to correspond to the peaks at 1334 cm−1 for F, 1584 cm−1 for B, and 1600 cm−1 for F, obtained from the measured results.
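One way to turn the band-intensity argument above into a phase-fraction picture is to track the intensity of a marker band of the neutral state and a marker band of the radical monoanion across the operando spectra. The sketch below uses synthetic spectra and assumed band positions (1334 cm−1 for the neutral state, 1584 cm−1 for the monoanion); it is an illustration of the idea, not the authors' analysis code.

```python
import numpy as np

def band_intensity(wavenumbers, intensities, center, window=10.0):
    """Mean intensity within +/- window (cm^-1) of a band center."""
    mask = np.abs(wavenumbers - center) <= window
    return float(intensities[mask].mean())

# Synthetic stand-in for operando spectra evolving from state B to state F.
wn = np.linspace(1200.0, 1700.0, 1001)
for step, frac_neutral in enumerate(np.linspace(0.0, 1.0, 6)):
    spectrum = (frac_neutral * np.exp(-((wn - 1334.0) / 8.0) ** 2)          # neutral marker band
                + (1.0 - frac_neutral) * np.exp(-((wn - 1584.0) / 8.0) ** 2))  # monoanion marker band
    i_neutral = band_intensity(wn, spectrum, 1334.0)
    i_anion = band_intensity(wn, spectrum, 1584.0)
    print(f"step {step}: I(1334) = {i_neutral:.2f}, I(1584) = {i_anion:.2f}")
```

In a two-phase picture the two marker intensities vary in opposite directions with capacity while their positions stay fixed, whereas a homogeneous (solid-solution) reaction would instead shift the band positions continuously.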
Crystallographic phase change examination by operando XRD As shown in Figure 1, the XRD pattern of the DMBQ electrode at the fully charged (delithiated) state had certain peaks derived from DMBQ in the crystalline state. However, the DMBQ electrode at the fully discharged (lithiated) state did not show peaks, indicating that it had lost long-distance ordering. [61,62] In this case, the crystallographic information on DMBQ's transition state, i.e., the radical monoanion state [DMBQ(·−)], was unclear. Thus, in this study, operando XRD measurements were carried out to reveal the transition state. Figure 6a shows the initial discharge curve and the following charge curve of the DMBQ electrode using the operando two-electrode coin cell. The curves have two plateaus, and the capacity of each plateau was approximately 150 mAh g−1. The cell voltages at each plateau were roughly 2.8 and 2.6 V. These values closely resemble those reported previously, [62] indicating that the operando cell is sufficient for examining DMBQ as a half-cell and for XRD measurement. Figure 6b, c, e shows XRD patterns of the DMBQ electrode obtained under operando conditions during the initial charge-discharge. The XRD patterns shown in Figure 6b, c correspond to the circled timings in Figure 6a. The discharged state of DMBQ is amorphous. This is evident because the XRD patterns have no peaks other than those of the additives and current collector (see Figure 1). Figure 6b, c, e shows the gradual peak disappearance during the higher plateau in the initial discharge. No changes in the peak position or half width of the peak are observed, indicating the unchanged lattice constant of the existing crystal phase and unchanged crystallite size and, hence, two-phase coexistence. If the reaction had been in a homogeneous phase, the peak would have gradually shifted and the half width might have changed during charge-discharge. These changes indicate that long-distance ordering gradually disappeared at the higher plateau. The phase then changed to an amorphous one. For the XRD profile change during the lower plateau in the initial discharge, no change in the pattern, and thus no long-distance ordering, was observed. With reference to the lower plateau region, no XRD profile changes were observed, even during the subsequent charge. However, the peak that had disappeared returned when the charging process proceeded to the higher plateau region. The peak intensity gradually increased without any changes in the half width or peak position, indicating that this transition system was a two-phase coexistence one. The relationship between the peak intensity at around 12.5° in the XRD pattern and the discharge and charge capacity is shown in Figure 6b. Regarding the peak intensity during the higher plateau, there was an increase and decrease according to the state of charge. Poizot et al. [12] observed similar two-phase coexistence behavior by using the galvanostatic intermittent titration technique for another DMBQ analog, 2,5-diamino-1,4-benzoquinone (DABQ), which gave rise to very flat plateaus. Moreover, they reported that the operando XRD pattern of DABQ directly indicated two-phase coexistence. [12] In their study, DABQ had long-distance ordering in the fully charged state, the half-charged state, and the fully discharged state. This is a different result from that obtained for DMBQ, and the origin of this difference requires clarification in the future. In this study, the peaks at 2θ ≈ 12.5° were investigated. Figure 7 shows the optical and scanning electron microscopy (SEM) images of the DMBQ powder before electrode fabrication. From these images, the powder appears to be in a crystal phase, which is confirmed by the powder XRD pattern of DMBQ (Figure 1). Figure 8 shows the XRD pattern for DMBQ simulated by using the reported crystallographic data [76] and a molecular arrangement in the crystal. From the crystallographic data, the peak at approximately 12.5° is derived from the scattering in the (110), (01−2), and/or (111) directions. When the crystal lacks strong anisotropy, it indicates that the peak at around 12.5° comes mostly from the scattering in the (110) direction. One of the (110) planes is shown in light blue in Figure 8b.
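The two-phase argument above rests on the 2θ ≈ 12.5° reflection changing only in intensity, not in position or width. A minimal way to quantify that from each operando scan is to fit a Gaussian plus a constant background and track the fitted area, center, and full width at half maximum against capacity. The sketch below uses a synthetic pattern with assumed peak parameters; it illustrates the fitting idea and is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def peak_model(two_theta, area, center, fwhm, background):
    """Single Gaussian reflection on a flat background."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((two_theta - center) / sigma) ** 2)
    return background + area * gauss / (sigma * np.sqrt(2.0 * np.pi))

# Synthetic scan over the 11-14 degree window used in the experiment.
two_theta = np.arange(11.0, 14.0, 0.008)
true = dict(area=120.0, center=12.5, fwhm=0.12, background=50.0)
rng = np.random.default_rng(0)
counts = peak_model(two_theta, **true) + rng.normal(0.0, 2.0, two_theta.size)

popt, _ = curve_fit(peak_model, two_theta, counts, p0=[100.0, 12.5, 0.1, 40.0])
area, center, fwhm, bkg = popt
print(f"area = {area:.1f}, center = {center:.3f} deg, FWHM = {fwhm:.3f} deg")

# Repeating this fit for every scan: a constant center and FWHM with a
# changing area is the signature of two-phase coexistence, whereas a
# drifting center would indicate a homogeneous (solid-solution) reaction.
```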
Elucidation of the carrier ion We next examined DMBQ in a Na system. Many organic cathodes have been reported to function both in Li systems and in Na systems, as well as in other multivalent-cation-based systems. [6] To identify the charge carrier in the DMBQ battery for the Na system, the Na content concentration change in the electrode was examined ex situ by using energy dispersive X-ray spectroscopy (EDX; Figure 9). Whereas the pristine electrode had no Na, the stoichiometric ratio of Na to DMBQ (R Na/DMBQ) increased to 2.7 after the first discharge. During the subsequent first charge, the ratio decreased to 0.7. Although the obtained values during the initial charge-discharge process seemed to have been slightly overestimated compared with the redox model, in which the ratio is supposed to change between 0 and 2, the value difference agrees with the model very well. This tendency was repeated for the second cycle, although the magnitude of the change became smaller, reflecting the capacity decay during cycling. The observed stoichiometric ratio change indicates that the DMBQ molecule receives and releases Na ions during the cathodic and anodic reactions as described above, confirming that the charge carrier in this system is certainly the Na ion. Crystallographic phase change examination for the Na system The phase change in crystallography was not as clear for the Na system as for the Li system. Therefore, we first assembled the conventional coin cell with DMBQ in the Na system. Figure 10a shows the first discharge and the following charge curve. The obtained capacity was approximately 240 mAh g−1, which is equivalent to roughly 80% of the theoretical capacity, assuming the two-electron redox reaction of DMBQ. Thus, we were able to confirm that DMBQ definitely also works for the Na system. Figure 10c shows the XRD pattern of the electrode obtained under operando conditions during the initial discharge, obtained at the points marked with circles in Figure 10b. Figure 10d shows the variation in peak intensity in the XRD pattern shown in Figure 10c. The changes in the XRD pattern were very similar to those for the Li system. In future, we aim to develop this type of experiment for a Mg system, [43] in which the monovalent DMBQ would be in a different state compared to the multivalent counter cation. Conclusions In this study, we investigated, by using several tools, the structural change of DMBQ when it is used as an active material. 7 Li NMR and EDX spectroscopies demonstrated that the DMBQ molecules receive and release Li ions and Na ions, in the Li system and Na system, respectively, during the cathodic and anodic reactions. Magnetic measurements by using SQUID and operando Raman spectroscopy, alongside DFT calculations, produced data supporting the premise that DMBQ molecules in the electrode adopt three states during charge-discharge, where the intermediate state is a radical species. Operando XRD, which is a very simple method in which a hole is punched out from the cell exterior, was applied to the crystallographic analysis of DMBQ. Consequently, the DMBQ sample was considered to be in a two-phase coexistence state with the neutral [DMBQ(0)] state and the radical monoanion [DMBQ(·−)] state at the higher plateau. Moreover, the radical monoanion [DMBQ(·−)] and dianion [DMBQ(2−)] phases showed no long-distance ordering for either Li or Na systems. Two-phase coexistence during charge-discharge was also reported for other quinone derivatives. [4,9,12,15,18] Furthermore, the measurements revealed that these structures without long-distance ordering reversibly return to the original neutral [DMBQ(0)] phase with long-distance ordering.
In future, the crystallographic information on the [DMBQ(·−)] and [DMBQ(2−)] phases, based on short-range ordering, should be examined by using suitable techniques such as X-ray absorption fine structure spectroscopy. Interestingly, as confirmed by this and previous studies, DMBQ works in a divalent Mg system. [43,45,47] DMBQ crystallographic examination during charge-discharge in a divalent Mg system is of considerable interest and requires further examination. The techniques explained herein are helpful for revealing the charge-discharge mechanism and the factors determining the deterioration of organic batteries. This will be useful as a guide when designing high-performance organic batteries in future. Experimental Section Electrode preparation A positive electrode composite sheet was first prepared by mixing DMBQ powder (>98.0%, Tokyo Chemical Industry, Japan), acetylene black as the conductive additive, and poly(tetrafluoroethylene) (PTFE) as the binder, in the weight ratio 4:5:1. The sheet was then pressed onto a mesh-type stainless steel current collector. The electrode was dried under vacuum at 60 °C for 1 h. Figure 9. The change in stoichiometric ratio of Na to DMBQ (R Na/DMBQ) obtained from the EDX measurements in the first two cycles. Figure 10. a) The initial discharge and the following charge curves of the DMBQ electrode in a Na system with a traditional two-electrode coin cell. The vertical axis shows the voltage between the DMBQ working electrode and the Na counter electrode (Na CE). b) The XRD pattern of the DMBQ electrode obtained under operando conditions during initial discharge. c) Initial discharge curves of the DMBQ electrode in a Na system with a two-electrode coin cell for operando measurements. The circled point indicates the time when the XRD patterns were obtained under operando conditions. d) Relationship between the peak intensity at approximately 12.5° in the XRD pattern and the discharge capacity. Cell fabrication Cells were fabricated in an argon-filled glove box. A two-electrode coin-type cell was fabricated in a dry chamber, in accordance with a reported method, [65] where the dew point was below −60 °C. The positive electrode composite sheet pressed onto a mesh-type stainless steel current collector, Li foil, 1 M LiClO4/γ-butyrolactone (GBL), and a glass fiber filter were used as the working electrode, the counter electrode, the electrolyte, and the separator, respectively. The aforementioned process was for the Li system, whereas Na foil and 1 M NaClO4/GBL were used as the counter electrode and the electrolyte, respectively, for the Na system.
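Because the composite contains only 40 wt% active material (DMBQ:acetylene black:PTFE = 4:5:1), the capacities and currents quoted per gram of DMBQ in the following section imply a normalization step like the one sketched below. The electrode mass and measured capacity used here are hypothetical placeholders.

```python
# DMBQ : acetylene black : PTFE = 4 : 5 : 1 by weight (from the text).
electrode_mass_mg = 10.0                       # hypothetical composite mass
dmbq_mass_g = electrode_mass_mg * 1e-3 * (4.0 / 10.0)

measured_capacity_mAh = 1.2                    # hypothetical cell capacity
specific_capacity = measured_capacity_mAh / dmbq_mass_g
print(f"{specific_capacity:.0f} mAh per g of DMBQ")       # 300 mAh/g here

rate_mA_per_g = 10.0                           # lower end of the 10-20 mA g^-1 range
applied_current_mA = rate_mA_per_g * dmbq_mass_g
print(f"applied current: {applied_current_mA:.3f} mA")    # 0.040 mA
```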
Then, Al foil, with a 20 mm thickness, was pasted at the hole, and the gap was sealed with vacuum grease (Figure 11). Each cell was then set in the XRD machine to analyze the change in the bulk structure of DMBQ on discharge and charge using an X'Pert PRO MPD (PANalytical, Netherlands) diffractometer with MoKa radiation (wavelength, l = 0.70932 ) at an acceleration voltage of 60 kV and a current density of 50 mA (Figure 11). A MoKa line was used to reduce the attenuation of the Al foil rather than the copper radiation source. The XRD scans ranged from 2q = 118-148, with a step size of 0.0088. Each scan took 5 min at room temperature. We then collected the operando XRD data of the DMBQÀ Li system and the DMBQÀNa system by using the deintercalation process (electrochemical oxidation process) and intercalation process (electrochemical reduction process). Operando Raman spectroscopy and cell fabrication A two-electrode coin-type cell was fabricated in the same manner as described above. Figure 12 shows the structure of the cell used in this measurement. For light to penetrate through the exterior, a quartz glass window was positioned in the center of the working electrode side of the exterior. Note that the mixed electrode material was pressed into an Al mesh instead of onto metal foil such that the Raman scattered light could be detected from the side without a counter electrode. The Raman spectrum was then obtained in operando for each cell by using a Raman microscope (RAMANtouch VIS-NIR-LT, Nanophoton, Japan). The wavelength was 532 nm. The spectrum was obtained by averaging the signal from the scanned area (area flash mode in the machine) to prevent light energy from being concentrated in the focused area. The area was approximately 1500 1500 mm. The exposure time was 10 s (10 times of 1 second) for one spectrum and for one scanned area. Ex situ XRD measurements The same set-up was used as that used for the operando XRD measurements, but a Cu radiation source was used rather than a Mo X-ray source. After disassembling the cell without holes, the electrodes were taken from the cells and washed with GBL to remove the electrolyte salt. The resulting data for these measurements is shown in Figure 1. Energy dispersive X-ray spectroscopy EDX measurements were conducted ex situ for the quantitative analysis of elements in the electrode. In the experiment, the cells were disassembled in an argon-filled glove box. The DMBQ electrodes taken from the cells were wiped to remove the electrolyte solution and were then dried in a vacuum. The samples were then transferred to the chamber of a SEM (JSM-IT100, JEOL, Japan). The atomic ratios of sodium, fluorine, and chlorine were then collected, and then the values were corrected to remove the influence of the Figure 11. Photographs and schematic representation of the cross section of the coin cell and the XRD machine used for operando XRD measurements. [65] residue of the electrolyte salt (NaClO 4 ) by using the chlorine ratio. The calculated stoichiometric ratios of Na to F (R Na/F ) were 1.6, 0.4, 0.9, and 0.5 for the first discharge, first charge, second discharge, and second charge samples, respectively. Then, these values were converted to the stoichiometric value of Na to DMBQ (R Na/DMBQ ), considering the molar ratio of PTFE and DMBQ in the electrode. 7 Li NMR spectroscopy The Li ion concentration in the electrode during the charge-discharge process was measured ex situ by 7 Li NMR spectroscopy (500 MHz, JNM-ECA series, JEOL, Japan). 
For this measurement, the cells were assembled with a tetrahydrofuran-based electrolyte solution. The electrodes were taken out of the cells after a given charge-discharge process and were rinsed with a pure solvent to remove the electrolyte salt. The water-soluble contents were thoroughly extracted from the electrodes by immersing each electrode in a given amount of deuterium oxide. For this measurement, a coaxial double tube was used. The internal tube was filled with an organic solution of lithium bis(trifluoromethanesulfonyl)imide to use the signal intensity as an internal standard. The chemical shift values for the 7 Li nuclei in water and an organic solvent were different, reflecting the environmental difference. Therefore, it should be possible to detect the 7 Li nuclei intensity change in the target samples. The relative Li intensities of these solutions were first obtained using the 7 Li nucleus signal intensity in the solutions by comparing these signals with those of the internal Li standard solution. These values (ca. 1.4, 0.3, 1.3, and 0.2 for the first discharge, first charge, second discharge, and second charge samples, respectively) were altered to the quantitative concentration values of each solution by using calibration curves made in advance. Finally, the obtained values were converted to the stoichiometric ratio of Li to DMBQ (R Li/DMBQ ) in the electrodes.
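To make the EDX bookkeeping described above concrete, the sketch below walks through one way of converting measured Na, F, and Cl atomic fractions into R Na/DMBQ: Cl is used to subtract Na belonging to residual NaClO4 (one Na per Cl), and the fluorine signal from the PTFE binder is converted into an equivalent number of DMBQ moles through the 4:5:1 composite ratio. The atomic fractions and the exact correction formula are illustrative assumptions, not the authors' published procedure.

```python
M_DMBQ = 168.15        # g/mol
M_PTFE_UNIT = 100.02   # g/mol per -(CF2-CF2)- repeat unit, 4 F atoms each

# Hypothetical EDX atomic fractions (at.%) for one discharged electrode.
na_at, f_at, cl_at = 6.0, 3.5, 0.2

# Assume residual NaClO4 contributes one Na per Cl, and remove it.
na_corrected = na_at - cl_at
r_na_f = na_corrected / f_at

# Moles of F and of DMBQ per gram of composite (DMBQ:AB:PTFE = 4:5:1).
mol_f_per_g = 0.1 * 4.0 / M_PTFE_UNIT          # PTFE is 10 wt% of the composite
mol_dmbq_per_g = 0.4 / M_DMBQ                  # DMBQ is 40 wt%

r_na_dmbq = r_na_f * (mol_f_per_g / mol_dmbq_per_g)
print(f"R(Na/F) = {r_na_f:.2f}, R(Na/DMBQ) = {r_na_dmbq:.2f}")
# With these placeholder numbers, R(Na/F) ~ 1.66 and R(Na/DMBQ) ~ 2.8.
```

Applying the same composite-ratio factor (about 1.7) to the reported R Na/F of 1.6 for the first discharge reproduces the R Na/DMBQ of roughly 2.7 quoted in the Results.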
2020-03-29T07:16:15.689Z
2020-03-27T00:00:00.000
{ "year": 2020, "sha1": "fc72c30ca08bb0c65bb2adefe0379994d080e34f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/cssc.201903575", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "41f38c2657971917e4a4e98a19b31e1ee254d7e3", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
254974201
pes2o/s2orc
v3-fos-license
Morpheus: An A-sized AUV with morphing fins and algorithms for agile maneuvering We designed and constructed an A-sized base autonomous underwater vehicle (AUV), augmented with a stack of modular and extendable hardware and software, including autonomy, navigation, control and high fidelity simulation capabilities (A-size stands for the standard sonobuoy form factor, with a maximum diameter of 124 mm). Subsequently, we extended this base vehicle with a novel tuna-inspired morphing fin payload module (referred to as the Morpheus AUV), to achieve good directional stability and exceptional maneuverability; properties that are highly desirable for rigid hull AUVs, but are presently difficult to achieve because they impose contradictory requirements. The morphing fin payload allows the base AUV to dynamically change its stability-maneuverability qualities by using morphing fins, which can be deployed, deflected and retracted, as needed. The base vehicle and Morpheus AUV were both extensively field tested in-water in the Charles river, Massachusetts, USA; by conducting hundreds of hours of operations over a period of two years. The maneuvering capability of the Morpheus AUV was evaluated with and without the use of morphing fins to quantify the performance improvement. The Morpheus AUV was able to showcase an exceptional turning rate of around 25-35 deg/s. A maximum turn rate improvement of around 35% - 50% was gained through the use of morphing fins. I. INTRODUCTION The directional stability of autonomous underwater vehicles (AUVs) ensure the ability to maintain a steady course with minimal corrective control actions in the presence of disturbances [1]- [3]. The agility, or the maneuverability, of an AUV is the potential to make rapid maneuvers in heading and depth planes. The stability and agility have contradictory requirements; i.e. static or controlled surfaces located towards the stern of the vehicle (e.g. rudders, elevators, fixed fins, shrouds, etc.) increase the directional stability; however, they also adversely affect the maneuverability, reducing the ability to make rapid turns. This is because the increment in the stability index of a vehicle due to stern control surfaces is often larger than the turning moment it provides [4]- [11]. Both stability and maneuverability are desired features for AUVs [12]; therefore, AUVs in general are designed for a middle ground performance, partially compromising both stability and maneuverability. Improving both these features simultaneously was not possible for torpedo-shaped AUVs because they impose contradictory requirements. However, in our recent work [9], [11], we theoretically as well as experimentally showed that both stability and maneuverability can be improved by dynamically altering the directional stability, adopting the concept of bio-inspired morphing fins. A. Designing an A-sized "base" AUV An AUV is a complex system with a number of corelated subsystems. These subsystems can be primarily divided into two categories: (1) the base layer, and (2) the specialized layer. The base layer consists of components that are essential for basic autonomous operations of the vehicle (i.e. the base vehicle); for example, underwater navigation, basic autonomy, low-level control, basic communication, and related essential sensing capabilities. In general, AUVs are employed to conduct specific task(s) and mission(s). 
The specialized layer includes additional hardware and software components that are vehicle and application specific, which are built on top of the base layer. This layer may include additional hardware interfaces and drivers, sensor processing algorithms, autonomy algorithms, etc. For instance, a vehicle designed to conduct side-scan sonar mapping operations, the specialized layer will include the sonar related hardware components and specialized software modules such as sensor drivers, on-the-fly data processing and recording software, and potential autonomy algorithms for adaptive sampling and mapping. In this work, we designed and constructed an A-sized base AUV, augmented with a stack of modular hardware and software, including navigation, autonomy, control and high fidelity software-in-the-loop (SITL) and hardware-inthe-loop (HITL) simulation capabilities (note -A-size stands for the standard sonobuoy [13] form factor, with a maximum diameter of 124 mm and a length of around 0.9 m, ensuring the ability to launch from standard sonobuoy launchers onboard a wide array of fixed wing and rotary wing air crafts, surface ships and submarines [14]). Subsequently, leveraged from our previous work [11], we extended the base vehicle with a new tuna-inspired morphing fin payload module design, where the fins that can be deployed, deflected and retracted as required, augmenting the vehicle with capability to dynamically change its stability-maneuverability qualities. The designed base vehicle with morphing fin payload mechanism is named as the Morpheus AUV. B. Designing a morphing fin payload module Aquatic animals that specialize in cruising, such as tunas, require a higher directional stability to minimize the control action needed during cruising. Hence, they have streamlined bodies that are relatively stiff, limiting their body flexing to the last 30% of their length [15]. However, their prey consists of smaller fish that have high body flexibility, and, hence, by employing significant body curvature, they can turn very rapidly [16]. As a result, tunas are capable of systematically changing the shape of their body fins to dynamically change the directional stability to conduct rapid maneuvers at high speed [17], [18] -when the forward located fins are retracted, their body becomes more directionally stable, gaining the ability to stably cruise at high speeds using a small amount of control energy. When they need to make a rapid turn, especially at high speeds, they deploy the dorsal fins whose mere presence destabilizes the body, increasing maneuverability. In addition, active control of the ventral fins provides additional turning moment, and smooth transients of forces and moments to obtain the precise level of directional body stability for the intended maneuver [19]. Inspired by tuna's adaptation mechanism, our previous work [9], [11] demonstrated the ability to implement an engineered design of retractable fins for rigid-hull, torpedoshaped AUVs in order to dynamically vary the stabilitymaneuverability indices. While there is a number of recent studies and designs on developing biomimetic AUVs [20]- [23], as suggested by [24] and [25], there is still a large gap between fish-like vehicle platforms and their corresponding aquatic animal; hence, torpedo-shaped AUVs are still superior to biomimetic AUVs in terms of speed and endurance. 
In addition, many missions tasked to AUVs, such as seabed mapping, anti-submarine warfare, and surveillance, cannot be performed by biomimetic vehicles due to their unsteady large lateral motions, mechanical noise, and the difficulty of designing a sufficiently large payload bay while fulfilling soft-body bio-mimetic requirements. However, bio-mimetic AUVs are superior in terms of their maneuverability and agility, as compared to traditional torpedo-shaped vehicles [24]. Therefore, the intention of our work is not to develop a biomimetic AUV, but rather to create a bio-inspired vehicle by replicating the resultant hydrodynamic effects for a rigid-body engineered design, in order to enhance the vehicle's operational performance. Through theoretical derivations, towing tank experiments and analytical simulations, [9] investigated the ability to alter the stability and maneuvering qualities of self-propelled, rigid hull AUVs by employing morphing fins. Our previous work [11] further extended this by investigating the variation of stability-maneuverability with different vehicle configurations and appendage designs. The evaluated vehicle configurations included: (1) the bare hull vehicle, (2) the bare hull with different sizes of stern control surfaces, (3) different sizes of stern control surfaces combined with forward fins, (4) different sizes of forward fins, and (5) different locations of the forward fins. This investigation was carried out by employing mathematical analysis, captive model tests and maneuvering simulations, validated with free-swimming experiments. A 1-meter-long bare hull AUV, retrofitted with different 3D-printed static appendages, was used to investigate the variation of turning rate in free-swimming experiments. In this paper, we present the design and construction of an A-sized base vehicle, including: (1) hull form and appendage configuration; (2) base vehicle hardware design, including the mechanical design and the construction of actuators, internal electronics and the embedded computer system; (3) base vehicle software design, including underwater navigation, basic autonomy, low-level control, basic communication, and related essential sensing capabilities. Subsequently, leveraging our previous work [11] in terms of hydrodynamic design, we develop an operational morphing fin payload module and outfit it to our A-sized base vehicle, creating the Morpheus AUV. We present the design and construction of the morphing fin payload, including its theoretical basis and its hardware and software design.

II. BASE VEHICLE HARDWARE DESIGN

The base vehicle developed in this work was a derivation of the expendable mobile anti-submarine warfare training target (EMATT) vehicle hullform, designed and produced by Lockheed Martin Corporation [14]. Figure 1A shows the first iteration of our base vehicle, which utilized an EMATT shell, augmented with our own electronics and software stacks. The shell included the original EMATT nose-cone, empty body shell, main-motor bay that housed the original EMATT main motor, and a free-flood tail-cone with solenoid-controlled control surfaces. Throughout this paper, we refer to this vehicle as the "MIT-EMATT". The second iteration of our base vehicle, i.e. the optimized edition, is shown in Figure 1B, which included hydrodynamically optimized nose and tail cones. The optimized nose-cone included an embedded GPS antenna, LED strobes, an external pressure sensor and a vacuum port; and the optimized tail-cone included four individually controlled, servo-driven control surfaces.
These are further discussed in the following sections. Figures 1C and 1D show the base vehicle appended with the morphing fin payload module. When our base vehicle is appended with the morphing fin payload, it is referred to as the Morpheus AUV. The Morpheus AUV had an overall length of 0.9 m and an A-sized maximum diameter of 0.123 m. Figure 1E shows the base vehicle appended with the morphing fin payload module as well as a piUSBL payload module [26], [27], which is referred to as the Perseus AUV.

A. Nose-cone design

The original EMATT platform had a nose-cone with a flat tip, primarily to ensure a higher usable space density, which resulted in a higher drag coefficient (see Figure 2A). In this work, a new nose-cone was designed as shown in Figures 2B and 2C, which was optimized to reduce the hydrodynamic resistance of the body as well as to preserve the usable space density inside the nose-cone [28]. The optimized nose-cone was manufactured by 3D printing and was connected to the vehicle body using a metal hull-extending ring, as shown in Figure 2C. The nose-cone was designed with two cylindrical slots, which were used to install the vehicle's external pressure sensor (i.e. a Blue Robotics depth sensor [29] housing an MS5837-30BA pressure sensor [30]) to measure the vehicle depth, and a vacuum port. A third rectangular slot was also designed, which allowed a circuit board containing the GPS antenna and LED strobe to be inserted. Upon installation of the circuit board, the slot was potted with clear epoxy to ensure water-tightness. The vehicle's main battery bank was housed inside the nose-cone.

Fig. 2. The 3D-printed optimized nose-cone consisted of two cylindrical slots to install the external pressure sensor and vacuum port, and a third rectangular slot to insert a circuit board containing the GPS antenna and LED strobes, which was potted with clear epoxy to make it waterproof.

B. Tail-cone design

The original EMATT platform had a tail-cone with a solenoid-controlled single rudder and a single elevator. Due to the solenoid control, they were both limited to three positions: hard-to-port, hard-to-starboard and neutral. In the MIT-EMATT base vehicle, we maintained the same actuator mechanism, connected to our own electronics and software. The hydrodynamically optimized tail-cone version, as shown in Figure 3, had four cruciform-shaped, independently controlled, servo-driven control surfaces that can provide heading, pitch and roll control to the vehicle. Due to the servo control, each control surface had the ability to be precisely controlled, with a maximum articulation angle limit of around 15 degrees. The control surfaces and the propeller were protected by a shroud, which was an operational requirement. The shroud was connected to the hull of the vehicle using four fixed fins, which not only acted as supports for the shroud; their fixed 3° deflection angle also developed lift forces, and hence a moment that counteracted the propeller torque. As shown in Figure 3A, the tail-cone assembly consisted of the four control surfaces, their control linkages, and four servos, all mounted in a 3D-printed shell. The servos were held in place in the shell by 3D-printed dogs. Servo shaft rotation was converted to push-rod reciprocating action by cam assemblies. The push-rods then drove control surface articulation through control arms, while the control surfaces rotated around fixed pins.
Since the tail-cone module was free-to-flood and the utilized micro-servos were not intended for submersion, they were oil-filled in-house. The servo cables were transitioned to the watertight main-motor bay through marine epoxy-filled bulkhead penetrators.

Fig. 3. (A) The tail-cone assembly consisted of the four oil-filled micro-servos, cam and push-rod assemblies and control linkages, all mounted in a 3D-printed shell. (B) The two rudders, two elevators and propeller were protected by a shroud. The shroud was connected to the hull of the vehicle using four fixed fins, which had a fixed 3° deflection angle to counteract the propeller torque. (C) The free-flood, 3D-printed tail-cone section was separated from the watertight main-motor bay using a bulkhead, which had a shaft seal to penetrate the propeller drive shaft.

C. Electronics design

The vehicle electronics consisted of the main vehicle computer, batteries, power controls, sensors, servos, and a custom LED strobe panel.

1) BeagleBone Blue: The BeagleBone Blue single board computer [31] was selected as the main vehicle computer. It contains an ARM processor [32] that runs the Linux operating system, as well as two Programmable Real-Time Units (PRUs), which are independent of the ARM processor and are therefore capable of quickly responding to inputs and producing very precisely timed outputs, such as PWM motor control outputs, similar to a micro-controller. Thus, both low-level control as well as higher-level processes were able to be run on a single computer, reducing the complexity and saving physical space that is precious for A-sized AUVs. The BeagleBone Blue included analog-to-digital converters (ADC), general purpose input and output (GPIO), PWM support, an embedded inertial measurement unit (IMU), an I2C interface, embedded WiFi, and universal asynchronous receiver-transmitter (UART) serial buses.

2) Backboard: A custom PCB served as an electronic and physical integration foundation for the BeagleBone and other components, including the motor controller, current sensor, GPS module, 6 V power supply, power conductors, servo connections, and system cabling. Discrete components provided ADC conditioning and logic translation for the LED strobe.

3) Motor controller: The main motor controller was a Cytron MD25HV 7-58 V controller, selected to provide power in excess of 1 kW, should this be required for future designs, all in a compact and durable package. The standard EMATT motor required 150 W.

4) Current sensor: A Pololu 4046 current sensor produced an analog signal that was used to monitor the motor current.

5) Depth and barometric pressure sensors: A Blue Robotics Bar30 depth sensor provided real-time depth readings via the I2C bus to the BeagleBone. The BeagleBone itself has an ambient pressure sensor, useful for pre- and mid-test internal vacuum monitoring.

6) LED strobe: The LED strobe panel was located in the nose-cone. It consisted of LEDs and a GPS antenna mounted on a custom PCB, all potted in clear epoxy.

7) Servo drive: The six micro-servos in the Morpheus vehicle design were driven by conventional PWM signaling from the BeagleBone through the backboard.

8) GPS and cellular modem: GPS and cellular modem capabilities were provided by an Adafruit FONA 3G Cellular Breakout board. The antenna for this was routed to the nose-cone.

9) Power supply: The nominal battery voltage for the system was 44.4 volts, supplied by two 6-cell lithium-polymer (LiPo) batteries in series. These were connected to the rest of the system by a 30 A combination circuit breaker/power switch, followed by a relay-operated DC contactor.
The contactor was only activated when an external plug was inserted into a through-hull connector. This allowed us to control vehicle power after the hull had been closed and pressurized. Raw battery voltage was brought down to 12 V by an automotive-style buck converter and distributed to the BeagleBone and LED panel. A Pololu 4092 buck converter provided regulated 6 V DC power for the Adafruit FONA GPS and 3G cellular module.

III. BASE VEHICLE SOFTWARE DESIGN

In this work, we subdivided the critical components (both software and hardware) of the Morpheus AUV into two categories: the base vehicle components and the specialized components. This modular subdivision allowed us to rationally reconfigure and re-purpose our A-sized base vehicles for other applications. Figure 5 illustrates a high-level overview of the hardware and software components of the Morpheus AUV, together with the information flow among them. The components that belong to the base layer are filled in blue, while those belonging to the specialized layer (i.e. related to morphing fins, in this case) are filled in green. The remainder of this section will discuss the vehicle software components in the base layer. The base software layer, shown in Figure 5 with blue-filled blocks, is responsible for all the essential functionalities that transform the hardware package into an autonomous platform. This includes navigation, autonomy, low-level control, communication and interfacing with the related sensor and actuator hardware. As seen from Figure 5, the sensor-to-actuator data flow begins with the navigation sensor drivers, which communicate with the external navigation sensor hardware and provide raw sensor data to the vehicle's navigation system. The navigation software is responsible for estimating the vehicle's position (i.e. its estimated latitude and longitude), depth, speed and attitude (roll, pitch and heading angles), both on the surface and underwater. The autonomy software is responsible for guiding the vehicle to the intended target location while avoiding no-go zones and obstacles in order to accomplish the mission tasks. The autonomous helm typically digests the navigation solution and continually produces decisions on the desired heading, desired speed and desired depth commands that the vehicle needs to follow in order to achieve the mission objectives. The low-level control software is responsible for executing the desired heading, speed and depth commands instructed by the autonomy system. This is achieved by controlling the actuators of the vehicle (e.g. propeller speed and control surface angles of attack) to maintain the desired commands to the best of its ability. The actuator drivers then communicate these commands to the external actuator hardware; for example, using PWM, GPIO, and controller area network (CAN) bus commands. With respect to our base vehicle software design, the components shown in Figure 5 are subdivided into three modular sub-systems. The primary software hub, which is referred to as MITFrontseat, includes the low-level sensor and actuator drivers, the low-level control system, and the higher-level mission and safety management processes. MITFrontseat outsources the navigation task to a specialized navigation engine named HydroMAN 2.0, which provides the vehicle's position estimate (i.e. the navigation solution) to MITFrontseat. The vehicle autonomy is either handled within MITFrontseat, or can be outsourced to a user's own payload autonomy software system. A minimal sketch of this sensor-to-actuator command flow is given below.
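The sketch below is purely illustrative and is not part of the MITFrontseat code base: it mimics the publish/subscribe flow with a plain dictionary, and the MOOS-style variable names (NAV_HEADING, DESIRED_HEADING, etc.) follow common MOOS-IvP conventions rather than being confirmed MITFrontseat names.

```python
# Toy publish/subscribe "MOOS database": each layer reads what it needs and
# publishes its own outputs, mirroring the base-layer data flow described above.
db = {}

def navigation_step(raw_gps, raw_imu, raw_depth):
    # Sensor drivers publish raw data; the navigation engine fuses it into a solution.
    db.update({"NAV_LAT": raw_gps[0], "NAV_LON": raw_gps[1],
               "NAV_HEADING": raw_imu["heading"], "NAV_DEPTH": raw_depth})

def autonomy_step(goal_heading, goal_speed, goal_depth):
    # The helm digests the navigation solution and posts desired commands.
    db.update({"DESIRED_HEADING": goal_heading,
               "DESIRED_SPEED": goal_speed,
               "DESIRED_DEPTH": goal_depth})

def control_step():
    # Low-level control turns desired commands into actuator commands
    # (here, a simple proportional heading correction with saturation).
    err = (db["DESIRED_HEADING"] - db["NAV_HEADING"] + 180) % 360 - 180
    db["RUDDER_ANGLE"] = max(-15.0, min(15.0, 0.5 * err))       # deg, saturated
    db["THRUST_PERCENT"] = 100.0 * db["DESIRED_SPEED"] / 2.0    # assumed 2 m/s max

navigation_step((42.36, -71.09), {"heading": 90.0}, 1.5)
autonomy_step(goal_heading=45.0, goal_speed=1.0, goal_depth=2.0)
control_step()
print(db["RUDDER_ANGLE"], db["THRUST_PERCENT"])
```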
Each sub-component outlined in this paragraph are further explained in the following sections. A. Middlewares -inter-process and inter-system communication As visualized in Figure 5, the base vehicle software is composed of a number of distributed components; e.g. low-level interfaces, navigation, autonomy and low-level control modules. Hence, using a suitable middleware, or a combination of several middlewares, to glue the software components together is important [33], [34]. There are a few different choices of middleware typically used by the marine robotics community, such as, common object request broker architecture (CORBA) [35], mission oriented operating suite (MOOS) [36], data distribution service (DDS) [37], robotics operating system (ROS) [38], Goby3 [39], lightweight communications and marshalling (LCM) [40], etc. In this work, we primarily use MOOS as the middleware for inter-process communication within the software sub-systems. In addition, a standardized interface that exchanges pre-defined, encoded google protocol buffer (protobuf) [41] messages over the TCP communication architecture is also used for inter-system communications; for example, between MIT-Frontseat and HydroMAN 2.0, and between MITFrontseat and payloads. This architecture was chosen to ensure the independence of each sub-system from others, and the ability to re-use these sub-systems in different frameworks, even if they use non-MOOS middleware. This is further discussed in Sections III-D.1 and III-I. Figure 6 illustrates the complete base-vehicle software diagram of the Morpheus AUV. In this work, we developed a boilerplate frontseat software stack, referred to as the MITFrontseat, that is sufficiently modular and generic to be utilized as a base-vehicle frontseat software for other types of AUV designs as well. As seen from Figure 6, the MITFrontseat stack handles both low-level routines such driving low-level hardware (i.e. driving and communicating with sensors and actuators), while also handling higher-level processes such as navigation, autonomy, control and vehicle safety management. Typically, a micro-controller is used to handle low-level routines, which is interfaced with a single board computer that runs higher level processes. In this work, however, we have used a BeagleBoard single board computer [31] as the main vehicle computer. It contains an ARM processor [32] that runs Linux operating system, as well as two PRUs, which are independent of the ARM processor; therefore, are capable of quickly responding to inputs and produce very precisely timed outputs, such as PWM motor control outputs, similar to a micro-controller. Thus, both lowlevel routines as well as higher-level processes were able to be run on a single computer, reducing the complexity and saving physical space that is precious for A-sized AUVs. B. Embedded computing system Since the main vehicle computer also handles low-level hardware, the MITFrontseat included an array of MOOS drivers for various sensors and actuators that are typically used in micro AUVs. One of the drawbacks of this architecture is that some of these MOOS drivers are only supported for BeagleBoard computers; hence, if one anticipates to run MITFrontseat on a different single board computer board, these low-level drivers may required to be modified accordingly. That said, all the higher level processes (i.e. navigation, autonomy, control and vehicle and missions safety algorithms) are agnostic to the computer board. C. 
Sensor software drivers within MITFrontseat As seen from Figure 6, an array of MOOS drivers were developed to communicate with various hardware sensors and actuators used in the vehicle via various hardware interfaces available onboard the BeagleBoard. Each sensor driver publishes the raw sensor data to the MOOS database of the MITFrontseat MOOS community. 1) Depth sensor driver (iBlueRoboticsDepth): The depth of the vehicle was obtained by measuring the external hydrostatic pressure, and converting it to a corresponding depth value, accounting for the water temperature and density. The Morpheus vehicle was equipped with a Blue Robotics depth sensor [29] that housed an MS5837-30BA pressure sensor [30]. The pressure sensor was connected to the I 2 C bus of the BeagleBone Blue embedded computer using its JST connectors. A MOOS driver application, iBlueRoboticsDepth, was developed to read the external pressure and temperature measurements from the Blue Robotics pressure sensor, and to compute the corresponding vehicle depth, using a preconfigured water density. The calculated depth was then filtered with an outlier rejection scheme -if a depth reading that suggests a depth rate of over 5 m s −1 (pre-configurable) was observed, it is rejected since a 5 m s −1 depth rate is unrealistic. Upon the outlier rejection scheme, a moving average filter with a window size of 20 samples (preconfigurable), which results in a time interval of around 2 seconds, is also applied to smoothen the data. Filtered depth value is finally published to the MOOS database. 2) IMU sensor driver (iBBBlue and iXsensMTi): The attitude (i.e. the roll, pitch and heading angles) of the vehicle was measured using an InvenSense MPU-9250 [42] micro-electromechanical system (MEMS) 9-axis inertial measurement unit (IMU), embedded in the BeagleBone Blue computer board, which is routed to the I 2 C bus. An existing, thirdparty MOOS driver application, iBBBlue [43] was used to read data from the IMU. The iBBBlue application utilizes several functions given in the Robot Control Library [44] to conduct tasks such as reading IMU data from the I 2 C bus, IMU calibration correction and fusion of acceleration, angular velocity and magnetic intensity data to compute the roll, pitch and heading of the sensor. These raw data are then published to the MOOS database. For applications that require more accurate attitude and heading information as compared to the InvenSense MPU-9250 [42] sensor, an external IMU sensor can be utilized, together with a MOOS driver for the sensor. An example is the Xsens MTi-3 [45] IMU unit, which can be connected to the BeagleBone using the UART interface. A MOOS driver, iXsensMTi, was created to read fused attitude data from the sensor, and publish them to the MOOS database. 3) GPS and cellular modem driver (iAdafruitFona): An Adafruit FONA cellular breakout board [46], which contains a SIM5320 cellular module with an integrated GPS receiver [47] was used as the GPS and cellular modem of the vehicle. This module is connected to the vehicle's BeagleBone Blue computer using the UART interface. A MOOS driver application, iAdafruitFona, was developed to publish raw GPS data to the MITFrontseat MOOS database. This application also acts as a service that sends and receives short message service (SMS) text messages. Any incoming SMS messages from allowed phone numbers (pre-configured) are published to the MOOS database with the message content and sender's phone number. 
Any application within the MITFrontseat community can publish a specific MOOS variable to the database, containing the message content and phone number to forward the message to an outside phone number. For example, this service can be used to send an SMS to the vehicle operator's phone number with the GPS coordinates, upon mission completion and surfacing. While the SIM5320 module also has the internet tethering capability, which could enable remote login to the Beagle-Bone Blue computer via the cellular network, this was not implemented in this work. 4) Battery and power management system driver (iBBBlue): A custom battery and power management system was developed and integrated to monitor the main motor current draw and main battery voltage. The current and voltage values are provided to the BeagleBone Blue via its analog-todigital converters (ADC). The iBBBlue reads these values are publishes them to the MITFrontseat MOOS community, which is used by pFrontseatManager for vehicle safety management. 5) Monitoring the internal pressure: Typically, the air inside the vehicle pressure hull is partially pumped out through a vacuum port (see Section II-A) upon vehicle assembly. Once the air is pumped out, the vacuum port is closed, leaving a low pressure zone inside the hull. If the hull is watertight, the internal pressure will be holding. A raise in the internal pressure indicates a leak in the pressure hull. In the base vehicle, the air pressure inside the pressure hull is measured using the barometer embedded in the BeagleBone Blue computer. The barometer reading can be monitored upon vehicle assembly using functions and scripts given in the Robot Control Library [44]. D. Navigation software -HydroMAN 2.0 In our software architecture, the MITFrontseat outsources the navigation task to an independent navigation engine, similar to most commercial AUVs (i.e. most AUVs rely on commercially available INS units for navigation -an INS is a combination of an IMU and a computer running a 'black-box' navigation fusion algorithm that fuses the IMU measurements with external sensors such as GPS, DVL, depth, USBL, etc. [48]- [50]). In this work, we utilize HydroMAN 2.0 as the navigation engine. HydroMAN (stands for hydrodynamic model aided navigation) is a self-learning, independent underwater navigation engine [51]- [56]. The HydroMAN 2.0 synthesizes raw measurements from sensors such as IMU, DVL, CVL, LBL/USBL, terrain-aided navigation and GPS into its selfcalibrating vehicle flight dynamic model to compute the navigation solution, with the use of an array of sensor preprocessors and a layered extended Kalman filter based fusion algorithm. When accurate sensor measurements are available; for example, DVL bottom-lock and/or acoustic position updates, the HydroMAN self-calibrates the vehicle model to the local operating environment, largely compensating for the navigation drift provided by underwater currents and the flight dynamic model's own error estimate. The calibrated vehicle model is then utilized for navigation aiding when accurate sensors are unavailable, or turned off in order to save power. Expensive tactical and navigation grade INS units and DVLs are infeasible for low-cost micro-AUVs that are limited by the cost, such as Morpheus. Therefore, they typically rely on inexpensive MEMS IMUs and an RPM-to-speed table for dead-reckoning based navigation, resulting in poor navigation performance. 
For such vehicles, HydroMAN uses a pre-identified vehicle flight dynamic model to estimate the vehicle velocity in the surge, sway and heave directions, using a combination of input variables such as the propeller speed, control surface angles and the rates of change of the roll, pitch and heading angles. As shown in [56] and [55], the vehicle dynamic model is capable of improving the navigation accuracy by orders of magnitude as compared to traditional RPM-to-speed-curve-based dead-reckoning. In the meantime, HydroMAN also provides the possibility of extending the base vehicle with additional navigation sensors, without requiring changes to the navigation software stack. HydroMAN is composed of a number of MOOS applications, as described in the companion paper [51]. The following subsections briefly summarize the key functionalities of these sub-components.

1) The interface to the HydroMAN MOOS community (iHydroMAN Gateway): HydroMAN 2.0 is an independent navigation engine that interfaces with client systems (i.e. MITFrontseat in this case) using a TCP network connection. Therefore, the front-end of HydroMAN 2.0 is similar to the internal fusion engines of tactical and navigation grade INSs: the client system can send external raw sensor data to HydroMAN, and HydroMAN will provide the fused, model-aided navigation solution. Hence, HydroMAN is independent of the client's software architecture. The iHydroMAN Gateway application runs within the HydroMAN MOOS community, serving as the gateway to HydroMAN. All incoming messages to HydroMAN (e.g. raw sensor data) and outgoing messages from HydroMAN (e.g. the final navigation solution) are handed over by iHydroMAN Gateway. It runs a TCP server, which allows the client system's HydroMAN driver (in this case, the iHydroMAN MITFrontseat MOOS application that runs within the MITFrontseat MOOS community) to connect as a TCP client and exchange messages using a google protocol buffer based standardized message definition. This standardized message definition and server-client architecture ensures the independence of HydroMAN; i.e., the client system's HydroMAN driver does not necessarily need to be a MOOS application.

2) Vehicle flight dynamic model (pHM BasicModel): As a virtual navigation aiding sensor, an embedded AUV flight dynamic model, based on the principle of conservation of energy [55], is used in HydroMAN to estimate the linear velocities of the AUV (i.e. u, v and w). While the actual structure of the vehicle dynamic model varies from vehicle to vehicle, Equations 1-3 demonstrate how the propeller speed (RPM(t)), the measured vehicle angular velocities (p(t), q(t) and r(t)), and the linear velocities estimated at the previous timestamp (u(t-1), v(t-1) and w(t-1)) are used to derive the vehicle velocities at the current timestamp (i.e. u(t), v(t) and w(t)) for an example vehicle, where α_n, β_n and γ_n are AUV-dependent flight dynamic model parameters that were estimated using a real-time recursive least squares system identification algorithm. The derivation of the dynamic model, model optimization and parameter estimation procedures are beyond the scope of this paper; further details are given in [55].
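Equations 1-3 themselves are not reproduced in this text, and the energy-based derivation is given in [55]. Purely as an illustration of what such a regression-type velocity model can look like, a sketch with assumed (not confirmed) terms is:

```latex
% Illustrative sketch only -- not the actual Equations 1-3 of the paper.
% alpha_n, beta_n, gamma_n are the AUV-dependent parameters identified online.
\begin{aligned}
u_{(t)} &\approx \alpha_1\,u_{(t-1)} + \alpha_2\,RPM_{(t)} + \alpha_3\,RPM_{(t)}^{2} + \alpha_4\,q_{(t)} + \alpha_5\,r_{(t)}\\
v_{(t)} &\approx \beta_1\,v_{(t-1)} + \beta_2\,r_{(t)} + \beta_3\,u_{(t-1)}\,r_{(t)}\\
w_{(t)} &\approx \gamma_1\,w_{(t-1)} + \gamma_2\,q_{(t)} + \gamma_3\,u_{(t-1)}\,q_{(t)}
\end{aligned}
```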
The velocity and position estimated by the flight dynamic model are relative to the water column (i.e. ν_model(auv|water) and x_model(auv|water)), since the model excludes water currents. Therefore, the error sources of the model-based velocity and position estimates include the drift due to water currents and the uncertainty of the model [55], [56], which are counteracted by the self-adaptation of the flight dynamic model.

3) Self-calibration of the flight dynamic model to the operating environment (pHM ModelCalibrator): The uncertainty of the dynamic model and the water current velocity (ν_(water|earth)) are estimated on-the-fly within pHM ModelCalibrator when accurate sensor measurements such as DVL bottom-lock and/or acoustic navigation updates are available. These estimates are used to convert the model-based velocity from the water reference frame to the earth reference frame, as detailed in Equation 4. Two self-calibration strategies are available within pHM ModelCalibrator: (a) using acoustic position updates, and (b) using the bias error of the model-based velocity (i.e. the bias error estimated by the error-state extended Kalman filter (EKF) of the fusion algorithm). Further details on these algorithms are given in [51].

4) DVL pre-processor (pHM DVLProcessor): This MOOS application processes raw sensor measurements from velocity aiding sensors such as the DVL and CVL. By considering the configured orientation of the sensor, the velocity measurements are transformed to the HydroMAN standard axis convention. An orientation mismatch detection mechanism is implemented to warn the operator and/or execute vehicle safety protocols if the configured sensor orientation is detected to be inaccurate. In under-ice operations, if the sensor is in an upward-facing configuration, measuring the velocity of the AUV relative to the surface ice, pHM DVLProcessor is capable of compensating the velocity for potential drifts in the surface ice (i.e. surface ice in the Arctic is translated and rotated by wind and current forcing [57], and this ice drift velocity can be up to around 1 m/s, which can cause considerable navigation drift [58]). pHM DVLProcessor can be aided with ice drift velocity information obtained from actual measurements (e.g. measured by a GPS unit located on the surface and transmitted down to the vehicle via an acoustic link) or from modeling approaches [59].

5) LBL position pre-processor: This pre-processor allows HydroMAN to effectively utilize navigation updates that are time-lagged by large time periods (i.e. more than 5 minutes), making use of the vehicle position from the self-adapting flight dynamic model at timestamp T; the current timestamp is given by t and the LBL timestamp by t_N.

6) Sensor fusion engine (pHM SensorFusion): The sensor fusion application consists of two EKFs: (a) an error-state EKF that estimates the bias errors of the sensors, and (b) a main-state EKF that fuses the bias-error-corrected measurements to obtain the final navigation solution. The error-state EKF computes a running estimate of the bias errors of the velocity sensors (e.g. DVL, CVL, etc.) and the flight dynamic model in a layered pattern, following the hierarchy of sensor accuracy. That is, the outlier-removed acoustic position updates are first used to compute the bias error estimate of the DVL sensor, which is used to correct the DVL measurements. The bias-error-corrected DVL measurements are then used to compute the bias errors of the dynamic model and other velocity aiding sensors, in a hierarchical order.
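A toy sketch of this layered bias-correction idea is shown below. It is illustrative only: the actual pHM SensorFusion implementation uses error-state and main-state EKFs as described in [51], whereas the sketch replaces the error-state EKF with a simple first-order bias tracker, and all names are assumptions.

```python
import numpy as np

def track_bias(bias, observed_error, gain=0.05):
    """First-order bias tracker standing in for the error-state EKF."""
    return bias + gain * (observed_error - bias)

dvl_bias = np.zeros(3)      # running DVL velocity bias estimate (m/s)
model_bias = np.zeros(3)    # running flight-dynamic-model velocity bias estimate (m/s)

def layered_correction(acoustic_velocity, dvl_velocity, model_velocity):
    """One hierarchical correction step, ordered by sensor accuracy.

    All velocities are 3-element numpy arrays; acoustic_velocity is the velocity
    implied by differenced acoustic position fixes (None when no fix is available).
    Returns the bias-corrected DVL and model velocities.
    """
    global dvl_bias, model_bias
    if acoustic_velocity is not None:
        # Layer 1: the most accurate aid calibrates the DVL.
        dvl_bias = track_bias(dvl_bias, dvl_velocity - acoustic_velocity)
    dvl_corrected = dvl_velocity - dvl_bias
    # Layer 2: the corrected DVL calibrates the flight dynamic model, so the
    # model remains a useful aid if the DVL later drops out or is switched off.
    model_bias = track_bias(model_bias, model_velocity - dvl_corrected)
    return dvl_corrected, model_velocity - model_bias
```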
Since the variation of bias error is generally a slowly changing function, this method allows HydroMAN to maintain a good navigation accuracy by using bias corrected dynamic model, even in events where the DVL drops out or turned off for a long period of time. The main-state EKF included six states -the three dimensional velocity and position vectors. These states were estimated by fusing the bias corrected DVL, flight dynamic model and other velocity measurements together with the depth and position based navigation updates. More information regarding this layers sensor fusion approach is given in [51]. 7) Navigation manager (pHM Manager): The HydroMAN system consists of a number of navigational safety management systems; e.g. EKF re-initialization when large navigation drifts are detected, executing safety protocols in situations where the filter is diverging due to faulty sensors, etc. The pHM Manager application manages these features and publishes the final navigation solution. E. Autonomy software Unlike unmanned sea-surface, ground, and aerial vehicles, AUVs cannot be remotely controlled due to the low bandwidth in acoustic communications; they must make decisions autonomously. Remote control, or teleoperation, in land, air, or surface vehicles may be viewed as a means to allow conservative, risk-averse operation with respect to the degree of autonomy afforded to the vehicle. In underwater vehicles, similar conservative tendencies are realized by scripting the vehicle missions to be as predictable as possible. Missions typical of early-model UUVs were composed of a preplanned set of waypoints accompanied by depth and perhaps speed parameters. The onboard sensors merely collected data that were analyzed after the vehicle was recovered from the water. However, improved sensor processing methods, embedded computing power, underwater navigation performance and adaptive and collaborative autonomy technology has enabled advanced autonomy for AUVs [60]. The base vehicle software stack that we developed carries several autonomy capabilities of several fidelity levels: (1) primitive missions with scripted decision outputs; (2) autonomous decision making with MOOS-IvP behavioural helm that runs on the MITFrontseat MOOS community; and (3) payload autonomy where the MITFrontseat ingests decision commands from thirdparty payload-based autonomy systems. 1) Higher level autonomy management (pFrontseatManager): The autonomy helm of the vehicle, regardless of the fidelity level, sits beneath and bound by a safety envelope set by this mission management application. In addition to enforcing safety rules, this application is also responsible for executing and switching autonomy behaviors with the use of a state machine. The frontseat mission manager enforces vehicle-dependant and cruise-dependant safety rules, set by the operator during pre-launch mission configuration. The vehicle-dependent rules ensure the integrity of structural and electrical components and water-tightness of the vehicle by administering variables such as the maximum vehicle diving depth, minimum operating battery voltage, maximum operating motor current, maximum internal pressure. When a specific rule is violated, the vehicle mode will be autonomously switched to an orchestrated safe mode, depending on the violated rule. 
The cruise, or mission dependent rules include: (a) mission start time -the main motor start time could be delayed by a pre-configured time period since the mission launch; (b) mission end time -the mission manager leases the vehicle's control authority to the autonomy helm only for a pre-configured temporary time period; beyond which, the mission ending mode is executed; (c) maximum cruise depth -if the maximum safe operating depth for the cruise region is below the maximum diving depth of the vehicle. When MOOS-IvP helm is run within the MITFrontseat MOOS community, the frontseat mission manager functions as a mission commander that carries out on-board command and control of mission behaviors. That is, with the use of a state machine, this application controls which IvP helm behaviors are spawned at a given time [60] and how they are switched between. The switching of IvP behaviors is either conducted completely autonomously, or manually triggered via communication methods detailed in Section III-F. 2) Passive helm (pHelmPassive): The passive helm allows scripting of a timetable of predefined helm decisions (i.e. desired speed, desired heading, desired depth and vehicle mode command); each against a corresponding execution time (i.e. mission legs). During the mission, pHelmPassive reads the pre-configured helm decisions from the configuration block, and posts to the MOOS database. Therefore, this primitive helm can be run without a vehicle navigation solution, making it a useful tool for preliminary testing of the vehicle. Another key use case of the passive helm is for tuning of the vehicle's low-level control system. Most AUVs still use proportional-integral-derivative control systems for their lowlevel control; and fine-tuning them, which is typically done trial-and-error, is rather dull process. During this process, the autonomy system is required to first command a constant heading, speed and depth; followed by a step-change command in the either heading, speed or depth, depending on which degree-of-freedom is being tuned. The passive helm is an ideal tool for such simple, pre-dictated missions. In addition, passive helm also allows the users to configure PID gain changes during legs, which is ingested by the control engine as runtime PID gain updates. This functionality allows the operators to test multiple PID gain settings during a single mission, expediting the time consuming tuning process by orders of magnitude. Listing 1 shows a sample mission configuration block of pHelmPassive, where the P-gain of the vehicle's heading controller is updated during the third leg. 3) MOOS-IvP autonomy helm (pHelmIvP): The MOOS-IvP helm runs as a single MOOS application and uses a behavior-based architecture for implementing autonomy. Behaviors are distinct software modules that can be described as self-contained mini-expert systems dedicated to a particular aspect of overall vehicle autonomy. The helm implementation and each behavior implementation expose an interface for configuration by the user for a particular set of missions. This configuration often contains particulars such as a certain set of waypoints, search area, and vehicle speed. It also contains a specification of mission modes that determine which behaviors are active under which situations and how states are transitioned. When multiple behaviors are active and competing for influence of the vehicle, the IvP solver is used to reconcile the behaviors. More information regarding MOOS-IvP can be found from [60] and [61]. 
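The toy example below is meant only to convey the flavor of behavior reconciliation; it is not the IvP solver, which optimizes over piecewise-defined objective functions across multiple decision variables (see [60], [61]). Here each behavior simply scores candidate headings and the scores are combined with priority weights; all names and numbers are illustrative.

```python
def reconcile(behaviors, candidate_headings=range(0, 360, 5)):
    """Pick the heading maximizing the priority-weighted sum of behavior utilities.

    `behaviors` is a list of (priority_weight, utility_fn) pairs; utility_fn maps a
    candidate heading in degrees to a score in [0, 1]. This is only a stand-in for
    the IvP solver's multi-objective optimization.
    """
    def score(h):
        return sum(w * u(h) for w, u in behaviors)
    return max(candidate_headings, key=score)

def angle_diff(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

# A waypoint behavior preferring 045 deg, and a collision-avoidance behavior
# vetoing headings within 20 deg of 060 deg (illustrative priorities).
waypoint = (100.0, lambda h: 1.0 - angle_diff(h, 45.0) / 180.0)
avoid = (200.0, lambda h: 0.0 if angle_diff(h, 60.0) < 20.0 else 1.0)

print(reconcile([waypoint, avoid]))   # a heading near the goal but clear of the veto zone
```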
The Morpheus base vehicle software stack allows the users to run MOOS-IvP autonomy from within the MIT-Frontseat MOOS community. In this architecture, required behaviors are loaded to the mission configuration block, and the pFrontseatManager application acts as the mission commander in-charge of spawning and switching between behaviors. 4) Payload autonomy: The main idea in the payload autonomy paradigm, or the backseat driver is the separation between vehicle control and vehicle autonomy. The vehicle control system runs on a platform's main vehicle computer, and the autonomy system runs on a separate payload computer. This separation is also referred to as the mission controller -vehicle controller interface. A primary benefit is the decoupling of the platform autonomy system from the actual vehicle hardware [60], [62]- [67]. The payload autonomy capability is built-in to the Morpheus base vehicle software stack through the development of a standard payload interface. This interface allows to forward any information posted in the MITFrontseat MOOS community to a payload computer via a TCP connection, using a standard protobuf message scheme; and the payload autonomy system is able to send autonomy commands (e.g. desired heading, desired speed and desired depth commands) back to the MITFrontseat via the same interface. More information regarding the interface is given in Section III-I. Hardware-wise, the payload autonomy system could run either on the same main vehicle computer or on a separate autonomy computer. In the case of Morpheus, the payload autonomy system was also run on the same main vehicle computer (i.e on the BeagleBone Blue) in order to conserve space inside the vehicle. In the payload autonomy mode, as discussed in Section III-E.1, the pFrontseatManager leases the command of the vehicle to the payload autonomy system for a pre-configured time period. However, the payload autonomy system is still bound by the safety envelope set by the frontseat manager. If one anticipates to take unconditional control of the vehicle, this can be done by turning off safety parameters within pFrontseatManager. F. Communication software Communication of low-cost, micro AUVs such as Morpheus can be generally classified into three categories: (1) short-range surface communication; (2) long-range surface communication; and (3) underwater acoustic communication. Short-range surface communication is generally via a wifi network connection with the topside network. Morpheus vehicle achieves this using the BeagleBone Blue computer's embedded wifi modem. The computer's network settings are configured such that it connects to a specific wifi network whenever the vehicle is in range. This network is typically used to access the vehicle computer in order to conduct operations such as system testing, launching missions, debugging, data transfer, etc. In Morpheus vehicle, long-range surface communication is achieved via the cellular network; with the use of SMS messages with a dedicated topside cellular phone. Very basic command and control, and vehicle status monitoring can be achieved with this service (e.g. this service is typically configured to send an SMS to the topside phone with the GPS coordinates, upon mission completion and surfacing). 
We expect to expand the long-range surface communication capability by establishing a remote connection between the topside computer and vehicle with the use of cellular internet tethering for more advanced surface command and control, and telemetry monitoring; though the use of Goby-Acomms library [68], [69] for marshalling and dynamic priority queuing; and Goby liaison as the command and control GUI [70]. At the time of writing, the Morpheus vehicle is not equipped with an acoustic modem that enables transmission of datagrams while underwater. However, the piUSBL system in the Perseus payload allows very basic underwater command and control by switching the transmitter between various broadcast linear frequency-modulated chirps; each corresponding to different pre-defined vehicle autonomy behaviors [27]. In the future, we plan to expand the electronics stack of the vehicle with a small acoustic modem (e.g. [71]- [73]) for more advanced underwater command and control, and telemetry; through the use of Goby-Acomms library, which will also be used for long-range surface communication. G. Low-level control software The low-level control software is responsible for executing the decisions commanded by the autonomy system; such as, desired heading, desired speed, desired depth and desired glide angle (i.e. in the case for gliding vehicles) commands. The vehicle control system executes such commands by controlling the vehicle-specific actuators such as the propulsion thrusters, control surfaces, buoyancy engines and weight shifting mechanisms, etc. Hence, the low-level control system of an AUV is generally platform-dependant. In MITFrontseat, we have generalized the control system by sub-dividing it to three components as shown in Figure 7: (1) a platform-independent control engine that produces control correctives in roll, heading, speed, and pitch; (2) a platform-dependant actuator mapper application that maps the control engine outputs to the actuator configuration of a specific vehicle; and (3) actuator drivers that produces lowlevel signals such as PWM, GPIO and CAN bus messages that drive the actuators. 1) Control engine (pControlEngine): The low-level control engine of MITFrontseat consists of a set of single loop (i.e for heading, speed and roll sub-systems) and multi-loop (i.e. for the depth sub-system) PID control blocks that produce control corrective outputs. Control corrective outputs are essentially in the same order as actuator commands; e.g. control surface angle commands. In order to ensure the platform-independence of the control engine, PID outputs are published as control correctives; and the mapping of control correctives to actuators of a specific vehicle is conducted in a separate MOOS application. For some vehicle hardware designs, there could be an offset between the IMU mount axis and vehicle axis. In MITFrontseat, this offset correction is carried out in the control engine. The axis offset correction for heading is done by pointing the vehicle's nose towards north, and poking a given MOOS variable. Similarly, roll and pitch offsets are corrected by keeping the vehicle at zero roll and pitch, and poking two separate MOOS variables. These offsets are written to a configuration text file, which is read on start-up to correct the IMU offset. The control engine contains three independent single-loop PID control blocks for heading, speed and roll sub-systems. They ingest the difference between the desired and actual values (i.e. 
the heading error), and compute a PID corrective that attempts to minimize the error, using the configured PID gains. For under-actuated vehicles such as flying-type AUVs, the depth DOF cannot be directly controlled; it is instead controlled by varying the pitch angle of the vehicle. Thus, a two-loop PID controller is implemented for the depth sub-system. As seen from Figure 7, the depth PID control block produces a depth control corrective, which becomes the desired pitch input for the pitch PID control block. The latter then computes the pitch control corrective, which is sent to the actuators that control the pitch DOF of the vehicle (e.g. elevators). However, for gliding vehicles, the optimized desired glide angle (i.e. the desired pitch value) is provided by the autonomy system. For such vehicles, the control engine by-passes the depth PID block, and the pitch PID block uses the desired glide angle as the desired pitch value. MITFrontseat is compatible with vehicles that have multiple modes; for example, hybrid gliders that have both propelled and gliding modes, and amphibious vehicles that are capable of operating in-water as well as ashore. For such multi-mode vehicles, multiple control settings are typically required; for instance, in the case of hybrid vehicles, one PID setting for the propelled mode (i.e. regulating the propeller and control surface angles to control the speed, pitch and heading), and another PID setting for the gliding mode (i.e. regulating the buoyancy engine and battery-pack position to control the same variables). As shown in Figure 7, the control engine handles this by dynamically creating an 'N' number of PID controller sets during start-up, each set corresponding to a vehicle mode. When the autonomy system switches to a specific vehicle mode, the control engine also switches itself to the corresponding PID controller and gain setting.

Fig. 7. The low-level control system of MITFrontseat has been semi-generalized by sub-dividing it into three components: (1) a platform-independent control engine (pControlEngine) that produces control correctives in roll, heading, speed and pitch; (2) a platform-dependent actuator mapper application (e.g. pActuatorMap morpheus) that maps the control engine outputs to the actuator configuration of a specific vehicle; and (3) actuator drivers that produce low-level signals such as PWM, GPIO and CAN bus messages that drive the actuators.

2) Mapping control correctives to vehicle actuators (pActuatorMap morpheus): In this framework, as shown in Figure 7, the platform-independent control correctives produced by the control engine are converted to actuator commands of a given vehicle by a platform-dependent MOOS application; for instance, in the Morpheus AUV, this is carried out by pActuatorMap morpheus. As outlined in Section II-B, the Morpheus-class vehicles are equipped with four independently actuated stern control surfaces (i.e. upper rudder, lower rudder, port elevator and starboard elevator), two vertical forward morphing fins and a propeller at the stern end. The heading and pitch correctives are first mapped out to the corresponding stern rudder and elevator angles.
However, in situations where the vehicle is rolled, as shown in Figure 8, the zero position of all four stern control surfaces will be offset by a small angle (the maximum deflection limit is typically configured as 5 degrees), attempting to create a righting moment to zero out the roll angle. ro ll a n g le As shown in Figure 9, when the vehicle is at non-zero roll angles, a rudder deflection will not only create a heading change, but will also create an unintended pitching moment, and vice versa. Thus, the vehicle will have unintended depth fluctuations during turns, and heading fluctuations during depth changes. The roll compensation system within pActuatorMap morpheus attempts to mitigate this by accordingly deflecting the opposing control surfaces to cancel out the unintended moment as given in Equations 6 -9; for instance, deflecting the elevators to cancel out the unintended pitching moment created by the rudders. ro ll an gl e Fig. 9. Roll compensation -Left: when the AUV is at zero roll, heading correction is simply mapped out to a rudder deflection. Right: when the AUV is rolled, however, a simple rudder deflection will not only create a heading change, but also will create an unintended pitch change. Roll compensating system will attempt to mitigate this by accordingly deflecting the elevators to cancel out the pitching moment created by the rudders, and vice versa. In pActuatorMap morpheus, the forward located morphing fins are controlled according to the magnitude of the heading error. The fins were deployed if the heading error (i.e. the difference between the desired and current vehicle heading) is larger than 30 • . Once deployed, the fins were actively controlled with an equal but opposite angle to the rudder deflection. When the heading error reduced to less than 5 • , the morphing fins were retracted. All final control surface angles are finally published to the MOOSDB as angle as well as normalized commands, which are to be read by the actuator drivers. The speed correctives are also mapped out and published as percentage thrust and normalized thrust commands. H. Actuator software drivers A set of MOOS drivers were developed to communicate with various hardware actuators of the vehicle via hardware interfaces available onboard the BeagleBoard computer. Each driver reads corresponding commands from the MOOSDB, and drives the hardware by providing relevant GPIO, PWM and I 2 C commands. 1) Main motor driver (iProp): The main motor MOOS driver handles main motor and its related circuitry. Main motor propeller is a hazardous sub-system; hence is protected by an electrical gate that needs to be triggered in order to switch the propeller on. During the mission envelope (which is dictated by pFrontseatManager), The main motor driver triggers the gate by sending a GPIO signal. Subsequently, the relevant PMW signal is sent to the motor, according to the percentage thrust commanded by pActuatorMap morpheus. 2) Servo motor driver (iServo): The servo MOOS driver is responsible for driving the servo motors to the positions commanded by pActuatorMap morpheus. Servo driver achieves this by providing PWM signals (i.e. via the BeagleBoard's PWM channels) that incrementally changes the servo position until it arrives to the commanded position. 3) LED strobe driver (iLED): The LED MOOS driver handles the circuitry related to the vehicle's mast LED strobe. 
H. Actuator software drivers

A set of MOOS drivers was developed to communicate with the various hardware actuators of the vehicle via the hardware interfaces available onboard the BeagleBoard computer. Each driver reads the corresponding commands from the MOOSDB and drives the hardware by providing the relevant GPIO, PWM and I2C commands.

1) Main motor driver (iProp): The main motor MOOS driver handles the main motor and its related circuitry. The main motor propeller is a hazardous sub-system; hence, it is protected by an electrical gate that needs to be triggered in order to switch the propeller on. During the mission envelope (which is dictated by pFrontseatManager), the main motor driver triggers the gate by sending a GPIO signal. Subsequently, the relevant PWM signal is sent to the motor, according to the percentage thrust commanded by pActuatorMap morpheus.

2) Servo motor driver (iServo): The servo MOOS driver is responsible for driving the servo motors to the positions commanded by pActuatorMap morpheus. The servo driver achieves this by providing PWM signals (i.e. via the BeagleBoard's PWM channels) that incrementally change the servo position until it arrives at the commanded position.

3) LED strobe driver (iLED): The LED MOOS driver handles the circuitry related to the vehicle's mast LED strobe. In this framework, the pFrontseatManager posts various LED pattern commands, each corresponding to the current mode of the vehicle. For instance, four different LED blinking patterns were configured to indicate that: (1) a mission has been launched and is waiting for the actuator-engage time, (2) the mission clock is within 10 seconds of the actuator-engage time, (3) a mission is currently being executed and the actuators are engaged, and (4) the mission has ended and the actuators are secured. The LED driver reads these LED commands and sends corresponding GPIO signals to the LED driving circuitry.

I. Software extension for additional payloads and payload autonomy systems (iMITFrontseat Gateway)

Base AUVs are typically extended with additional payloads according to their application [74], [75]. To ensure this extendability, the hardware as well as the software of the base vehicle should include boilerplate hooks to interface with additional payload sensors, actuators and processes [74]. The iMITFrontseat Gateway is one such boilerplate hook that allows payloads (including payload autonomy systems) to connect to MITFrontseat and exchange information. Similar to the iHydroMAN Gateway discussed in Section III-D.1, this application creates a TCP server, which allows payload systems to connect as TCP clients and exchange MOOS messages wrapped in a google protocol buffer based standardized message definition. This interface allows payloads to read and publish any MOOS variable to the MITFrontseat MOOS community. The standardized message definition and TCP server-client architecture ensure the independence of payload systems; i.e., a payload system does not necessarily need to be a MOOS-based system. Multiple payload systems can be connected to MITFrontseat at a given time by spawning multiple instances of the iMITFrontseat Gateway application.

IV. MORPHING FIN PAYLOAD DESIGN

The stability and maneuverability indices of a torpedo-shaped vehicle can be dynamically altered using different modes of retractable fin implementations [11]. In this work, we implemented forward-located morphing fins, where the stability of an originally stable vehicle can be decreased by deploying the fins, increasing the maneuverability, similar to a tuna's dorsal fins. As shown in Figure 10A, the morphing fins were usually retracted during straight runs in order to increase the stability. When the vehicle was required to make a quick heading change, the morphing fins were deployed, as shown in Figure 10B, to destabilize the body, increasing the maneuverability. In addition, the morphing fins were able to be articulated, as shown in Figure 10C, providing a turning moment to further increase the turning rate. The theoretical derivations of the stability-maneuverability criteria and the mathematical representation of forward-located morphing fins were presented in detail in our prior work [11]; therefore, a concise summary is given here in Appendix A. The morphing payload module was developed as an independent section that can be outfitted to any place within the mid-body of the base vehicle. Our previous work [11] investigated the variation of the stability index with the location of the morphing fins, concluding that a larger stability index variation can be achieved when the fins are located closer to the nose-tip of the vehicle. Thus, in both the Morpheus and Perseus vehicles, we placed the morphing fin payload module immediately after the nose-cone.
Morphing fin hardware design The morphing fin hardware design, as shown in Figures 10D -10F, consists of two morphing fins that were driven in and out of the hull through fin cutouts by push rods. The push rods and their mounting arms were in turn driven by a 32 pitch gearwheel and an oil-filled micro-servo. The fins and rods moved in unison, providing symmetric deployment and retraction. The range of the fin movement was such that when fully retracted, just a few millimeters of fin protrudes from The morphing fin payload module was placed immediately after the nose-cone of the AUV in order to obtain the maximum stabilityindex variation. The morphing fins can be (A) retracted, (B) deployed and (C) articulated up to a 20 degree angle of attack. (D) The morphing mechanism was housed inside a free-flood chamber within the module. (E) Two watertight channels were located on either sides of the chamber to run electrical cables across the morphing fin module. (F) The two morphing fins were driven in and out of the hull through fin cutouts by a servo-driven push rods mechanism, which was placed on a carriage that can be rotated using another servo, providing fin articulation. the hull, while when fully deployed the fin bottom clears the hull, allowing for articulation. The deploying mechanism was mounted on a carriage with ability to swing approximately 20 degrees to either side, thereby resulting in fin articulation. A 3D-printed rack on the carriage was driven by a 32 pitch gearwheel and an oilfilled articulation micro-servo. Similar to deployment action, the articulation of the fins was also symmetric. The morphing mechanism was nested into a free-flood hull chamber. Watertight channels were designed on either sides of the chamber, providing watertight wiring channels to run electrical cables across the morphing fin module. B. Morphing fin software design As discussed in Section III-G.2 and illustrated in Figure 7, the platform-independent control correctives produced by the pControlEngine were converted to actuator commands of the Morpheus vehicle in pActuatorMap morpheus. The adaptive morphing argument was also embedded within this application. The morphing fins were controlled according to the magnitude of the heading error. The fins were deployed if the heading error (i.e. the difference between the desired and current vehicle heading) is larger than 30 • . Once deployed, the fins were actively controlled with an equal but opposite angle to the rudder deflection. When the heading error reduced to less than 5 • , the morphing fins were retracted. V. RESULTS AND DISCUSSION The original MIT-EMATT base vehicle, the optimized base vehicle and Morpheus AUV were all extensively field tested in-water in the Charles river, Massachusetts, USA by conducting hundreds of hours of operations over a period of two years (see Figure 11). In this section, we present in-water test results from a set of randomly picked missions. The vehicle tracks shown in this section were produced using the HydroMAN navigation solution. The base vehicles and Morpheus AUV were limited to a depth sensor and an IMU. Therefore, the HydroMAN navigation engine was heavily relying on its embedded vehicle flight dynamic model. The HydroMAN navigation engine requires an initial vehicle motion response dataset to identify the parameters of the vehicle flight dynamic model [51], [56]. 
In this work, we used the Perseus vehicle configuration (shown in Figure 1E), which was outfitted with the piUSBL payload, to obtain the parameter estimation dataset. The piUSBL system was configured as a long baseline (LBL) system, and followed the same methodology as [56] to estimate the vehicle flight dynamic model. Figure 12 compares the HydroMAN navigation solution against the LBL-based navigation solution, for validation and verification purposes. A. In-water tests of the MIT-EMATT base vehicle Figure 13 shows a typical control response plot of the original MIT-EMATT base vehicle from an example zigzag mission. The top subplot shows the desired and actual heading responses of the vehicle together with the rudder commands. As discussed in Section II-B, the original MIT-EMATT base vehicle tail-cone had a solenoid-driven rudder and elevator that only allowed bang-bang control. Therefore, as seen from Figure 13 top subplot, the rudder commands had only three positions; i.e. hard-to-port, hard-to-starboard and neutral. This resulted in around 5-10 degree amplitude oscillations in the heading response. Figure 13 middle subplot shows the desired and actual pitch responses of the vehicle, together with bang-bang elevator commands. As discussed in Section III-G.1, the desired pitch was computed by the depth control loop within pControlEngine; attempting to maintain the vehicle depth at the desired depth command. A constant roll angle of around 20 degrees was generally observed; primarily as a result of propeller torque. The original MIT-EMATT did not have split rudders or split elevators that allowed implementation of active roll control. We addressed this drawback in the optimized vehicle by having individually controlled split rudders and split elevators. In addition, we also included fixed fins with a 3-degree constant angle of attack to counteract the rolling effect due to propeller torque. Figure 13 bottom subplot illustrates the desired and actual depth responses of the vehicle, which had an oscillation of around 0.2 -0.4 m amplitude. This is primarily due to the band-bang control strategy and external disturbances. Figure 14 shows the vehicle navigation tracks of the original MIT-EMATT base vehicle from three identical zigzag missions, conducted at different thrust percentage values for PID tuning. These missions were conducted using pHelmPassive, which publishes pre-scripted, time triggered desired heading and desired depth commands. Such simplified missions were used during PID tuning and heading performance evaluation stages. Tracks of the original MIT-EMATT base vehicle conducting three identical zig-zag pattern missions using pHelmPassive at various propeller thrust percentages. B. In-water tests of the Morpheus AUV Similar to the MIT-EMATT base vehicle, both the optimized base vehicle and Morpheus AUV were intensively tested in-water for PID tuning and maneuverability-stability evaluations. Figure 15 shows the vehicle navigation track of the Morpheus AUV for two identical zig-zag missions; one with morphing fins engaging according to the argument discussed in Section IV-B, and the second without engaging morphing fins. As seen, both runs provide small turning radii, with the run that engaged morphing fins outperforming the other. Figure 16 illustrates a comparison of the starboard turns of the same runs. In addition, it also includes a similar turn of the original MIT-EMATT base vehicle. 
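One simple way to extract such turning radii from logged vehicle data is the steady-turn relation R = U/r, where U is the forward speed and r is the yaw rate. The sketch below illustrates this calculation; the variable names, the use of the median over the turn, and the example numbers are illustrative only and do not reflect the actual post-processing scripts used in this work.

```python
import numpy as np

def turning_radius(speed_mps, heading_deg, time_s):
    """Estimate a steady-turn radius R = U / r from logged speed and heading.

    speed_mps   : array of forward speeds [m/s] during the turn
    heading_deg : array of vehicle headings [deg]
    time_s      : array of timestamps [s]
    """
    heading_rad = np.unwrap(np.radians(heading_deg))
    yaw_rate = np.gradient(heading_rad, time_s)      # [rad/s]
    # Medians make the estimate robust to the transient at turn entry/exit.
    r = np.median(np.abs(yaw_rate))
    u = np.median(speed_mps)
    return u / r                                     # [m]

# Illustrative example: a vehicle turning at 30 deg/s while moving at 1.3 m/s
t = np.linspace(0.0, 10.0, 200)
psi = 30.0 * t                      # heading ramping at 30 deg/s
u = np.full_like(t, 1.3)
print(f"R ~ {turning_radius(u, psi, t):.2f} m")      # ~2.5 m
```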
The Morpheus vehicle provided a turning radius of around 2.5 m when morphing fins were not engaged. The morphing fins were able to further reduce the turning radius down to approximately 1.5 m. In comparison, the turning radius of the original MIT-EMATT vehicle was limited to around 10 m. As seen, a significant turning rate improvement was obtained through the use of morphing fins. Figure 17 shows the turning rate responses of the Morpheus AUV for six different example runs, both with and without engaging morphing fins. When comparing the top and bottom subplots, the starboard turns always had a significantly higher turning rate compared to the port turns (i.e. approximately 10 deg s−1 higher). We believe that this was a result of the propeller torque favoring the starboard turns. As seen from Figure 17, the Morpheus AUV was able to showcase an exceptional turning rate of around 25-35 deg s−1. A maximum turn rate improvement of around 35%-50% was gained through the use of morphing fins. VI. CONCLUSIONS We designed and constructed an A-sized base AUV, augmented with a stack of modular and extendable hardware and software, including navigation, autonomy, control and high-fidelity simulation capabilities. The base vehicle developed in this work was a derivation of the EMATT vehicle hullform, designed and produced by Lockheed Martin Corporation. During the first iteration, we used the original EMATT shell, including the original nose-cone, main-motor bay, and a free-flood tail-cone with solenoid-driven control surfaces; augmented with our own electronics and software stacks. In the second iteration of the base vehicle, we hydrodynamically optimized the nose and tail cones. The optimized nose-cone included an embedded GPS antenna, LED strobes, an external pressure sensor and a vacuum port; and the optimized tail-cone included four individually controlled, servo-based control surfaces. Subsequently, we extended the optimized base vehicle with a novel tuna-inspired morphing fin payload module (referred to as the Morpheus AUV), to achieve good directional stability and exceptional maneuverability; properties that are highly desirable for rigid hull AUVs, but are presently difficult to achieve because they impose contradictory requirements. The morphing fin payload allows the base AUV to dynamically change its stability-maneuverability qualities by using morphing fins, which can be deployed, deflected and retracted, as needed. The original MIT-EMATT base vehicle, the optimized base vehicle and the Morpheus AUV were all extensively field tested in-water in the Charles River, Massachusetts, USA by conducting hundreds of hours of operations over a period of two years. The Morpheus vehicle provided a turning radius of around 2.5 m when morphing fins were not engaged. The morphing fins were able to further reduce the turning radius down to approximately 1.5 m. In comparison, the turning radius of the original MIT-EMATT vehicle was limited to around 10 m. The Morpheus AUV was able to showcase an exceptional turning rate of around 25-35 deg s−1. APPENDIX A Utilizing the hydrodynamic coefficient representation of vehicle motion, Triantafyllou et al. [9] provide the theoretical derivation of the stability criterion for underwater vehicles, and how the stability-maneuverability is affected by the stern control surfaces and forward morphing fins. In this section, we summarize the theory given in Triantafyllou et al.
[9], and further extend the derivation to show how the stability-maneuverability is affected by the size of stern control surfaces, forward morphing fins, and shroud. The body-fixed axes system and notations utilized in this article is shown in Figure 18. A. Directional stability versus maneuverability Utilizing the hydrodynamic coefficient representation of vehicle motion [76], the linearized equations of sway and yaw motion of a torpedo-shaped vehicle, decoupled from surge, heave, roll and pitch motion, can be written as given in Equations 10 and 11. where, m is the mass, I zz is the moment of inertia about the origin, x G is the longitudinal location of the center of gravity, U is the forward speed of the vehicle, v andv are the sway velocity and acceleration, r andṙ are the yaw velocity and acceleration. The hydrodynamic coefficients −Yv and −Yṙ denote the added mass in sway due to swaying and yawing acceleration, respectively. −Nv and −Nṙ denote the added moment of inertia due to sway and yaw acceleration. Y v and Y r are the linear resistance force in sway due to sway and yaw velocities, and N v and N r are the linear resistance moments in yaw due to sway and yaw velocities. When the rudder is deflected to an angle δ, after the transients die down and a steady turning at forward velocity U, yaw rate r, and side velocity v is achieved, the acceleration terms can be dropped. Then, Equations 10 and 11 become Equations 12 and 13, respectively: where, Y δ and N δ are the linear hydrodynamic coefficients of the rudder. Note that the rudder forces are taken to be a linear function of the rudder angle within this section. Thus, the yaw rate, r can be written as: where, the denominator C can be shown to be the dynamic stability index, C, as given in Equation 15 [76]. If C > 0, the body is directionally stable, otherwise, it is linearly unstable. Equation 15 can be recasted as: where, x r is the distance of the Center of Rotational motion (CR) from the origin, i.e. the location where the side force acts when the body performs a pure rotation at constant speed U and (small) angular velocity r, and v = 0. x AC is the distance from the origin to the Aerodynamic Center (AC), i.e. the location where the side force acts when the body performs a steady translation at forward velocity U and side velocity v, while r = 0 (what is referred to, also, as sideslip velocity). As noted by [9] [9], the aerodynamic center is a critical quantity in determining the body stability. Since Y v is always a negative quantity [76], and (mU − Y r ) > 0, as mU is a large positive quantity, the stability criterion can be recast as: x r > x AC (19) As the difference between these two values increases, the linear stability of the vehicle increases while the maneuverability decreases as shown from equation (14). B. Presence of stern control surfaces The effects of the rudder are next added to the equations of motion, with the subscript b corresponding to the bare body coefficients. Following [9] [9], the updated hydrodynamic coefficients are: where x R = N δ /Y δ is the location where the force acts on the rudder and Y δ and N δ are defined earlier. The stability index is now updated to take into account the effect of the rudder. Plugging the updated hydrodynamic coefficients into Equation (15) and denoting the stability index of just the bare body by C b leads to the following updated stability index: (24) which reduces to: The yaw rate from Equation (14) is calculated within linear theory as: C. 
The size of stern control surfaces The lift generated by the stern control surfaces contributes to the hydrodynamic coefficients, as shown in Equations 20-23. To estimate how much lift is generated by the rudder, the general form for lift is used: where C L is the coefficient of lift and S rudder is the area of one of the two rudders. As [9] [9] concluded, the addition of these stern control surfaces can stabilize an initially unstable vehicle, as long as it is above a threshold value that provides stability. Rearranging Equation 25 determines what this threshold value for A should be in order to bring the stability index C from a negative to a positive value. Since A is determined by the amount of lift generated for a certain speed, the size of the rudder is what keeps this value close to the threshold value. If the size of the rudder increases, resulting in an increase in A, the vehicle surpasses the stability threshold, becoming more stable and reducing the turning rate of the vehicle. The value of C should remain close to this stability transition in order for the rudder to have a significant effect on the turning rate. D. The presence of forward morphing fins The addition of forward morphing fins to an underwater vehicle has also an effect on the stability and maneuverability of the vehicle. The hydrodynamic coefficients from Equations 20-23 are updated to take into account the forward morphing fin following a similar process used for the rudder [9]: where x f = N f δ /Y f δ is the location where the force acts on the vehicle, Y f δ is the fin force per unit rudder angle and N f δ is the moment per unit angle. Once again, the stability index in Equation 25 is updated to reflect the effects of the forward morphing fin: with B = Y f δ U < 0 and η = x f . Hence, the yaw rate in Equation 28 can be calculated with the updated stability index, C, that takes into account the effects of both the rudder and forward morphing fin. The values of mU − Y r,b > 0 and −Y v,b > 0 require that η > x r and η < x AC . In other words, the forward morphing fin needs to be positioned ahead of the bare body center of rotational motion and behind the bare body aerodynamic center of the vehicle, requiring that the bare body vehicle be initially directionally unstable. For a given stable vehicle and rudder configuration, the main way to increase the turning rate is to decrease the stability parameter. E. The size and angle-of-attack of forward morphing fins A fish bends its body when it initiates a turn [9]. Since the forward morphing fins are located ahead of the center of gravity, the fins deflect in the opposite direction of the rudder. This can be adapted to the coefficients derived so far for the case when δ f = −δ, the magnitude of the forward morphing fin and rudder angles are equal but in opposite directions. The deflection does not affect the stability criterion. The turning rate for a vehicle with both rudder and forward morphing fins is: where η = x f is the fin position, ξ = −x R is the rudder position, A = Y δ U and B = Y f δ U . The last term in the equation contributes the strongest increase in the turning rate since −A > 0, −B > 0, and −Y v.b > 0 [9]. In order to increase the rate of turning of the vehicle, the forward morphing fins should have a comparable size and lift generation as the rudder.
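To make the role of the stability index and the stern surfaces concrete, the sketch below evaluates a linear directional stability index and the x_r versus x_AC criterion for an illustrative bare body, and then folds in a stern lifting surface. The index is written in the form commonly quoted in maneuvering texts, C = Y_v(N_r − m x_G U) − N_v(Y_r − mU), which is consistent with the recast criterion x_r > x_AC above but may differ from the paper's Equation 15 by a normalization; all coefficient values are illustrative assumptions, not the Morpheus hydrodynamic coefficients.

```python
def stability_index(Yv, Yr, Nv, Nr, m, xG, U):
    """Linear directional stability index; C > 0 means directionally stable.

    Assumed form (standard in maneuvering theory):
        C = Yv*(Nr - m*xG*U) - Nv*(Yr - m*U)
    which is equivalent to requiring x_r > x_AC, with
        x_AC = Nv / Yv   and   x_r = (Nr - m*xG*U) / (Yr - m*U).
    """
    return Yv * (Nr - m * xG * U) - Nv * (Yr - m * U)

def add_lifting_surface(Yv, Yr, Nv, Nr, Yv_fin, x_fin):
    """Fold a fin into the bare-body coefficients.

    Yv_fin : fin side-force derivative w.r.t. local cross-flow (negative)
    x_fin  : longitudinal fin position (negative aft of the origin)
    """
    return (Yv + Yv_fin,
            Yr + x_fin * Yv_fin,
            Nv + x_fin * Yv_fin,
            Nr + x_fin**2 * Yv_fin)

# Illustrative numbers only.
m, xG, U = 8.0, 0.0, 1.5
bare = (-30.0, 2.0, -12.0, -3.0)          # Yv, Yr, Nv, Nr of the bare body
print("bare body C:", stability_index(*bare, m, xG, U))           # < 0, unstable

with_stern = add_lifting_surface(*bare, Yv_fin=-10.0, x_fin=-0.4)  # stern surfaces
print("with stern surfaces C:", stability_index(*with_stern, m, xG, U))  # > 0, stabilized
```

The same helper can be used to fold in a forward morphing fin contribution at a positive x_fin and to observe how the index, and hence the turning rate of Equation 28, shifts as the fin size changes.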
2022-12-23T06:42:11.783Z
2022-12-22T00:00:00.000
{ "year": 2022, "sha1": "c95bef94a7a121dd0fee251a90f5b1625be654d2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c95bef94a7a121dd0fee251a90f5b1625be654d2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Engineering", "Physics" ] }
249888937
pes2o/s2orc
v3-fos-license
Deep Learning Models on CPUs: A Methodology for Efficient Training GPUs have been favored for training deep learning models due to their highly parallelized architecture. As a result, most studies on training optimization focus on GPUs. There is often a trade-off, however, between cost and efficiency when deciding on how to choose the proper hardware for training. In particular, CPU servers can be beneficial if training on CPUs was more efficient, as they incur fewer hardware update costs and better utilizing existing infrastructure. This paper makes several contributions to research on training deep learning models using CPUs. First, it presents a method for optimizing the training of deep learning models on Intel CPUs and a toolkit called ProfileDNN, which we developed to improve performance profiling. Second, we describe a generic training optimization method that guides our workflow and explores several case studies where we identified performance issues and then optimized the Intel Extension for PyTorch, resulting in an overall 2x training performance increase for the RetinaNet-ResNext50 model. Third, we show how to leverage the visualization capabilities of ProfileDNN, which enabled us to pinpoint bottlenecks and create a custom focal loss kernel that was two times faster than the official reference PyTorch implementation. All these varieties of hardware make it hard to propose a universal methodology for the efficient training of DL models. Since GPUs have dominated deep learning tasks, comparatively little attention has been paid to optimizing models running on CPUs, especially for training (Kalamkar et al. (2020)). Previous DL model research conducted about CPUs focused mostly on performance comparison of CPUs and GPUs (Wang et al. (2019); Buber and Banu (2018); Shi et al. (2016); Dai and Berleant (2019)), or only focused on CPU inference (Qian (2020)). A key question to address when optimizing training performance on CPUs is what metrics should guide the optimization process (Shen et al. (2023b)). Several metrics and benchmarks have been proposed to measure DL workload and training performance. For example, Multiply-Accumulate (MAC) has been used as a proxy for flops to measure computational complexity for Convolutional Neural Network (CNN) models (Chang et al. (2018)). Time-to-Train (TTT) has been widely adopted to measure the training performance of a DL model by measuring the time models take to reach certain accuracy metrics. NetScore (Wong (2019)) was proposed as a universal metric for DL models that balances information density and accuracy. Until recently, however, no widely accepted benchmark for DL models existed that incorporated a wide range of domain tasks, frameworks, and hardware. MLPerf (Mattson et al. (2019)) was proposed as a comprehensive DL benchmarking suite that covers a variety of tasks and hardware. Many major tech companies have contributed to this effort by competing for better performance. Intel has been actively participating in the MLPerf challenge to improve the training performance of Deep learning models across multiple domains. To address portability issues related to AI running on different hardware platforms, Intel has open-sourced the oneAPI Deep Neural Network Library (oneDNN) (One), which is a cross-platform performance library of basic deep learning primitive operations, including a benchmarking tool called benchDNN. 
Intel has also created optimized versions of popular frameworks with oneDNN, including Intel ® Optimizations for TensorFlow and Intel ® Extensions for PyTorch (Ipe). Few guidelines exist, however, for profiling and optimizing DL model training on CPUs. Several fundamental research challenges must be addressed when training DL models on CPUs, including the following: Libraries for CPU are less well-known. It is therefore essential to understand how to fix performance bottlenecks, e.g., by locating and implementing custom operation kernels for both forward and backward propagation, as well as adopting proper lowprecision training so computing time can be reduced without sacrificing accuracy for CPUs. 3. How to set achievable goals. Projections for CPUs are often done in a crude way by dividing CPU performance in flops over flops required for model training. In a computation-bounded scenario, it is essential to create an experiment-based projection for deep learning models so that the goal is realistically achievable, i.e., not only theoretically achievable but also considers hardware limits and kernel optimizations. To address these challenges, we designed a structured top-down method that helped us prioritize different optimizing options for training DL models (e.g., RetinaNet (Lin et al. (2017))) on CPUs. Incorporating this new approach, we also developed a DL performance profiling toolkit called ProfileDNN that is oneDNN-aware and supports profiling and projection at the model level, thereby bridging the gap for oneDNN-specific model-level projection. The remainder of this paper is organized as follows: Section 2.1 and Section 2.2 summarize different profile tools and their contribution to locating hot spots and discrepancies; Section 2.3 describes projection goal and procedure, as well as ProfileDNN's structure and workflow; Section 2.4 through Section 2.8 discuss recommendations and approaches to enable efficient training without sacrificing accuracy; Section 3 analyzes the training efficiency and convergence under distributed situation; and Section 4 presents concluding remarks and our future work. All experiments in this paper were performed on Intel Xeon Cooper Lake processors. Method Summary First, we summarize the method component of our contribution to optimizing training on CPUs. Our goal is to provide a structured approach for users to optimize training DL models on CPUs. Our method adopts a top-down approach similar to what Yasin (2014) has described that aims to locate the critical bottlenecks in a fast and feasible manner. We believe the workflow can be roughly categorized into three stages: profiling, projection, and optimization. As shown in Fig 1, we break each stage into different tile groups. Users are advised to follow the method groups from left to right, as each group benefits from the previous group's results. Our toolkit ProfileDNN can work both as a profile tool and a projection tool. Profile and Tracing During the profile stage, users should observe the breakdown of operation kernel components of the DL model and their relative significance. Special attention should be paid to the discrepancies between their model and data versus the reference implementation and original use case. For example, do all the major kernel operations of reference exist in their model? Likewise, does the kernel component percentage stays about the same? If the answer to either question is "no" their code may perform worse due to poor oneDNN kernel adoption. 
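As a concrete starting point for this kind of kernel-level comparison, the sketch below aggregates per-primitive execution time from a oneDNN verbose log (produced by running the workload with DNNL_VERBOSE set to 1, as is done for ProfileDNN). The exact field layout of the verbose lines differs slightly between oneDNN versions, so the field indices used here are assumptions to be adapted; this is not the actual ProfileDNN parser.

```python
import csv
from collections import defaultdict

def kernel_breakdown(log_path):
    """Aggregate oneDNN verbose output into time-per-primitive totals.

    Assumes lines of the form
        dnnl_verbose,exec,cpu,convolution,...,<time_ms>
    i.e. the primitive kind is the 4th comma-separated field and the
    execution time in milliseconds is the last field (adjust the indices
    for the oneDNN version in use).
    """
    totals = defaultdict(float)
    with open(log_path) as f:
        for row in csv.reader(f):
            if len(row) < 5 or not row[0].endswith("_verbose") or row[1] != "exec":
                continue
            try:
                totals[row[3]] += float(row[-1])
            except ValueError:
                continue  # header or malformed line
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

# Example usage:
# for prim, ms in kernel_breakdown("train_dnnl_verbose.log").items():
#     print(f"{prim:20s} {ms:10.1f} ms")
```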
ProfileDNN helps users better compare the distribution of the kernel components by producing intuitive visualizations. This tactic was also adopted by vTune (Reinders (2005)). ProfileDNN supports all primitive kernels (conv, pool, matmul, reorder, etc.) from benchDNN. Convolutional Neural Networks (CNNs) (Krizhevsky et al. (2012)), Recurrent Neural Networks (RNNs) (Graves et al. (2013)), and Transformers (Vaswani et al. (2017)) are some of the most popular neural network models today. ProfileDNN can break down the primitive operations by type and directory, as shown in Fig 2a-c. We found that both CNN and RNN models spend more time doing back-propagation than forward-propagation. Transformer models consist mostly of inner product and matrix multiplication operations, alongside the softmax operation that is often a performance bottleneck for transformer-based models (Lu et al. (2021); Li et al. (2023)). A primitives-level breakdown is often sufficient to locate model bottlenecks since many DL training tasks are computation-bound. In the case of a memory/cache-bounded scenario, however, a tracing analysis is needed to inspect the order in which each operation runs. A trace is an ordered set of span sequences, where each span has an operation name, a start and end timestamp, as well as relations to other spans (child process, etc.). If a trace is highly fragmented, there is significant context switching, so a custom merged operator may help improve performance. vTune is another very powerful tool for profiling CPUs and heavily adopts the top-down methodology (Yasin (2014)). vTune divides the CPU workflow pipeline into frontend and backend, with the former bounded by latency and bandwidth, and the latter bounded by core (computation) and memory (cache), as shown in Fig 3. The first round of profiling can be followed by micro-architecture exploration that measures CPU utilization rate (spinning time), memory bandwidth, and cache (L1, L2, or L3) miss rate. After pinpointing the primitive operation with the most computation-heavy footprint, algorithm- or implementation-level optimizations can be applied. If memory is the bottleneck, memory access and IO analysis can also be performed on individual operations. Data Discrepancy An easily overlooked discrepancy is the difference between the reference dataset and the custom dataset. The data distribution can not only affect the performance of the same model, but it can also sometimes change the model itself (Shen et al. (2023c)). For example, RetinaNet-ResNext50 is a detection model that changes structure based on the number of classes in the dataset. Figure 4: Open Image vs COCO training time ratio breakdown. After we switched the dataset from COCO (Lin et al. (2014)) to OpenImage (Kuznetsova et al. (2020)), the training time increased dramatically. We found that the dataset size increased 10 times, but the training time per epoch increased 20 times, which is not proportional. Part of this increase can be attributed to a bigger fully connected (FC) layer in the backbone. In particular, we found that the major increase in time is within the focal loss calculation, caused by three times more classes, as shown in the detailed breakdown in Fig 4. A tracing analysis also supported this conclusion by showing that one-third of the backward calculation time was spent on focal loss. We addressed this issue by implementing our custom focal loss kernel, as discussed in Section A.3.
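One lightweight way to reproduce this kind of check, i.e., how much of an iteration is spent inside the loss computation, is to wrap the loss in a named region with the PyTorch profiler. The sketch below illustrates the pattern; the model, loss, and data names are placeholders and this is not the actual MLPerf RetinaNet training code.

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

def profile_one_step(model, criterion, images, targets, optimizer):
    """Profile a single CPU training step and report where time is spent."""
    with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        optimizer.zero_grad()
        outputs = model(images)
        with record_function("loss_computation"):   # named region for the loss
            loss = criterion(outputs, targets)
        with record_function("backward"):
            loss.backward()
        optimizer.step()
    # Sort operators by total CPU time; the named regions appear as rows too.
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=15))
```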
Projection and Toolkit structure The projection of DL models aims to determine the theoretical performance ceiling of a specific model/framework/hardware combination. Intel has an internal tool that can perform projection for DL models, but this tool requires much manual setting and tuning. BenchDNN can be used to project performance on specific hardware automatically, but only one operation at a time. We therefore designed ProfileDNN to combine the advantage of both since it can project the whole DL model with little manual effort. As is shown in Fig 5, ProfileDNN takes in an arbitrary log file produced by running deep learning models on a platform that supports oneDNN with DNNL VERBOSE set to 1. The stats.py file then collects and cleans the raw log file into CSV format, produces a template parameter file, calculates and plots the component distribution of primitive operations. The benchDNN.sh file runs each primitive operation multiple times and takes the average. The efficiency.py then takes a weighted sum of all operations' time by the number of calls and produces an efficiency ratio number. To ensure our toolkit can accurately reproduce the running behavior of the kernels from the original model, we ensure both the computation resources and the problem descriptions are the same. We use numactl to control the number of CPU cores and memory binding and the mode is set to p (performance) in benchDNN to optimize performance. These parameters are carefully controlled and summarized in Table 1 Dataloader and Memory Layout By examining the DL training process from the same vTune top-down perspective shown in Fig 3, the data loader can be seen as a frontend bounded by bandwidth and latency. There are three sources of bottlenecks for the data loader: I/O, decoding, and preprocessing. We found similar performance for data in NVMe or loaded to RAM and the I/O overhead is negligible. We observed a better decoding performance by adopting Pillow-SIMD and accimage as the backend in torchvision. A PyTorch dataloader parameter controls the number of worker processes, which are usually set to prevent blocking the main process when training on GPUs. For training on CPUs, however, this number should not be set to minimize memory overhead. Since CPU RAM memory is usually larger than GPU memory-but has a smaller bandwidth-training on CPUs has the advantage of allowing larger batch size and training larger model (Wang et al. (2019)). Here we define n as batchsize, c as channel, h as height, and w as width. The recommended memory layout in Intel ® Extension for PyTorch is nhwc (channel last) for more efficient training, though the default layout in benchDNN is nchw. We set the default behavior of ProfileDNN to adopt nchw to follow tradition. If the log input specifies the memory layout, ProfileDNN will automatically override the default. Library Optimization Substituting slow operation implementations with a more efficient library can improve performance significantly, as we discovered by replacing the official PyTorch implementation with the Intel ® Extension for PyTorch counterpart. ProfileDNN helped identify a discrepancy between the number of backward convolution calls between the official PyTorch vs. the Intel ® Extension for PyTorch library. Using a detailed analysis of the computation graph and our ProfileDNN-based visualization, we found calls emanated from the frozen layers in the pre-trained model (ResNext backbone). 
Our analysis helped increase the performance of RetinaNet-ResNext50 model training with 2 fixed layers by 16%. We also found that the primitive operation frozenbachnorm2d was missing in Fig 2d and torchvision.ops.misc.FrozenBatchNorm2d was interpreted as mul and add ops, which meant it was not a single oneDNN kernel operation. Our analysis indicated that bandwidth-limited operations made the torchvision.ops.misc.FrozenBatchNorm2d operation inefficient. It therefore cannot be fused with other operations to reduce memory accesses. Training performance increased by 29.8% after we replaced the torchvision.ops.misc.FrozenBatchNorm2d operation with IPEX.nn.FrozenBatchNorm2d. Low-precision Training Low-precision training has proven an efficient way for high-performance computing and BF16 (Brain Floating Point) is widely supported by various Deep learning hardware. BF16 is unique since it has the same range as float32 but uses fewer bits to represent the fraction (7 bits). This BF16 datatype characteristic can be beneficial when computing speed is important, but can also lead to accuracy loss when compared with float32 in calculating the loss. As shown in Fig 6, computation time is almost half when done in BF16 compared to float32. There is a significant discrepancy between the forward/backward training time ratio compared with that of bare-bone kernel time. This discrepancy indicates highly inefficient non-kernel code in the forward pass. We found that the loss function does not scale well and comprises a significant portion of computation time. After locating the focal loss as having significant overhead, we implemented our version of the focal loss kernel. However, the loss result is different from the original implementation. We pinpointed the accuracy loss as happening during low-precision casting to BF16 by torch.cpu.amp.autocast. Unless convergence can be guaranteed, therefore, casting data into BF16 should be avoided for loss calculation, especially when reduction operations are involved. Layer Fusion and Optimizer Fusion In inference mode, certain layers can be fused for a forward pass to save cache copying operation since an intermediate is not needed. In training mode, however, the layers containing trainable weights need to save the intermediate for backpropagation. When oneDNN is in inference mode, it enables batchnorm+relu and conv+relu respectively, but not frozenbatchnorm (FBN)+relu. OneDNN already supports eltwise (linear, relu) post-ops for conv and chaining of post-ops. We therefore treat FBN as a per-channel linear operation to enable conv+FBN+relu. This fusion potentially increases performance 30% and is a work-in-progress (WIP). Intel ® Extension for PyTorch currently supports fusion of SGD (Robbins and Monro (1951)) and the Lamb (You et al. (2019)) optimizer. We tested a fused/unfused Lamb optimizer with RetinaNet and found a 5.5X reduction in parameter updating time when the optimizer is fused. Custom Operation Kernel Custom operation kernels are essential to optimize performance by eliminating computation overhead, e.g., unnecessary copying and intermediates. These kernel implementations must be mathematically equivalent to the reference code and can show significant performance gains under all or most circumstances, as discussed below. 
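Putting the preceding recommendations together (channels-last layout, the Intel Extension for PyTorch, and BF16 autocast restricted to the heavy forward compute while the loss is evaluated in float32), a training-step sketch could look as follows. This is an illustrative pattern, assuming ipex.optimize and torch.cpu.amp.autocast are available in the installed versions; it is not the exact code behind the reported results.

```python
import torch
import intel_extension_for_pytorch as ipex   # Intel Extension for PyTorch

def prepare_and_step(model, optimizer, criterion, images, targets):
    # One-time preparation (shown inline here for brevity):
    model = model.to(memory_format=torch.channels_last)        # nhwc layout
    model, optimizer = ipex.optimize(model, optimizer=optimizer,
                                     dtype=torch.bfloat16)

    images = images.to(memory_format=torch.channels_last)
    optimizer.zero_grad()
    # Run the heavy forward pass in BF16 ...
    with torch.cpu.amp.autocast(dtype=torch.bfloat16):
        outputs = model(images)
    # ... but keep the (reduction-heavy) loss in float32 to avoid the
    # accuracy issues observed when the loss itself is computed in BF16.
    loss = criterion(outputs.float(), targets)
    loss.backward()
    optimizer.step()
    return loss
```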
Theoretical deduction Instead of relying on the PyTorch implementation (Appendix A.1) of the forward pass for focal loss and adopting the default generated backward pass, we implemented a custom focal loss kernel for both the forward and backward pass (the backward kernel implementation is optional, as implicit autograd can be generated). Focal loss can be represented as in Equation 1 and we adopt γ = 2 and α = 0.25. The forward pass can be simplified further by assuming x and y are real in Equation 2. Lastly, since y is a binary matrix, all the terms that contain y(y-1) equal 0 and can be removed as shown in Equation 3. The backward equation is shown in Appendix A.2. Implementation and Assessment The operators in ATen of PyTorch can be roughly categorized into two types: in-place operations and standard operations, with the former suffixed by an underscore (as in add_). Since an in-place operation modifies the tensor directly, the overhead of copying or creating new spaces in the cache is avoided. The implementation shown in Appendix A.3 adopts in-place operations as much as possible, which enhances efficiency. After confirming that our kernel implementation is mathematically equivalent to the reference implementation, we tested our kernel against the reference code under both float32 and BF16 settings. As shown in Fig 6, the custom forward kernel is 2.6 times faster than the default implementation under the BF16 setting. Although the PyTorch framework can generate implicit autograd for our custom kernel, its performance is not ideal. The custom backward kernel is 1.3 times faster than the reference implementation and 1.45 times faster than the generated implicit autograd kernel. We also discovered that the custom backward kernel can boost forward kernel performance, and we suspect that the explicit backward kernel prevents the forward kernel from saving unnecessary intermediates. With the combined improvements, the custom focal loss kernel is two times faster overall. Our code has been integrated into Intel ® Extension for PyTorch and will be available in that library soon. Distributed Training Compared to inference, which can be scaled out amongst independent nodes, training DL models often requires much greater computing power working synchronously. Meeting this need can be accomplished by scaling up nodes with additional CPU resources or by scaling out amongst multiple nodes. When training a system at scale (whether multiple nodes, multiple sockets, or even a single socket), it is necessary to distribute the workload across multiple workers. Coordination among distributed workers requires communication between them. Distributing workloads on CPUs can be performed via multiple protocols and middleware, such as MPI (Message Passing Interface) (Gropp et al. (1999)) and Gloo (glo). We use MPI terminology in subsequent sections. Distributed Training Performance To maximize training performance, a training workload should target one thread per CPU core of each system node. For example, an 8-socket system with 28 cores per socket should target 224 total threads. The total threads may be apportioned across several workers identified by their rank, e.g., 8 ranks of 28 threads, 16 ranks of 14 threads, 32 ranks of 8 threads, etc. The selection of ranks and threads should not cause any rank to span multiple sockets. In practice, better performance may be achieved by utilizing more ranks with fewer threads each, rather than fewer ranks with more threads each at the same global batch size.
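A minimal sketch of how such a rank/thread split might be set up with PyTorch distributed data parallelism on CPUs is shown below. The choice of the Gloo backend and the environment-variable based rendezvous are illustrative assumptions; production runs typically use an MPI or oneCCL launcher and pin each rank to a subset of cores on a single socket.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def init_worker(model, threads_per_rank=28):
    """Initialize one distributed CPU worker (one rank)."""
    # Rank and world size are assumed to be provided by the launcher
    # (e.g. mpirun or torchrun) through environment variables.
    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    dist.init_process_group(backend="gloo", rank=rank, world_size=world_size)

    # Keep each rank's compute threads on its own share of cores; the core
    # pinning itself (e.g. numactl or KMP_AFFINITY) is handled outside this script.
    torch.set_num_threads(threads_per_rank)

    # Gradients are averaged across ranks; the global batch size is the
    # local batch size multiplied by world_size.
    return DDP(model)
```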
Table 2 shows how the throughput goes up diagonally from bottom-left to top-right. However, the number of available ranks is limited by the available system memory, model Training Convergence As a training system is scaled-out to more nodes, sockets, or ranks, two factors are known to degrade the model's convergence time: weak scaling efficiency and convergence point. Weak scaling efficiency is a ratio of the performance of a system to N systems doing N times as much work and tends to lag behind the linear rate at which resources are added. This phenomenon and its causes are well-documented (Sridharan et al. (2018)) across hardware types and are not explored further in this paper. A model's convergence point is the second factor that impacts convergence time as a training system scales. In particular, the global batch size increases as a distributed system scales out, even though the local batch size per worker remains constant. For instance, if a 2-socket system launches a combined 8 ranks with a global batch size of 64 (BS=8 per rank), when scaled out to 8-sockets, the global batch size becomes 256 even though each rank has the same local batch size. As the number of epochs required to converge at a model's target accuracy increases the global batch size of a training workload also increases, as shown in Fig 7. This increase in the epochs to reach a convergence point can detract substantially from the increased resources. When planning a system scale-out, it is therefore critical to account for the resulting convergence point and mitigates it by reducing the local batch size if possible (mlc). Concluding Remarks This paper explores various aspects of optimization for training DL models on CPUs, in addition to a method guide. We present a DL profile/projection toolkit called ProfileDNN that helped us locate several issues for training RetinaNet-ResNext50, which when fixed, lead to a 2 times performance increase. We also created a custom Focal Loss kernel that is 1.5 times faster than the PyTorch reference implementation when running on CPUs. The following is a summary of the lessons learned from our study of training deep learning models using CPUs: • Efficient DL frameworks that are optimized for CPUs like Intel ® extension for Py-Torch can reduce training time dramatically with little cost. • Model profiling should be done both on the reference code and custom implementations, especially when the data set is changed. Discrepancies between different implementations and corresponding low-level op distributions can help pinpoint the bottlenecks. • Implementing both forward pass and backward pass explicitly for custom kernels leads to the best training performance. • Local batch size is highly correlated with convergence point and should be reduced properly when planning a system scale-out. Our future work will focus on testing our methodology and toolkit on other popular models (Fu et al. (2022) Fu et al. (2021) and conduct a more in-depth study on optimizing training DL models with distributed CPU clusters. As Large Language Models (LLMs) such as ChatGPT (White et al. (2023)) gain widespread popularity, the significance of leveraging preexisting infrastructure grows more pronounced.
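For reference, a plain PyTorch sketch of the sigmoid focal loss discussed throughout this work (γ = 2, α = 0.25) is given below. It follows the commonly used reference formulation rather than the optimized in-place oneDNN kernel of Appendix A.3, and is intended only to make the quantity being optimized explicit.

```python
import torch

def sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Reference-style sigmoid focal loss (not the optimized kernel).

    logits  : raw predictions x (any shape)
    targets : binary ground-truth matrix y of the same shape
    """
    p = torch.sigmoid(logits)
    ce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    p_t = p * targets + (1.0 - p) * (1.0 - targets)   # probability of the true class
    loss = ce * (1.0 - p_t) ** gamma                  # down-weight easy examples
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * loss).sum()
```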
2022-06-22T01:16:40.538Z
2022-06-20T00:00:00.000
{ "year": 2022, "sha1": "777644c82f2e00597e028bc4d895c5488bf55606", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "777644c82f2e00597e028bc4d895c5488bf55606", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
233182410
pes2o/s2orc
v3-fos-license
Electrochemical Activation of Atomic Layer-Deposited Cobalt Phosphate Electrocatalysts for Water Oxidation The development of efficient and stable earth-abundant water oxidation catalysts is vital for economically feasible water-splitting systems. Cobalt phosphate (CoPi)-based catalysts belong to the relevant class of nonprecious electrocatalysts studied for the oxygen evolution reaction (OER). In this work, an in-depth investigation of the electrochemical activation of CoPi-based electrocatalysts by cyclic voltammetry (CV) is presented. Atomic layer deposition (ALD) is adopted because it enables the synthesis of CoPi films with cobalt-to-phosphorous ratios between 1.4 and 1.9. It is shown that the pristine chemical composition of the CoPi film strongly influences its OER activity in the early stages of the activation process as well as after prolonged exposure to the electrolyte. The best performing CoPi catalyst, displaying a current density of 3.9 mA cm–2 at 1.8 V versus reversible hydrogen electrode and a Tafel slope of 155 mV/dec at pH 8.0, is selected for an in-depth study of the evolution of its electrochemical properties, chemical composition, and electrochemical active surface area (ECSA) during the activation process. Upon the increase of the number of CV cycles, the OER performance increases, in parallel with the development of a noncatalytic wave in the CV scan, which points out to the reversible oxidation of Co2+ species to Co3+ species. X-ray photoelectron spectroscopy and Rutherford backscattering measurements indicate that phosphorous progressively leaches out the CoPi film bulk upon prolonged exposure to the electrolyte. In parallel, the ECSA of the films increases by up to a factor of 40, depending on the initial stoichiometry. The ECSA of the activated CoPi films shows a universal linear correlation with the OER activity for the whole range of CoPi chemical composition. It can be concluded that the adoption of ALD in CoPi-based electrocatalysis enables, next to the well-established control over film growth and properties, to disclose the mechanisms behind the CoPi electrocatalyst activation. ■ INTRODUCTION Meeting the word's energy requirements by adopting renewable sources, such as wind and solar energy, also demands viable solutions in terms of electricity storage. 1 A valid approach is water electrolysis, yielding hydrogen and oxygen, which can then be converted back to electricity on-demand 2 or serve as building blocks for the production of synthetic fuels. 3 However, the energy efficiency of the water electrolysis process currently employed in the industry is limited by the sluggish kinetics and high overpotentials of the oxygen evolution reaction (OER). 4 State-of-the-art catalysts for the OER with much higher activities and lower overpotentials than those used in industries are available, but they are made of noble metal oxides. 1 Although these catalysts lead to high energy efficiencies, the resources they are made of are scarce, and as such, they are not suitable for large-scale deployment. Alternative catalysts are needed based only on earth-abundant elements, while still providing excellent energy efficiencies. 5 It has been found that among the earth-abundant elements, cobalt is one of the best alternatives to noble metals 6,7 and catalysts based on cobalt phosphate (CoPi) have demonstrated excellent catalytic performance, both under alkaline and neutral conditions. 8−10 However, as-deposited CoPi layers often do not show good OER performance. 
Instead, these films require activation by exposure of the film to a high potential for a long time or by repeatedly cycling the applied potential. 11−13 Due to the wide range of employed preparation and activation conditions, the activity of CoPi catalysts reported in the literature is highly inconsistent, 14−16 and much remains unknown about the activation process and the nature of these catalysts under operating conditions. The activation of CoPi has previously been explained by conversion to amorphous or nanocrystalline cobalt hydroxide. 17 X-ray absorption spectroscopy studies of activated catalysts indicate that this cobalt hydroxide forms nanometer-sized sheets of edge-sharing CoO 6 octahedra, with the intersheet spaces occupied by water molecules and phosphate groups. 18−21 The formation of this layered hydroxide structure is thought to result in the bulk of the film becoming catalytically active. 22 Furthermore, density functional theory calculations indicate that at the surface of these sheets, oxidation of two neighboring Co atoms with pendent oxygen groups to Co 4+ allows for the direct coupling of these oxygen groups to form O 2 , with a simultaneous reduction of the Co 4+ centers. 23 The phosphate groups in CoPi mediate this process by acting as a flexible support to accommodate morphological changes during oxidation and reduction of the Co centers in the nanosheets, as well as by acting as proton acceptors. 24 While the characterization of CoPi films before and after activation is reported, limited research has been carried out to gain insight into the (chemical and morphological) changes that CoPi undergo during its activation. Furthermore, the role of the chemical composition of the pristine CoPi film in the whole activation process has not been the subject of investigation so far. In this regard, atomic layer deposition (ALD) offers the opportunity to tune the chemical composition of the electrocatalyst. 25−28 ALD has already been adopted for the deposition of catalysts based on noble metals such as palladium, 29,30 platinum, 31−39 and alloys thereof; 36 transition-metal oxides such as nickel oxide 40−42 and cobalt oxide; 43,44 and various transition-metal phosphates 11 including CoPi. 45 In our previous work, we showed that ALD can be used to prepare smooth, amorphous CoPi films with a tunable chemical composition based on the Co/P atomic ratio. 45 In this work, we further exploit the unique capabilities of ALD in order to study the relationship between chemical and structural changes during activation as well as the influence of the initial film composition on the properties of the activated film. Our results demonstrate that the activation process proceeds in parallel with the progressive leaching of phosphorous and formation of cobalt hydroxide as indicated by X-ray photoelectron spectroscopy (XPS). Analysis of cyclic voltammetry (CV) measurements shows that this change in the chemical composition is directly related to the electrochemical activation of cobalt species and is accompanied by an increase in the OER performance. Subsequently, the effect of the chemical composition of the pristine film (Co/P ratio) on the activation process is studied. The OER activity after activation differs significantly depending on the initial Co/P ratio. By comparing the electrochemical active surface area (ECSA) of these samples, we demonstrate that after activation, the OER activity is directly proportional to the ECSA. 
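As background for the ECSA comparison referred to above, a common way to estimate the ECSA is from the double-layer capacitance C_dl, obtained from the slope of the capacitive current versus scan rate in a non-faradaic potential window and normalized by a specific capacitance C_s. The sketch below illustrates this generic procedure; it is not necessarily the exact protocol used in this work, and the value of C_s in particular is an assumption that depends on the electrode material.

```python
import numpy as np

def ecsa_from_cdl(scan_rates_mV_s, capacitive_currents_mA, c_s_mF_cm2=0.040):
    """Estimate ECSA from double-layer charging currents.

    scan_rates_mV_s        : CV scan rates [mV/s] in a non-faradaic window
    capacitive_currents_mA : corresponding charging currents [mA]
    c_s_mF_cm2             : assumed specific capacitance [mF/cm^2]
    Returns (C_dl in mF, ECSA in cm^2).
    """
    rates_V_s = np.asarray(scan_rates_mV_s) / 1000.0
    currents_mA = np.asarray(capacitive_currents_mA)
    # i_dl = C_dl * (dE/dt): the slope of current vs. scan rate gives C_dl.
    c_dl_mF, _ = np.polyfit(rates_V_s, currents_mA, 1)
    return c_dl_mF, c_dl_mF / c_s_mF_cm2

# Illustrative numbers only:
rates = [10, 20, 40, 80]                  # mV/s
currents = [0.012, 0.023, 0.046, 0.091]   # mA
print(ecsa_from_cdl(rates, currents))
```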
■ RESULTS AND DISCUSSION Chemical and Morphological Characterization of the As-Deposited Samples. As detailed in the experimental section, CoPi samples were deposited on Si(100) wafers, as well as on fluorine-doped tin oxide (FTO)-coated glass slides, which we will refer to as FTO in the rest of this document. FTO served as a substrate for the electrochemical characterization due to its high conductivity and its poor performance as an OER catalyst, which allows us to exclude any contribution from the substrate. 45 As the FTO substrate is textured, all quantities reported here have been normalized to the geometric surface area of the sample, rather than the (unknown) exposed surface area, unless noted otherwise. For a discussion on the details of this normalization procedure, we refer to the experimental section. The stoichiometry was varied by adopting a super-cycle approach. These samples have been named CoPi-x, with x corresponding to the Co/P ratios of 1.4, 1.6, 1.7, and 1.9, as determined by XPS. Each sample was deposited using 600 ALD cycles. As a reference, Co 3 O 4 samples were also deposited using 600 ALD cycles. Co 3 O 4 specifically was chosen as a reference because it is known to not undergo an electrochemical activation process unlike other oxidized cobalt species like CoOOH or Co(OH) 2. 46 Crosssectional scanning electron microscopy (SEM) and spectroscopic ellipsometry measurements revealed that all CoPi films deposited on Si(100) had a thickness of 65 nm, while the thickness of the Co 3 O 4 film derived from spectroscopic ellipsometry measurements was 30 nm. A close inspection of the top-view SEM images of pristine FTO and FTO coated with CoPi-1.6 ( Figure S1) reveals smoothening of the sharp edges and corners of the FTO crystallites, but no appearance of new crystallites. This also holds for all other CoPi samples. Thus, we conclude that all as-deposited CoPi films, independent of their stoichiometry, conformally coat the FTO substrate. Grazing incidence X-ray diffraction (GIXRD) analysis was used to determine the crystal structure of the samples deposited on FTO, see Figure S2. The FTO substrate is crystalline and exhibits intense diffraction peaks. The GIXRD patterns of the CoPi samples are identical to that of the FTO substrate, indicating that all CoPi samples are amorphous or contain crystallites at a nanoscale insufficient to exhibit diffraction peaks. On the other hand, ALD Co 3 O 4 is expected to crystallize in a spinel structure 47 2p spectrum shown in Figure 1a reveals the presence of a single phosphorous species, associated with the phosphate unit. The O 1s spectrum (Figure 1b) of the CoPi sample is dominated by oxygen species incorporated in phosphate groups. Finally, due to strong spin−orbit coupling, the Co 2p spectrum (Figure 1c) is split into 2p 3/2 and 2p 1/2 contributions. The Co 2p 3/2 region consists of a primary peak at 781.5 eV followed by a broad satellite peak at 785 eV and the same pattern is repeated for the 2p 1/2 region. Furthermore, the energy difference between the primary Co 2p 3/2 and the primary Co 2p 1/2 peaks is 16.0 eV. Both this energy difference and the intense shake-up satellites are characteristic of Co 2+ species. 50 As the Co 2p region of all CoPi species has nominally the same spectral shape, we conclude that for all Co/P ratios in the range of 1.4 to 1.9, the cobalt atoms are primarily in the 2+ oxidation state. The spectra recorded for the Co 3 O 4 sample are again similar to what has been reported before. 
47 The O 1s spectra ( Figure 1b) consist of contributions from hydroxyl groups and oxygen atoms bound to cobalt atoms. The Co 2p spectra of the Co 3 O 4 sample ( Figure 1c) are spin-split, with an energy difference between Co 2p 3/2 and Co 2p 1/2 of 15.0 eV. Furthermore, the intense shake-up satellite observed in the CoPi samples has been replaced with a minor satellite at a binding energy of 790 eV. Together, these results point toward a spectrum dominated by Co 3+ species. 50 As ALD Co 3 O 4 adopts a spinel crystal structure, 47 which contains two Co 3+ species and one Co 2+ species per unit cell, a minor contribution from Co 2+ should be detected in the Co 2p spectra, but it is likely that this contribution is not well resolved as it overlaps with the more intense Co 3+ contribution. In order to support the XPS analysis and to obtain the absolute elemental concentration, Rutherford backscattering spectrometry (RBS) and elastic recoil detection (ERD) were employed to verify the elemental composition of CoPi films deposited on Si(100), see Table 1. These analyses confirm that the stoichiometry of the films can be tuned by ALD. Furthermore, they show that these samples have a negligible hydrogen content. Additionally, the densities of all CoPi films, obtained from the atomic loadings per unit of geometric surface area and the film thickness measured by SEM, slightly exceed that of crystalline bulk CoPi (3.8 g cm −3 ). 51 We assign this to a minor excess of cobalt and oxygen in these films compared to the ideal stoichiometry of Co 3 P 2 O 8 . A systematic difference is observed when comparing the Co/ P ratios obtained by RBS and XPS. We assign this difference to the occurrence of an Auger peak of cobalt, which, when using Al Kα radiation, appears at binding energies of ca. 10 eV below that of the Co 2p 3/2 peak. 52 The presence of this Auger signal results in significant uncertainty in the estimation of the background signal below the Co 2p peaks. The deviations between the Co/P ratios determined by RBS and XPS caused by this Auger peak are structural in nature, with the ratios derived from XPS being ca. 0.2 lower than those derived from RBS in all cases, see Figure S5. As the literature primarily reports XPS data, and generally no correction is made for this Auger peak, in the interest of consistency, we report Co/P ratios as determined by the XPS analysis in the remaining discussion. Electrochemical Activation of CoPi by Potential Cycling. A detailed study of electrocatalytic water oxidation has been performed in a 0.1 M potassium phosphate (KPi, pH = 8.0) electrolyte solution. As-prepared CoPi and Co 3 O 4 films and fresh FTO were taken as working electrodes without further treatment. Cyclic voltammetry (CV) curves were measured for all catalysts and are shown in Figure 2a. In order to compare the activities of the CoPi samples in their activated state, they all underwent 500 CV cycles prior to the measurement as shown in Figure 2. As noted before, FTO shows negligible activity toward OER, while the Co 3 O 4 film and all CoPi films show significant OER activity. At a potential of 1.8 V versus reversible hydrogen electrode (RHE), CoPi-1.6 displays the highest current density. Tafel analysis was performed to gain insight into the OER kinetics ( Figure 2b). The Tafel slope of CoPi-1.6 is comparable to the average Tafel slope (145 mV/dec) reported in the literature for CoPi-based OER catalysts operating under neutral conditions. 
14,53, 54 We note that this Tafel slope likely contains mass transfer contributions and that they could be improved using gas diffusion electrodes. Nevertheless, for the sake of comparison with the aforementioned literature, we have chosen to limit ourselves to conventional FTO substrates. The lower Tafel Atomic densities per unit of geometric surface area were measured by RBS and ERD. The stoichiometry of the samples was deduced from these areal densities (excluding H, as it is negligible). Densities were derived from the atomic areal densities determined by RBS and the film thicknesses were determined by spectroscopic ellipsometry (65 nm for the CoPi samples and 30 nm for Co 3 O 4 ). ACS Catalysis pubs.acs.org/acscatalysis Research Article slope of CoPi-1.6 with respect to the other CoPi films suggests a smaller activation energy and a fast reaction rate of the OER. Chronoamperometry shows that after activation, the OER performance of these samples shows acceptable stability, with CoPi-1.6 retaining 86% of its initial activity over 8 h ( Figure S6). Thus, CoPi-1.6 turns to be the best sample for OER in 0.1 M KPi, as indicated by a superior Tafel slope, high current density, and good stability. As such, CoPi-1.6 was selected for further investigation of the activation process. As highlighted in the introduction, CoPi-based electrocatalysts have been found to significantly improve their catalytic performance with an increasing number of CV cycles. To better understand the changes occurring in the film during the activation process, the evolution of the electrochemical properties of Co 3 O 4 and CoPi-1.6 was studied in depth as a function of the number of CV sweeps at a scan rate of 10 mV/ s, see Figure 3. Co 3 O 4 shows a moderate initial current density during the initial CV cycle, which declines slightly during successive CV cycles. On the other hand, CoPi-1.6 shows an activation process, which leads to an order of magnitude increase in the current density at 1.8 V versus RHE, going from 0.29 mA cm geo −2 at the first 1st CV cycle up to 4.5 mA cm geo −2 at the 200th CV cycle. Then, the current slightly decreases, reaching 3.9 mA cm geo −2 at the 500th cycle. Meanwhile, both oxidative and reductive noncatalytic waves appear between 1.2 and 1.5 V versus RHE. The areas of both noncatalytic waves also increase with CV sweeps. For activated CoPi catalysts, a close relation between the redox activity (indicated by the total amount of charge transferred during the noncatalytic wave per unit of geometric surface area) and the catalytic activity (indicated by the current density at 1.8 V vs RHE) has been reported previously. 18 The amounts of charge transferred per unit of geometric surface area during the oxidative and reductive noncatalytic waves are linearly correlated, and within experimental error, consistent with a slope of 1.0 (Figure 4a), indicating that the process is reversible. These noncatalytic waves have previously been assigned to the formation of Co 3+ species, 54,55 and the formation of Co 3+ in CoPi-1.6 after 500 CV cycles has also been confirmed by UV−vis spectroscopy (see Figure S7) and XPS (see Figure 5 and associated discussion). However, we did not observe the appearance of any new diffraction peaks in GIXRD measurements after activation ( Figure S8), indicating that no recrystallization is associated with the formation of these new Co 3+ species. 
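Returning briefly to the estimate above that roughly 22% of the Co atoms are redox-active: this follows from a simple charge balance in which the charge passed during the oxidative wave, converted to moles of electrons, is compared with the total areal density of Co atoms from RBS, assuming a one-electron Co2+ to Co3+ event per active atom. A sketch of this estimate is given below; the cobalt areal density is a placeholder to be taken from Table 2, not a value asserted here.

```python
FARADAY = 96485.3           # C per mole of electrons
AVOGADRO = 6.02214e23       # atoms per mole

def redox_active_fraction(charge_mC_cm2, co_areal_density_at_cm2):
    """Fraction of Co atoms oxidized during the non-catalytic wave,
    assuming a one-electron Co2+ -> Co3+ event per active Co atom."""
    electrons_per_cm2 = (charge_mC_cm2 * 1e-3 / FARADAY) * AVOGADRO
    return electrons_per_cm2 / co_areal_density_at_cm2

# Example with the 6.4 mC cm^-2 oxidative charge reported above and a
# placeholder Co areal density (insert the RBS value from Table 2):
co_density = 2.0e17          # atoms cm^-2, illustrative placeholder only
print(f"{redox_active_fraction(6.4, co_density):.0%}")
```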
As observed by González-Flores et al., 18 the amount of charge transferred during the oxidative wave is linearly correlated to the OER performance (Figure 4b). Co3+ species are thought to be the dominant catalytically active centers during the OER. 17,56 After activation, 6.4 mC cm−2 (geo) is transferred during the oxidative noncatalytic wave, which can be compared with the total Co content of the film determined by RBS (see Table 2 and the associated discussion). This means that the number of Co atoms that have become redox-active is as high as 22 ± 1% of the total number of Co atoms in the film. We note that this estimate could be lower by up to 50% if we also take into consideration the formation of Co4+ during the noncatalytic wave, but this will not substantially affect the following discussion. As previously addressed, the CoPi-1.6 film has a thickness of 65 nm and this film is found to be compact, smooth, and defect-free. If only the top 1 nm were accessible to the electrolyte, 1.5% of the Co atoms in the film would be redox-active. The facts that (i) 22% of the Co atoms are involved in the noncatalytic waves and (ii) the OER performance is proportional to the noncatalytic wave indicate that a significant fraction of the Co atoms in the bulk of the film has become accessible to the electrolyte and is catalytically active. The hypothesis that, in CoPi films, a significant fraction of the Co atoms below the outer geometric surface of the film is accessible to the electrolyte after activation has been proposed earlier in the literature in order to explain the fact that the activity of CoPi films increases with the film thickness. 22 As such, it can be expected that this activation process is accompanied by changes in the morphology of the CoPi film that significantly increase the exposed surface area of the film relative to its geometric surface area.

Chemical Changes in CoPi upon Activation. As highlighted in the introduction, activation of CoPi films is typically accompanied by the partial conversion of cobalt phosphate to layered cobalt oxide or hydroxide. However, it is unknown whether the chemical conversion occurs prior to the activation or develops in parallel with the activation process. In order to shed light on this, CoPi-1.6 films on FTO were subjected to several CV cycles and subsequently their chemical composition was analyzed by XPS. The resulting Co 2p, P 2p, and O 1s XPS spectra obtained after various numbers of CV cycles are shown in Figure 5a−c. From Figure 5a, we observe that the Co shake-up satellites at 785 and 802 eV (associated with Co2+ species) decrease as a function of the number of CV cycles, while the satellite peaks at 790 and 805 eV (associated with Co3+ species) increase in intensity. Furthermore, the separation between the Co 2p3/2 and Co 2p1/2 main peaks decreases from 16.0 to 15.0 eV, which also suggests a transition from Co2+ to Co3+. Simultaneously with these changes in the Co 2p spectra, the relative intensity of the P 2p spectra decreases (Figure 5b), indicating leaching of phosphate species. The O 1s spectra shown in Figure 5c shift toward lower binding energy, consistent with an increase in oxygen species bound to cobalt. In addition, a growing shoulder is visible on the high binding energy side, indicating the formation of additional hydroxyl groups. 57,58 Taken together, these findings suggest that the activation process is associated with the conversion of CoPi to Co3+-rich cobalt oxide or hydroxide. We find these findings to be universal for all CoPi samples, see Figures S9 and S10.
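A back-of-the-envelope check of the redox-active Co fraction quoted above can be made from the charge in the oxidative wave, assuming one electron per Co for the Co2+/Co3+ couple, together with the Co areal density from RBS. The RBS loading used below is an illustrative placeholder, since Table 2 is not reproduced here.

```python
# Sketch: fraction of redox-active Co atoms from the noncatalytic-wave charge.
# Assumes a one-electron Co2+/Co3+ process; the RBS Co loading is a placeholder.
Q = 6.4e-3            # C cm-2 (geo), charge in the oxidative noncatalytic wave
e = 1.602e-19         # C, elementary charge
n_Co_areal = 1.8e17   # atoms cm-2 (geo), placeholder Co areal density from RBS

fraction = (Q / e) / n_Co_areal
print(f"Redox-active Co fraction: {fraction:.0%}")   # ~22% for these inputs
```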
In order to study the relationship between these chemical changes and the electrochemical activity of these films, the Co/P and Co/O ratios obtained from XPS are compared with the current density obtained at 1.8 V versus RHE, see Figure 5d,e. The activation process correlates with a significant increase in the Co/P ratio and a decrease in the Co/O ratio. This is consistent with leaching of phosphate groups and the incorporation of additional oxygen into the films. When comparing the evolution of the Co/P ratio and the current density, we observe that these two trends are very similar: in both cases, we observe an increase and saturation after ca. 200 CV cycles. The trend for the Co/O ratio is mirrored, that is, it decreases quickly initially, until it also saturates after ca. 200 CV cycles. Referring back to Figure 5a, we also observe that after 200 cycles the changes in the Co 2p spectra are close to saturation, indicating that the change in the stoichiometry and the conversion of Co2+ species to Co3+ species are correlated. Thus, based on these results, we can conclude that the activation process that leads to an increase in the OER activity of these CoPi films occurs in parallel with the change in their chemical composition.

While these XPS measurements give compelling evidence for the conversion of CoPi to cobalt oxide or hydroxide in the near-surface region, they are not representative of the bulk of the film. Therefore, RBS measurements were performed before and after activation of a CoPi-1.6 film on FTO, see Table 2. By comparing the elemental concentrations of the as-deposited samples and the samples after 500 CV cycles, we observe a decrease in the phosphorus content of the entire film by approximately 52%, indicating that the loss of phosphorus is not limited to the near-surface region. The fact that XPS can still detect phosphorus suggests that P leaching occurs homogeneously throughout the whole CoPi thickness. Conversely, the amounts of cobalt before and after CV cycling differ by less than 2 standard deviations, indicating that cobalt leaching is within the error of the measurement and thus the change in the film composition can be ascribed purely to the loss of phosphate units.

Effect of pH on the Activation of CoPi. The preliminary conclusion that activation is associated with the loss of phosphate from CoPi suggests that quantitative leaching of phosphate may lead to a more active catalyst. Therefore, we investigated an alternative activation procedure for CoPi-1.6. A CoPi-1.6 sample was subjected to CV cycling in a 1 M KOH solution (pH = 14) prior to being transferred to a KPi buffer solution (pH = 8.0) for electrochemical characterization. During this alternative activation process, changes in the electrochemical properties of the sample occur much more rapidly than during activation in the KPi buffer, see Figure S11. While activation in KPi took over 200 CV cycles, the activity of CoPi-1.6 in KOH already saturated after 20 cycles. In the following text, we will refer to the sample activated by 100 CV cycles in 1 M KOH as CoPi-1.6 (act. pH 14), while the sample activated by 500 CV cycles in KPi buffer will be referred to as CoPi-1.6 (act. pH 8). Figure 6a shows the CV cycles in KPi buffer (pH = 8.0) of CoPi-1.6 (act. pH 8) and CoPi-1.6 (act. pH 14): the current density at 1.8 V versus RHE of CoPi-1.6 (act. pH 14) is higher than the current density attained by the film activated at pH 8.
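The Co/P and Co/O ratios tracked in Figure 5d,e are derived from XPS peak areas in the usual way, that is, by weighting each area with a relative sensitivity factor (RSF). The sketch below illustrates this conversion; the peak areas and RSF values are placeholders, since actual RSFs depend on the instrument and its transmission function.

```python
# Sketch: atomic ratio from XPS peak areas, n_i proportional to A_i / RSF_i.
# Areas and sensitivity factors below are illustrative placeholders only.
A_Co2p = 120000.0    # integrated Co 2p area after background subtraction (a.u.)
A_P2p = 4000.0       # integrated P 2p area (a.u.)
RSF_Co2p = 19.2      # placeholder relative sensitivity factor for Co 2p
RSF_P2p = 1.2        # placeholder relative sensitivity factor for P 2p

co_over_p = (A_Co2p / RSF_Co2p) / (A_P2p / RSF_P2p)
print(f"Co/P (XPS) = {co_over_p:.2f}")
```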
In addition to this, the area of the noncatalytic wave of CoPi-1.6 (act. pH 14) is significantly larger than that of CoPi-1.6 (act. pH 8). When integrating the noncatalytic wave, we find that 73 × 10^15 units of elementary charge are transferred per cm2 (geo) during the noncatalytic wave. Since the Co atomic concentration is fixed, we can conclude that 39 ± 1% of all Co atoms in CoPi-1.6 (act. pH 14) are redox-active, representing a significant increase over CoPi-1.6 (act. pH 8). XPS measurements were performed on pristine CoPi-1.6, CoPi-1.6 (act. pH 8), and CoPi-1.6 (act. pH 14), see Figure 6b−d for the detailed spectra and Figure S12 for an XPS survey spectrum of CoPi-1.6 (act. pH 14). In order to highlight the changes in the chemical environment, all spectra were normalized to the height of the Co 2p3/2 peak. Figure 6c shows that at pH 14, the phosphorus loss is complete. A comparison of the Co 2p spectra of CoPi-1.6 (act. pH 8) and CoPi-1.6 (act. pH 14) reveals that the intensity of the Co2+ satellite is higher for CoPi-1.6 (act. pH 8) than for CoPi-1.6 (act. pH 14), indicating that some Co2+ phosphate remains in CoPi-1.6 (act. pH 8) even after 500 CV cycles. A comparison of the O 1s regions shows a shift of the spectra to lower binding energies when going from pristine CoPi-1.6 to CoPi-1.6 (act. pH 8) and then to CoPi-1.6 (act. pH 14). We assign this to the progressive loss of the phosphate-related oxygen peak around 531.5 eV and the increase of the cobalt oxide-related O 1s peak at 530.0 eV.

Structural Modifications of CoPi upon Activation. To investigate any structural change associated with the activation process, the activated CoPi-1.6 film (after 500 cycles) was studied by SEM; the corresponding SEM images are shown in Figure 7. Large micrometer-sized cracks appeared on the surface (Figure 7b), in sharp contrast with the morphology of pristine CoPi-1.6 (Figure 7a), which is crack-free and closely follows the FTO crystallites. We expect that these cracks are most likely formed upon drying of the film after removal from solution and are therefore not representative of the morphology of the sample during operation. 8,59 In the uncracked regions, a significant change of microstructure can be observed: the sharp edges of the FTO crystallites present before OER are completely absent after OER and close-packed, smoothed features are observed, see Figure 7c. This smoothing of the film suggests that significant restructuring of the initially highly conformal film occurs upon CV cycling. Furthermore, high-magnification SEM reveals that these features contain stacked, poorly ordered platelets (Figure 7d). We tentatively assign the appearance of these platelets to the formation of cobalt oxide or hydroxide nanosheets, which have been identified previously by EXAFS as the dominant phase in activated electrodeposited CoPi catalysts. 18−21

Influence of the ECSA on the Activation Process. In our previous study, 45 we pointed out the relationship between the Co/P ratio of the pristine samples and their electrochemical activity. However, as shown here, the composition of CoPi-1.6 changes significantly after activation by 500 CV cycles and, as a matter of fact, Figure S13 shows that there is no universal correlation between the stoichiometry of the layers after activation and their electrochemical activity. In addition, as shown above, the activation process leads to a significant enhancement of the fraction of Co atoms accessible to the electrolyte and to restructuring of the film.
As such, while the initial stoichiometry undoubtedly determines the catalytic activity of the samples after activation, the relationship is not direct and there must be an alternative figure of merit that reflects the difference among samples. The morphological changes that the CoPi films undergo upon activation can be expected to lead to a significant enhancement of the surface area exposed to the electrolyte for the same amount of geometric surface area immersed in the electrolyte. As such, we characterized the ECSA of the samples from their double-layer capacitance. 60 When reporting the current density as a function of ECSA (Figure 8), a close relationship between these two parameters is observed. In particular, all samples prior to activation show an ECSA within one order of magnitude of the geometric surface area of the sample, and a low current density as well. After activation, the current density of all samples increased significantly and at the same time the ECSA increased, thus confirming that the activation process leads to a restructuring of the film that makes a significant part of the bulk of the film accessible to the electrolyte. Furthermore, Figure 8 shows that for the best performing sample after activation, that is, CoPi-1.6, the ECSA increases by a factor of 30 upon activation in the pH 8.0 KPi buffer and by a factor of 40 upon activation in the pH 14 KOH solution. Conversely, for the poorest performing sample after activation, CoPi-1.9, the ECSA only increases by a factor of 3.6 upon activation in the pH 8.0 KPi buffer. For all activated samples, including the sample activated in the pH 14 KOH solution, we observe a linear relationship between the ECSA and the OER activity, as indicated by the straight-line fit obtained from all activated CoPi samples (note that this fit was forced to yield a current density of 0 mA cm−2 (geo) for an ECSA of 0 cm2). As such, we can conclude that the activity per unit of ECSA for all activated CoPi samples considered here is roughly equal and that the differences between samples are governed by differences in the surface area exposed to the electrolyte. While we observe a deviation from this linear behavior for the as-deposited CoPi samples and the Co3O4 sample (see also Figure S16 for the activity per unit of ECSA), we note that the current density for these samples was derived from the first CV cycle (potentially including some activation), while the ECSA was measured prior to activation. In addition, the relationship between C_dl and ECSA may change depending on the composition of the solid. 61 The compositions of all the activated samples are sufficiently similar to assume a linear correlation between C_dl and ECSA, but there is a significant difference in composition between the as-deposited samples and the activated samples. As such, we cannot say whether this deviation represents a difference in the activity per unit of ECSA before and after activation or merely a deviation in the double-layer capacitance per unit of ECSA. Nevertheless, the difference in activity per unit of ECSA among all these samples is far smaller than the typical variation in ECSA found when comparing electrocatalysts prepared using different synthesis methods. In addition, Bergmann et al. 46 have shown that the typical variation in activity per unit of ECSA of Co3O4, CoOOH, and Co(OH)2 is relatively small as well.
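The two operations referred to above, converting C_dl into an ECSA and fitting activity versus ECSA with the intercept forced through zero, are sketched below. The specific capacitance C_s and all data points are assumed, literature-style placeholder values rather than the measured ones.

```python
# Sketch: ECSA from the double-layer capacitance (ECSA = C_dl / C_s) and a
# zero-intercept least-squares fit of activity vs ECSA. All values are placeholders.
import numpy as np

C_s = 0.040                                  # mF per cm2 of real surface, assumed
C_dl = np.array([1.2, 12.0, 36.0, 48.0])     # mF, hypothetical capacitances
j_1p8V = np.array([0.3, 1.1, 3.9, 4.8])      # mA cm-2 (geo), hypothetical activities

ecsa = C_dl / C_s                            # cm2 of electrochemically active area
# Slope of a line through the origin: k = sum(x*y) / sum(x*x)
k = np.sum(ecsa * j_1p8V) / np.sum(ecsa ** 2)
print("ECSA (cm2):", ecsa)
print(f"Activity per unit ECSA: {k * 1e3:.1f} uA per cm2 of ECSA")
```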
As such, we expect that variations in ECSA typically play a much larger role than variations in the activity per unit of ECSA in determining the activity of cobalt phosphate-, oxide-, and hydroxide-based electrocatalysts.

Nature of the CoPi Activation Process. A comparison of CoPi catalysts with different initial stoichiometries shows that the degree to which their electrochemical activity increases upon activation depends strongly on the initial stoichiometry. Simultaneously, during this activation process, changes in the stoichiometry and ECSA of the films are observed. A higher initial Co/P ratio leads to more extensive leaching of phosphorus from the film, but this does not yield a more active film. Instead, we hypothesize that the activation process involves a restructuring process that simultaneously leads to the loss of phosphate and a redistribution of the material across the sample. The fact that the ECSA of these films increases by up to a factor of 40 and that over 20% of all Co atoms are redox-active after activation strongly indicates that the activation process proceeds in parallel with modification of the CoPi bulk. The latter makes the film more accessible to infiltration by the electrolyte. The activity of all CoPi catalysts after this restructuring process is found to be directly correlated to the ECSA. This indicates that the activity per unit of ECSA of the material formed during this restructuring process is quite insensitive to the details of the initial stoichiometry or the activation procedure, similar to what was noted by Bergmann et al. for a range of cobalt oxides and hydroxides. 46 However, different pristine chemical compositions lead to different ECSAs after activation, and the ECSA eventually determines the OER activity of the CoPi film.

As shown in a recent review by Jiang et al., 62 electrochemical activation processes are, in general, sensitive to the experimental conditions accompanying the activation process. For several systems, activation at a strongly alkaline pH leads to a more active catalyst. Here, we also observe a strong effect of pH on the activation of CoPi. While activation of CoPi-1.6 at pH 8 leads to an increase in the ECSA by a factor of 30, performing the activation at pH 14 raises this to a factor of 40. We assign this to a more efficient removal of phosphate groups from the bulk of the material and a more complete conversion to cobalt oxide or hydroxide. As the removal of phosphate groups and the restructuring of the layer are likely also dependent on parameters such as temperature, the potential range and sweep rate of the CV scans, or additives to the buffer solution, we speculate that it might be possible to obtain further enhancement by optimizing the activation conditions.

(Figure 8 caption: Relationship between the current density at 1.8 V vs RHE in a pH 8.0 KPi buffer solution and the ECSA for samples with different initial Co/P ratios. The symbols represent experimental data points, with error bars indicating the standard error in the current density and ECSA; for some data points, error bars are smaller than the symbols. The line indicates a linear fit to the post-activation experimental data points; for details, see text. For samples marked as-dep, the ECSA was obtained prior to any CV cycling and the current density at 1.8 V vs RHE was obtained from the first CV cycle. Samples marked with act.)
Further work will be needed to elucidate the relationship between the activation conditions and the attained ECSA.

■ CONCLUSIONS

In summary, we have studied the electrocatalytic activity toward the OER in neutral media of ALD-prepared CoPi films. The ALD approach allows us to tune the stoichiometries of these CoPi films, which strongly affects the activity of these electrocatalysts after activation. In particular, we find that CoPi films with a Co/P ratio of 1.6 deliver the best OER performance, as demonstrated by a high current density (3.95 mA cm−2 (geo) at 1.8 V vs RHE) and a Tafel slope of 155 mV dec−1, which is comparable to other high-performing CoPi-based systems. As films with other Co/P ratios were found to have significantly worse performance, this proves that tuning of the stoichiometry is a valuable tool for obtaining high-performance CoPi-based electrocatalysts.

We find that all ALD-prepared CoPi films undergo an activation process, which is accompanied by a change in chemical composition. During catalyst activation by CV, a progressive increase in the OER current density and a noncatalytic wave associated with the conversion of Co2+ to Co3+ are observed. The amount of charge transferred during this noncatalytic wave reveals that during activation, up to 39% of all Co atoms in the film become redox-active. This indicates that a significant part of the bulk of these films becomes accessible to the electrolyte as a result of the activation process. XPS measurements during the activation process show loss of phosphorus from the film and a progressive conversion from CoPi to cobalt oxide or hydroxide containing primarily Co3+ species. This compositional change occurs concurrently with the electrochemical changes highlighted earlier. As similar activation processes have been observed for CoPi-based catalysts deposited hydrothermally, this suggests that for these systems, phosphorus loss is intrinsically linked to the activation process.

We explain these observations by noting that the activation process leads to restructuring of the film, which increases its ECSA by up to a factor of 40. The intrinsic activity per unit of ECSA is the same for all activated samples, independent of their preactivation composition. However, differences in the initial stoichiometry lead to a different final ECSA, which is the deciding factor for the overall activity of these films. We argue that this conclusion does not only hold for ALD CoPi films. The phosphate groups in CoPi are a sacrificial component and they can, in principle, be replaced by any other dissolvable species. In fact, in the literature, a wide range of other dissolvable anions has been employed, such as borate 9 or tungstate, 63 and oxidation-induced dissolution has been demonstrated for phosphide- and chalcogenide-based OER catalysts as well. 64 For these films too, we expect a restructuring process to take place, and thus the insights described above are likely of general applicability to this whole class of materials. Finally, we note that the relationship between the ECSA and the activity of CoPi films could only be unraveled through rational control of the film chemical composition. Our results indicate that the ability of ALD to rationally tune the composition of electrocatalysts allows us to gain insight into their activation mechanism and performance.

■ EXPERIMENTAL SECTION

Materials and ALD Process. Cobalt phosphate and cobalt oxide thin films were deposited using a home-built plasma-enhanced ALD reactor.
65 The pumping system, consisting of a turbomolecular pump connected to a rotary vane pump, is capable of reaching a base pressure of <1 × 10−6 mbar. The reactor is equipped with a remote inductively coupled plasma (ICP) source with a power supply operating at 13.56 MHz. During the deposition, the walls of the chamber were heated to 100 °C, while the substrate holder, suitable for fitting a 100 mm-diameter substrate, was heated to 300 °C. Cobaltocene (CoCp2, 98% purity) and trimethyl phosphate (TMP, (CH3O)3PO, 97% purity), both purchased from Sigma-Aldrich, were selected as precursors for the process. CoCp2 was contained in a stainless steel cylindrical bubbler heated to 80 °C. Argon gas (>99.999% purity) was used to carry the CoCp2 vapor from the bubbler to the reactor through a line heated to 100 °C. TMP was vapor-drawn to the chamber; the bubbler containing it was heated to 50 °C and the line to the reactor to 70 °C. For the O2 plasma, used as a reactant in the process, O2 gas (>99.999% purity) was allowed to flow through the plasma source for 4 s to stabilize the pressure at 8.0 × 10−3 mbar, after which the plasma was ignited by providing 100 W of power to the ICP source.

Sample Preparation. Samples with varying stoichiometries were prepared using the approach outlined in our previous work, 45 see Scheme 1. In short, we employed a recipe for the deposition of Co3O4 first developed by Donders et al. 47 and a recipe for the deposition of CoPi developed by Di Palma et al. 66 In order to obtain CoPi films with varying stoichiometries, these two processes were combined in a super-cycle approach. The ALD process for Co3O4 consists of two half-cycles: the first half-cycle of the process is a 2 s CoCp2 precursor dosing step, while the second half-cycle consists of 5 s of O2 plasma exposure. The ALD process for CoPi consists of four half-cycles: the first two half-cycles are the same as in the Co3O4 ALD process, the third half-cycle consists of a 0.6 s TMP precursor dosing step, and the last half-cycle is a 2 s O2 plasma exposure step. For the super-cycle processes, each super-cycle consisted of n CoPi deposition cycles followed by one Co3O4 deposition cycle. Samples deposited with n equal to 23, 11, and 5 are referred to as CoPi-1.6, CoPi-1.7, and CoPi-1.9, respectively, with the name indicating the Co/P ratio.

ALD Film Characterization. The thickness and the dielectric function of the films were monitored during the ALD process by in situ spectroscopic ellipsometry with a J.A. Woollam, Inc. M2000U ellipsometer. Data were recorded within a spectral range between 1.38 and 4.13 eV. A Cauchy dispersion model was utilized to model the CoPi samples, and an optical model employing a Gaussian, a Tauc−Lorentz, and a Lorentz oscillator was used for the Co3O4 samples. ERD and RBS measurements were performed using a 2000 keV He+ beam at a 10° incidence angle with respect to the sample surface, and recoiled atoms were collected at a 25° scattering angle. As the FTO substrate is textured, atomic densities derived from RBS are calculated based on the footprint of the beam rather than the exposed surface area of the sample. X-ray diffraction (XRD) was performed using a PANalytical X'Pert Pro MRD X-ray diffractometer with Cu Kα radiation (λ = 1.540598 Å) in the 2θ range of 20−70° at a scanning rate of 1.5° min−1. Reference XRD patterns were obtained from the International Crystal Structure Database.
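The super-cycle scheme described above can be pictured with a simple mixing model: each CoPi cycle is assumed to deposit Co and P in the fixed ratio of the pure CoPi process, while each Co3O4 cycle adds only Co. The per-cycle contributions below are placeholders, not measured growth-per-cycle values, so the output should be read only as a qualitative illustration of why larger n gives a Co/P ratio closer to that of pure CoPi.

```python
# Toy model of the super-cycle composition: n CoPi cycles + 1 Co3O4 cycle.
# Per-cycle Co and P contributions are placeholders, not measured GPC values.
def co_p_ratio(n, co_per_copi=1.5, p_per_copi=1.0, co_per_co3o4=2.5):
    """Estimated Co/P ratio for one super-cycle of n CoPi cycles + 1 Co3O4 cycle."""
    co = n * co_per_copi + co_per_co3o4
    p = n * p_per_copi
    return co / p

for n in (23, 11, 5):
    print(f"n = {n:2d}: estimated Co/P = {co_p_ratio(n):.2f}")
```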
67 XPS was performed using a Thermo Scientific K-Alpha system, equipped with a monochromatic Al Kα X-ray source, and the samples were analyzed without further treatment after placement into the UHV system. The surface morphologies of the samples were investigated using a field-emission scanning electron microscope (SEM, Zeiss Sigma, Germany). In order to obtain meaningful SEM results at 1,000,000 times magnification despite drift of the sample and focusing optics, 200 individual images were recorded at 100 ms per image. These images were corrected using the ImageJ plugin TurboReg, which implements a drift correction algorithm developed by Thevenaz et al. 68 Subsequently, these images were averaged to obtain a single image with adequate statistics.

OER Measurements. The catalytic activity of the CoPi and Co3O4 samples was tested in a 0.1 M (pH 8.0) phosphate buffer solution (K2HPO4/KH2PO4, KPi) using a single-compartment three-electrode electrochemical cell. Activation in 1 M KOH (pH 14) was performed using the same configuration. CoPi films and Co3O4 films on FTO were used as working electrodes. A high-surface-area Pt mesh was used as a counter electrode, and an Ag/AgCl (saturated) reference electrode was employed. As noted in the main body of the text, the actual area of the samples exposed to the electrolyte is unknown due to the texture of the FTO electrodes. As such, current densities were normalized to the geometric surface area of the substrate immersed into the electrolyte (1 cm2), which was derived from the lengths of the sides of the triangular section of the substrate in contact with the liquid phase. This introduced an uncertainty of ca. 5% in the immersed surface area, which has been taken into account in determining the uncertainties in the current density and derived quantities. As the use of a Pt counter electrode can lead to deposition of Pt on the sample after repeated potential cycling, XPS measurements were performed after the electrochemical analysis. These showed that even after 500 CV cycles, Pt concentrations on the samples were below the detection limit. The electrochemical characterization was carried out using a CompactStat (Ivium) potentiostat. Electrochemical properties were evaluated by CV, chronoamperometry (i−t), and ECSA determination. All CV curves were obtained at a scan rate of 10 mV s−1 and corrected with 80% iR-compensation. The measured potential was converted to the potential relative to the RHE using the formula E_RHE = E_Ag/AgCl + 0.197 V + (0.059 V) × pH. Standard errors in the current were estimated from the current densities at 1.8 V vs RHE obtained between 400 and 500 CV cycles, after subtraction of a linear background to account for the structural loss in activity. Standard errors were found to be less than 1.5% of the absolute current density in all cases. As such, the uncertainty in the current density is effectively determined by the uncertainty in the area immersed in the electrolyte, and this has been used to estimate the uncertainties in all quantities derived from CV measurements here. The ECSA for each system was estimated from the electrochemical double-layer capacitance of the catalytic surface (C_dl). 61 The electrochemical capacitance was determined by measuring the nonfaradaic capacitive current associated with double-layer charging from the scan-rate dependence of CVs. The CV curves of the samples were measured within a nonfaradaic potential range (0.92 to 1.02 V vs RHE) at various scan rates (5 to 30 mV s−1).
Uncertainties in C_dl were estimated from the standard error of the slope obtained this way, without considering the uncertainty in the integrated currents. This standard error was combined with the uncertainties in the immersed geometric area to estimate the total uncertainty in the double-layer capacitance. As the samples were removed and re-immersed between the CV and C_dl measurements, the uncertainties in the current density and ECSA were uncorrelated.
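For completeness, the two data-reduction steps described above, converting measured Ag/AgCl potentials to the RHE scale with the quoted formula and extracting C_dl from the scan-rate dependence of the capacitive current, are sketched below with placeholder data.

```python
# Sketch: (i) Ag/AgCl -> RHE conversion with the formula quoted in the text,
# and (ii) C_dl from the slope of capacitive current vs scan rate (i_c = C_dl * v).
# All numerical inputs are illustrative placeholders.
import numpy as np

def to_rhe(e_ag_agcl, ph):
    """E(RHE) = E(Ag/AgCl) + 0.197 V + 0.059 V * pH."""
    return e_ag_agcl + 0.197 + 0.059 * ph

print(f"1.000 V vs Ag/AgCl at pH 8.0 -> {to_rhe(1.000, 8.0):.3f} V vs RHE")

scan_rates = np.array([5, 10, 15, 20, 25, 30]) * 1e-3     # V/s
i_cap = np.array([0.05, 0.10, 0.16, 0.21, 0.25, 0.31])    # mA, hypothetical capacitive currents
C_dl, _ = np.polyfit(scan_rates, i_cap, 1)                # slope in mA/(V/s) = mF
print(f"C_dl = {C_dl:.1f} mF")
```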